Commit `1024f2b3db` ("sample init") on `master` by 박성영, 3 weeks ago.

---

.gitignore
HELP.md
.gradle
build/
!gradle/wrapper/gradle-wrapper.jar
!**/src/main/**/build/
!**/src/test/**/build/
### STS ###
.apt_generated
.classpath
.factorypath
.project
.settings
.springBeans
.sts4-cache
bin/
!**/src/main/**/bin/
!**/src/test/**/bin/
### IntelliJ IDEA ###
.idea
*.iws
*.iml
*.ipr
out/
!**/src/main/**/out/
!**/src/test/**/out/
### NetBeans ###
/nbproject/private/
/nbbuild/
/dist/
/nbdist/
/.nb-gradle/
### VS Code ###
.vscode/
### Mac OS ###
.DS_Store
### Logs ###
logs/
*.log
### Temp files ###
*.tmp
*.temp
### Gradle Properties (security) ###
gradle.properties
/gradle/wrapper/gradle-wrapper.jar
/gradle/wrapper/gradle-wrapper.properties
/.claude/settings.local.json
/gradlew.bat

---

NEXUS_LOCAL_SETUP.md
# Local Nexus Test Environment Guide
A guide to running and testing Nexus Repository Manager locally with Docker.
## Prerequisites
- Docker Desktop (Windows/Mac) or Docker Engine (Linux)
- Docker Compose
## Step 1: Start Nexus with Docker Compose
### 1-1. Start Nexus and MariaDB
```bash
# Start Nexus and MariaDB together
docker-compose -f docker-compose-nexus.yml up -d
# Follow the logs
docker-compose -f docker-compose-nexus.yml logs -f nexus
```
### 1-2. Retrieve the initial admin password
A temporary password is generated the first time Nexus starts (allow 2-3 minutes for startup):
```bash
# Same command on Windows (PowerShell), Linux, and Mac
docker exec nexus cat /nexus-data/admin.password
```
**Example output:**
```
a4b7c2d9-e3f1-4a5b-8c6d-1e2f3a4b5c6d
```
## Step 2: Open the Nexus Web UI and Complete Initial Setup
### 2-1. Open in a browser
```
http://localhost:8081
```
### 2-2. Log in
- Username: `admin`
- Password: `(the temporary password from Step 1)`
### 2-3. Initial setup wizard
1. **Set a new password**: `admin123` (or any password you prefer)
2. **Anonymous Access**: select `Enable anonymous access`
3. Click **Finish**
## Step 3: Maven Repository Setup
### 3-1. Verify the Maven Central proxy
The following repositories exist out of the box:
- **maven-central**: Maven Central proxy
- **maven-releases**: hosted repository (releases)
- **maven-snapshots**: hosted repository (snapshots)
- **maven-public**: group repository (aggregates the three above)
### 3-2. Verify repository access
In the web UI:
1. Left menu → **Browse**
2. Select **maven-public**
3. Search for a library to confirm access
## Step 4: Project Configuration
### 4-1. Create gradle.properties
```properties
# Local Nexus settings
nexusUrl=http://localhost:8081
nexusUsername=admin
nexusPassword=admin123
# Repository URL (written out in full: .properties files do not expand ${...})
nexusMavenPublic=http://localhost:8081/repository/maven-public/
```
### 4-2. Edit build.gradle
```gradle
repositories {
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
        allowInsecureProtocol = true // required when using HTTP
    }
}
```
### 4-3. Edit settings.gradle (optional)
```gradle
pluginManagement {
    repositories {
        maven {
            url = "http://localhost:8081/repository/maven-public/"
            credentials {
                username = "admin"
                password = "admin123"
            }
            allowInsecureProtocol = true
        }
    }
}
```
## Step 5: Run a Build Test
```bash
# Clean previous build output
gradlew.bat clean
# Build (downloads dependencies)
gradlew.bat build --refresh-dependencies
```
On the first build, Nexus downloads the libraries from Maven Central and caches them.
## Step 6: Verify the Cache in Nexus
### 6-1. In the web UI
1. Browse → **maven-central**
2. Confirm the downloaded libraries (e.g. org/springframework/boot/)
### 6-2. Cache statistics
- **Administration** → **System** → **Nodes**
- Check the blob store sizes
## Step 7: Publish an Internal Library (Optional)
### 7-1. Add publishing configuration to build.gradle
```gradle
publishing {
    publications {
        maven(MavenPublication) {
            from components.java
            groupId = 'com.example'
            artifactId = 'springbatch-test'
            version = '1.0.0'
        }
    }
    repositories {
        maven {
            name = 'nexus'
            url = "${nexusUrl}/repository/maven-releases/"
            credentials {
                username = "${nexusUsername}"
                password = "${nexusPassword}"
            }
            allowInsecureProtocol = true
        }
    }
}
```
### 7-2. Publish
```bash
gradlew.bat publish
```
### 7-3. Verify in Nexus
Browse → **maven-releases** → com/example/springbatch-test
## Docker Command Reference
### Managing Nexus
```bash
# Start
docker-compose -f docker-compose-nexus.yml up -d
# Stop
docker-compose -f docker-compose-nexus.yml stop
# Restart
docker-compose -f docker-compose-nexus.yml restart nexus
# Follow the logs
docker-compose -f docker-compose-nexus.yml logs -f nexus
# Remove completely (including data)
docker-compose -f docker-compose-nexus.yml down -v
```
### Managing MariaDB
```bash
# Open a MariaDB shell
docker exec -it batch-mariadb mysql -u batch_user -p
# List the databases
docker exec batch-mariadb mysql -u batch_user -pbatch_password -e "SHOW DATABASES;"
```
## Test Scenarios
### Scenario 1: Dependency caching
1. Build the project for the first time
2. Open maven-central in the Nexus web UI
3. Confirm the cached libraries
4. Confirm the second build is faster
### Scenario 2: Closed-network simulation
1. Cache the dependencies with a first build
2. Disconnect from the internet (Wi-Fi off)
3. Clean the project
4. Build again → it should succeed from the Nexus cache
### Scenario 3: Internal library publishing
1. Build the project
2. Publish to Nexus
3. Add it as a dependency in another project
4. Confirm it downloads correctly
## Resource Usage
### Default memory allocation
- Nexus: 1GB heap + 2GB direct memory (about 3GB total)
- MariaDB: 256MB
### Increasing memory (if needed)
Edit docker-compose-nexus.yml:
```yaml
environment:
  - INSTALL4J_ADD_VM_PARAMS=-Xms2g -Xmx2g -XX:MaxDirectMemorySize=4g
```
## Troubleshooting
### Nexus fails to start
```bash
# Check the logs
docker logs nexus
# Check for a port conflict (Windows)
netstat -ano | findstr :8081
# Restart
docker-compose -f docker-compose-nexus.yml restart nexus
```
### Resetting the password
```bash
# Stop the container
docker-compose -f docker-compose-nexus.yml stop nexus
# Delete the data volume (destroys ALL data!)
docker volume rm springbatch-test_nexus-data
# Start again
docker-compose -f docker-compose-nexus.yml up -d nexus
```
### Build failures
```cmd
# Clear the Gradle caches (Windows)
gradlew.bat clean --no-daemon
rmdir /s /q "%USERPROFILE%\.gradle\caches"
# Refresh the dependencies
gradlew.bat build --refresh-dependencies
```
## Advanced Nexus Configuration
### 1. Gradle Plugin Portal proxy
**Administration → Repository → Repositories → Create repository**
- Type: `maven2 (proxy)`
- Name: `gradle-plugins`
- Remote storage: `https://plugins.gradle.org/m2/`
### 2. Disk cleanup
**Administration → Tasks → Create task**
- Type: `Admin - Compact blob store`
- Blob store: `default`
- Schedule: Daily
### 3. Disabling anonymous access
**Administration → Security → Anonymous Access**
- Uncheck `Allow anonymous users to access the server`
## Migrating to a Production Environment
After local testing is complete, switch to the real Nexus server:
### Edit gradle.properties
```properties
# Local Nexus (development/testing)
# nexusUrl=http://localhost:8081
# Real Nexus (production)
nexusUrl=http://nexus.company.com:8081
nexusUsername=your-username
nexusPassword=your-password
```
### Using HTTPS
```gradle
maven {
    url = "https://nexus.company.com/repository/maven-public/"
    credentials {
        username = "${nexusUsername}"
        password = "${nexusPassword}"
    }
    // allowInsecureProtocol = false (the default)
}
```
## References
- [Nexus Repository Manager Documentation](https://help.sonatype.com/repomanager3)
- [Docker Hub - Sonatype Nexus3](https://hub.docker.com/r/sonatype/nexus3)
## Next Steps
1. [ ] Run Nexus with Docker
2. [ ] Open the Nexus web UI and complete initial setup
3. [ ] Run a project build test
4. [ ] Verify dependency caching
5. [ ] Test publishing an internal library
6. [ ] Migrate to the real Nexus server

---

NEXUS_SETUP.md
# Nexus Repository Setup Guide (Closed-Network Environments)
A guide to configuring an internal Nexus Repository Manager for closed-network (air-gapped) environments.
## Table of Contents
1. [What is Nexus Repository?](#what-is-nexus-repository)
2. [Setup Methods](#setup-methods)
3. [Per-Project Configuration](#per-project-configuration)
4. [Global Configuration](#global-configuration)
5. [Managing Credentials](#managing-credentials)
6. [Troubleshooting](#troubleshooting)
## What is Nexus Repository?
Nexus Repository Manager is a repository manager that caches and manages the libraries used by build tools such as Maven and Gradle.
### Why use Nexus in a closed network?
- Manage libraries in environments without external internet access
- Faster library downloads (caching)
- Security policy compliance (only approved libraries are used)
- Publishing internally developed libraries
## Setup Methods
### Method 1: Per-project configuration (recommended)
Configure Nexus for a single project.
#### 1-1. Create a gradle.properties file
```bash
# Linux/Mac: copy gradle.properties.example
cp gradle.properties.example gradle.properties
# Windows
copy gradle.properties.example gradle.properties
```
#### 1-2. Edit gradle.properties
```properties
# Nexus server details
nexusUrl=http://nexus.your-company.com:8081
nexusUsername=your-username
nexusPassword=your-password
```
#### 1-3. Edit build.gradle
In the repositories section of `build.gradle`:
```gradle
repositories {
    // Closed network: use Nexus
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
        allowInsecureProtocol = true // when using HTTP
    }
    // With internet access: uncomment
    // mavenCentral()
}
```
### Method 2: Global configuration
Apply the Nexus settings to every Gradle project on the machine.
#### 2-1. Create an init.gradle file
**Windows:**
```cmd
mkdir %USERPROFILE%\.gradle
copy init.gradle.example %USERPROFILE%\.gradle\init.gradle
```
**Linux/Mac:**
```bash
mkdir -p ~/.gradle
cp init.gradle.example ~/.gradle/init.gradle
```
#### 2-2. Edit init.gradle
```gradle
allprojects {
    repositories {
        maven {
            url 'http://nexus.your-company.com:8081/repository/maven-public/'
            credentials {
                username 'your-username'
                password 'your-password'
            }
        }
    }
}
```
#### 2-3. Verify it is applied
```bash
gradlew.bat dependencies --refresh-dependencies
```
## Per-Project Configuration
### 1. Configuration via gradle.properties
**Advantages:**
- Each project can point at a different Nexus server
- Can be excluded from Git (.gitignore)
- Easy per-environment configuration
**gradle.properties:**
```properties
nexusUrl=http://nexus.company.com:8081
nexusUsername=developer
nexusPassword=secret123
# Repository URLs (written out in full: .properties files do not expand ${...})
nexusMavenPublic=http://nexus.company.com:8081/repository/maven-public/
nexusMavenReleases=http://nexus.company.com:8081/repository/maven-releases/
nexusMavenSnapshots=http://nexus.company.com:8081/repository/maven-snapshots/
```
**build.gradle:**
```gradle
repositories {
    maven {
        url = "${nexusMavenPublic}"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
    }
}
```
### 2. settings.gradle configuration
Make the plugin repositories use Nexus as well:
```gradle
pluginManagement {
    repositories {
        maven {
            url = "${nexusUrl}/repository/gradle-plugins/"
            credentials {
                username = "${nexusUsername}"
                password = "${nexusPassword}"
            }
            allowInsecureProtocol = true
        }
    }
}
```
## Global Configuration
### 1. Using init.gradle
Applied automatically to every Gradle project.
**Location:**
- Windows: `%USERPROFILE%\.gradle\init.gradle`
- Linux/Mac: `~/.gradle/init.gradle`
**Example:**
```gradle
allprojects {
    repositories {
        // Drop any Maven Central / JCenter repositories the build declares
        all { ArtifactRepository repo ->
            if (repo instanceof MavenArtifactRepository) {
                def url = repo.url.toString()
                if (url.startsWith('https://repo.maven.apache.org') ||
                    url.startsWith('https://jcenter')) {
                    remove repo
                }
            }
        }
        maven {
            url 'http://nexus.company.com:8081/repository/maven-public/'
            credentials {
                username 'nexus-user'
                password 'nexus-pass'
            }
            allowInsecureProtocol = true
        }
    }
}
```
### 2. Global gradle.properties
**Location:**
- Windows: `%USERPROFILE%\.gradle\gradle.properties`
- Linux/Mac: `~/.gradle/gradle.properties`
```properties
nexusUrl=http://nexus.company.com:8081
nexusUsername=your-username
nexusPassword=your-password
```
## Managing Credentials
### 1. Using gradle.properties (recommended)
```properties
# Keep out of version control (add the file to .gitignore)
nexusUsername=username
nexusPassword=password
```
**Add to .gitignore:**
```
gradle.properties
```
### 2. Using environment variables
**build.gradle:**
```gradle
repositories {
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials {
            username = System.getenv("NEXUS_USERNAME")
            password = System.getenv("NEXUS_PASSWORD")
        }
    }
}
```
**Setting the variables:**
Windows:
```cmd
set NEXUS_USERNAME=your-username
set NEXUS_PASSWORD=your-password
```
Linux/Mac:
```bash
export NEXUS_USERNAME=your-username
export NEXUS_PASSWORD=your-password
```
### 3. Using the Gradle Credentials plugin
For more advanced credential management:
```gradle
plugins {
    id 'nu.studer.credentials' version '3.0'
}

repositories {
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials(PasswordCredentials) {
            username = credentials.nexusUsername
            password = credentials.nexusPassword
        }
    }
}
```
## Nexus Repository Layout
A typical Nexus repository layout:
### 1. Maven Public (group repository)
A group that aggregates all the Maven repositories:
```
URL: http://nexus.company.com:8081/repository/maven-public/
```
**Member repositories:**
- maven-central (proxy)
- maven-releases (hosted)
- maven-snapshots (hosted)
### 2. Maven Central proxy
A proxy that caches Maven Central:
```
URL: http://nexus.company.com:8081/repository/maven-central/
```
### 3. Maven Releases
Internal release libraries:
```
URL: http://nexus.company.com:8081/repository/maven-releases/
```
### 4. Maven Snapshots
Internal snapshot libraries:
```
URL: http://nexus.company.com:8081/repository/maven-snapshots/
```
### 5. Gradle Plugins
Repository for Gradle plugins:
```
URL: http://nexus.company.com:8081/repository/gradle-plugins/
```
## SSL/TLS Configuration
### Using HTTPS (recommended)
```gradle
repositories {
    maven {
        url = "https://nexus.company.com/repository/maven-public/"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
        // allowInsecureProtocol = false (the default)
    }
}
```
### Trusting a self-signed certificate
**gradle.properties:**
```properties
systemProp.javax.net.ssl.trustStore=/path/to/truststore.jks
systemProp.javax.net.ssl.trustStorePassword=changeit
```
**Or as JVM options:**
```properties
org.gradle.jvmargs=-Djavax.net.ssl.trustStore=/path/to/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit
```
### Using HTTP (not recommended)
Discouraged for security reasons, but sometimes used on internal networks:
```gradle
repositories {
    maven {
        url = "http://nexus.company.com:8081/repository/maven-public/"
        allowInsecureProtocol = true // required!
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
    }
}
```
## Build Commands
### Refresh dependencies
```bash
# Windows
gradlew.bat clean build --refresh-dependencies
# Linux/Mac
./gradlew clean build --refresh-dependencies
```
### Debug the Nexus connection
```bash
gradlew.bat dependencies --debug --stacktrace
```
### Clear the caches
```cmd
# Clear the Gradle caches (Windows)
gradlew.bat clean --no-daemon
rmdir /s /q "%USERPROFILE%\.gradle\caches"
```
## Troubleshooting
### 1. Authentication failure
**Error:**
```
> Could not resolve all dependencies
> HTTP 401 Unauthorized
```
**Fix:**
- Check the Nexus username/password
- Check the Nexus user's permissions
- Check the location of the gradle.properties file
### 2. SSL certificate errors
**Error:**
```
> PKIX path building failed
> unable to find valid certification path
```
**Fix:**
**Option 1: add the certificate to the trust store**
```bash
keytool -import -alias nexus -keystore %JAVA_HOME%/lib/security/cacerts \
  -file nexus-cert.crt
```
**Option 2: configure gradle.properties**
```properties
systemProp.javax.net.ssl.trustStore=/path/to/truststore.jks
systemProp.javax.net.ssl.trustStorePassword=changeit
```
**Option 3: use HTTP (temporary workaround)**
```gradle
allowInsecureProtocol = true
```
### 3. Dependency download failures
**Error:**
```
> Could not resolve com.example:library:1.0
```
**Fix:**
1. Check that the library exists in Nexus
2. Check that the Nexus proxy was able to download it from the upstream
3. Clear the caches and retry:
```bash
gradlew.bat clean build --refresh-dependencies
```
### 4. Slow builds
**Fix:**
**Optimize gradle.properties:**
```properties
org.gradle.jvmargs=-Xmx2048m -XX:MaxMetaspaceSize=512m
org.gradle.parallel=true
org.gradle.caching=true
org.gradle.daemon=true
```
### 5. HTTP/HTTPS protocol errors
**Error:**
```
> Using insecure protocols with repositories is not allowed
```
**Fix:**
Gradle 7.0 and later require HTTP to be allowed explicitly:
```gradle
maven {
    url = "http://nexus.company.com:8081/repository/maven-public/"
    allowInsecureProtocol = true // required
}
```
## Security Checklist
- [ ] Use HTTPS (avoid HTTP where possible)
- [ ] Add gradle.properties to .gitignore
- [ ] Manage credentials via environment variables
- [ ] Grant Nexus users the minimum required permissions
- [ ] Rotate passwords regularly
- [ ] Validate SSL certificates
## Example File Layout
```
springbatch-test/
├── build.gradle              # Nexus repository configuration
├── settings.gradle           # Plugin repository configuration
├── gradle.properties         # Nexus credentials (excluded from Git)
├── gradle.properties.example # Template (committed to Git)
├── init.gradle.example       # Global configuration template
└── .gitignore                # Excludes gradle.properties
```
## References
- [Nexus Repository Manager Documentation](https://help.sonatype.com/repomanager3)
- [Gradle Repository Configuration](https://docs.gradle.org/current/userguide/declaring_repositories.html)
- [Gradle Build Cache](https://docs.gradle.org/current/userguide/build_cache.html)
## Support
If you run into Nexus problems:
1. Contact the in-house DevOps team
2. Ask the Nexus administrator to check your repository permissions
3. Ask the network team to check the firewall rules

---

QUICK_START_NEXUS.md
# Nexus Quick Start Guide
A quick guide to building the project against an internal Nexus in a closed-network environment.
## Step 1: Gather Nexus Details
Get the following from the DevOps team or your Nexus administrator:
```
Nexus URL: http://nexus.your-company.com:8081
Username: your-username
Password: your-password
```
## Step 2: Create gradle.properties
### Windows
```cmd
cd D:\workspace\springbatch-test
copy gradle.properties.example gradle.properties
notepad gradle.properties
```
### Linux/Mac
```bash
cd /workspace/springbatch-test
cp gradle.properties.example gradle.properties
vi gradle.properties
```
### Contents
```properties
nexusUrl=http://nexus.your-company.com:8081
nexusUsername=your-username
nexusPassword=your-password
```
## Step 3: Edit build.gradle
Update the repositories section of `build.gradle`:
### Before
```gradle
repositories {
    // In a closed network, use the Nexus block below and comment out mavenCentral()
    // Uncomment below and comment out mavenCentral()
    /*
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
        allowInsecureProtocol = false
    }
    */
    // With internet access
    mavenCentral()
}
```
### After
```gradle
repositories {
    // Closed network: use Nexus
    maven {
        url = "${nexusUrl}/repository/maven-public/"
        credentials {
            username = "${nexusUsername}"
            password = "${nexusPassword}"
        }
        allowInsecureProtocol = true // when using HTTP
    }
    // Internet-only repository: now commented out
    // mavenCentral()
}
```
## Step 4: Edit settings.gradle (Optional)
To download plugins from Nexus as well:
### Before
```gradle
/*
pluginManagement {
    repositories {
        maven {
            url = "${nexusUrl}/repository/gradle-plugins/"
            ...
        }
    }
}
*/
```
### After
```gradle
pluginManagement {
    repositories {
        maven {
            url = "${nexusUrl}/repository/gradle-plugins/"
            credentials {
                username = "${nexusUsername}"
                password = "${nexusPassword}"
            }
            allowInsecureProtocol = true // when using HTTP
        }
        maven {
            url = "${nexusUrl}/repository/maven-public/"
            credentials {
                username = "${nexusUsername}"
                password = "${nexusPassword}"
            }
            allowInsecureProtocol = true
        }
    }
}
```
## Step 5: Run a Build
```bash
# Windows
gradlew.bat clean build --refresh-dependencies
# Linux/Mac
./gradlew clean build --refresh-dependencies
```
## Step 6: Confirm the Build Succeeds
```
BUILD SUCCESSFUL in 15s
```
If you see the success message, you are done.
## Troubleshooting
### Problem 1: Authentication failure (401 Unauthorized)
**Symptom:**
```
> Could not resolve all dependencies
> HTTP 401 Unauthorized
```
**Fix:**
1. Check the username/password in gradle.properties
2. Test a login in the Nexus web UI: `http://nexus.company.com:8081`
### Problem 2: SSL certificate errors
**Symptom:**
```
> PKIX path building failed
```
**Fix:**
1. Try HTTP instead of HTTPS
2. Confirm `allowInsecureProtocol = true` is set
### Problem 3: Dependency not found
**Symptom:**
```
> Could not find org.springframework.boot:spring-boot-starter-batch:2.7.18
```
**Fix:**
1. Ask the Nexus administrator to confirm the library is available
2. Search in the Nexus web UI: Browse → maven-public
3. Ask for the proxy configuration to be checked
### Problem 4: HTTP protocol errors
**Symptom:**
```
> Using insecure protocols with repositories is not allowed
```
**Fix:**
Add `allowInsecureProtocol = true` to build.gradle:
```gradle
maven {
    url = "http://..."
    allowInsecureProtocol = true // add this line
}
```
## Checklist
- [ ] Confirm the Nexus URL, username, and password
- [ ] Create and fill in gradle.properties
- [ ] Uncomment the Nexus block in build.gradle
- [ ] Comment out mavenCentral() in build.gradle
- [ ] Set allowInsecureProtocol (when using HTTP)
- [ ] Edit settings.gradle (when plugins are needed)
- [ ] Run a successful build
## Further Help
For full details, see `NEXUS_SETUP.md`.
```bash
# Full guide
cat NEXUS_SETUP.md
# Or open it in an editor
notepad NEXUS_SETUP.md
```
## Support
If problems occur:
1. Check the troubleshooting section of NEXUS_SETUP.md
2. Contact the DevOps team or the Nexus administrator
3. Ask the network team to check the firewall rules

---

README.md
# Spring Batch Bulk Processing Project
A batch project for processing large data volumes with Spring Boot, Spring Batch, and Quartz.
## Tech Stack
- **Java**: OpenJDK 1.8
- **Framework**: Spring Boot 2.7.18
- **Batch**: Spring Batch
- **Scheduler**: Quartz (with clustering support)
- **Database**: MariaDB
- **ORM**: MyBatis
- **Build Tool**: Gradle
## Key Features
### 1. Bulk data processing
- Chunk-oriented processing (5,000 records at a time)
- Efficient reads with JdbcPagingItemReader
- Performance-optimized writes via MyBatis batch inserts
### 2. Processing flow
```
Read from file/DB → Process data → Call API → Save results
```
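The flow above can be sketched without any framework code. A minimal, illustrative Java sketch of chunk-oriented processing (class and method names are hypothetical stand-ins, not this project's actual classes; the real job uses JdbcPagingItemReader and a 5,000-record chunk):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of chunk-oriented processing: read a "page" of
// records, process each one, then write the whole chunk in one step.
// Hypothetical stand-in for the real reader/processor/writer beans.
public class ChunkFlowSketch {
    static final int CHUNK_SIZE = 2; // the real project uses 5000

    static List<String> readPage(List<String> source, int offset) {
        int end = Math.min(offset + CHUNK_SIZE, source.size());
        return new ArrayList<>(source.subList(offset, end));
    }

    static String process(String record) {
        return record.toUpperCase(); // stand-in for the API call + mapping
    }

    public static List<String> run(List<String> source) {
        List<String> written = new ArrayList<>();
        for (int offset = 0; offset < source.size(); offset += CHUNK_SIZE) {
            List<String> processed = new ArrayList<>();
            for (String record : readPage(source, offset)) {
                processed.add(process(record));
            }
            written.addAll(processed); // stand-in for the batch insert
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("a", "b", "c"))); // [A, B, C]
    }
}
```

The key point is that the transaction and the write happen once per chunk, not once per record.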
### 3. Failure handling
- Skip policy: tolerates up to 100 failed records
- Retry policy: up to 3 attempts per failure
- Failures are logged automatically
- Monitoring via a JobExecutionListener
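The skip/retry semantics can be shown in plain Java (an illustrative sketch only; Spring Batch implements this inside its fault-tolerant step, and the names below are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative skip/retry semantics: retry each failing record up to
// RETRY_LIMIT times, then count it as skipped; fail the whole run once
// the skip limit is exceeded. Not Spring Batch internals.
public class SkipRetrySketch {
    static final int RETRY_LIMIT = 3;
    static final int SKIP_LIMIT = 100;

    interface Processor {
        void process(String record) throws Exception;
    }

    public static int run(List<String> records, Processor processor) {
        int skipped = 0;
        for (String record : records) {
            boolean done = false;
            for (int attempt = 1; attempt <= RETRY_LIMIT && !done; attempt++) {
                try {
                    processor.process(record);
                    done = true;
                } catch (Exception e) {
                    // swallow and retry; a real listener would log this
                }
            }
            if (!done && ++skipped > SKIP_LIMIT) {
                throw new IllegalStateException("skip limit exceeded");
            }
        }
        return skipped; // number of records given up on
    }

    public static void main(String[] args) {
        int skipped = run(Arrays.asList("ok", "bad", "ok"),
                r -> { if (r.equals("bad")) throw new Exception("boom"); });
        System.out.println(skipped); // 1
    }
}
```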
### 4. Scheduling
- Uses the Quartz scheduler
- Flexible scheduling via cron expressions
- Default: runs daily at 02:00
### 5. Multi-server support
- Quartz clustering configuration
- DB-based job synchronization
- Duplicate-run prevention (@DisallowConcurrentExecution)
### 6. Transaction management
- Chunk-level transactions via Spring Batch
- Integrated with MyBatis transactions
## Project Structure
```
src/main/java/com/example/batch/
├── BatchApplication.java                  # Main class
├── config/
│   ├── BatchConfig.java                   # Spring Batch configuration
│   └── MyBatisConfig.java                 # MyBatis configuration
├── domain/
│   ├── Customer.java                      # Customer domain object
│   ├── CustomerProcessed.java             # Processed customer data
│   └── BatchLog.java                      # Batch log
├── job/
│   ├── CustomerBatchJobConfig.java        # Batch job configuration
│   └── CustomerJobExecutionListener.java  # Job execution listener
├── mapper/
│   ├── CustomerMapper.java                # Customer mapper
│   └── BatchLogMapper.java                # Log mapper
└── scheduler/
    ├── BatchScheduler.java                # Quartz scheduler configuration
    └── CustomerBatchQuartzJob.java        # Quartz job

src/main/resources/
├── application.yml                        # Application configuration
├── db/
│   └── schema.sql                         # DB schema
└── mapper/
    ├── CustomerMapper.xml                 # Customer queries
    └── BatchLogMapper.xml                 # Log queries
```
## Closed-Network Support (Nexus)
This project is set up to use an internal Nexus Repository Manager in closed-network environments.
### Quick start
To use it on a closed network, see:
- **QUICK_START_NEXUS.md**: quick Nexus setup guide
- **NEXUS_SETUP.md**: detailed Nexus setup guide
- **NEXUS_LOCAL_SETUP.md**: local Nexus test environment with Docker
### Nexus setup
1. Copy `gradle.properties.example` to `gradle.properties`
2. Fill in the Nexus details:
```properties
nexusUrl=http://nexus.your-company.com:8081
nexusUsername=your-username
nexusPassword=your-password
```
3. In `build.gradle`, uncomment the Nexus block and comment out mavenCentral()
4. Build: `gradlew.bat clean build`
See **NEXUS_SETUP.md** for details.
## Installation and Running
### 1. Database setup
Create the database and user in MariaDB:
```sql
CREATE DATABASE batch_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'batch_user'@'%' IDENTIFIED BY 'batch_password';
GRANT ALL PRIVILEGES ON batch_db.* TO 'batch_user'@'%';
FLUSH PRIVILEGES;
```
### 2. Schema creation
Run `src/main/resources/db/schema.sql` to create the required tables:
```bash
mysql -u batch_user -p batch_db < src/main/resources/db/schema.sql
```
Or from a MySQL client:
```sql
USE batch_db;
SOURCE D:/workspace/springbatch-test/src/main/resources/db/schema.sql;
```
### 3. Edit application.yml
Update the database connection details in `src/main/resources/application.yml`:
```yaml
spring:
  datasource:
    url: jdbc:mariadb://localhost:3306/batch_db
    username: batch_user
    password: batch_password
```
### 4. Build and run
```bash
# Gradle build
./gradlew clean build
# Run the application
./gradlew bootRun
# Or run the JAR
java -jar build/libs/springbatch-test-1.0.0.jar
```
On Windows:
```cmd
gradlew.bat clean build
gradlew.bat bootRun
```
## Running the Batch Job
### 1. Automatic execution via the scheduler
- Default: runs daily at 02:00
- Change the Quartz trigger in `BatchScheduler.java`
### 2. Manual execution (for testing)
You can add a REST API for manual runs:
```java
@RestController
@RequestMapping("/api/batch")
public class BatchController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    @Qualifier("customerProcessingJob")
    private Job customerProcessingJob;

    @PostMapping("/run")
    public String runBatch() throws Exception {
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("timestamp", System.currentTimeMillis())
                .toJobParameters();
        jobLauncher.run(customerProcessingJob, jobParameters);
        return "Batch job started";
    }
}
```
## Customization
### 1. Changing the chunk size
`CustomerBatchJobConfig.java`:
```java
private static final int CHUNK_SIZE = 5000; // change as needed
private static final int PAGE_SIZE = 5000;  // change as needed
```
### 2. Changing the schedule
`BatchScheduler.java`:
```java
// Cron expression examples:
// "0 0 2 * * ?"    - daily at 02:00
// "0 0 */6 * * ?"  - every 6 hours
// "0 0/30 * * * ?" - every 30 minutes
CronScheduleBuilder.cronSchedule("0 0 2 * * ?")
```
### 3. Changing the skip/retry policy
`CustomerBatchJobConfig.java`:
```java
.faultTolerant()
.skip(Exception.class)
.skipLimit(100)  // number of records allowed to be skipped
.retryLimit(3)   // number of retries
.retry(Exception.class)
```
### 4. Changing the API endpoint
The `callExternalApi()` method in `CustomerBatchJobConfig.java`:
```java
WebClient webClient = WebClient.builder()
        .baseUrl("https://your-api-endpoint.com") // change to the real API URL
        .build();
```
## Monitoring
### 1. Batch execution logs
```sql
-- Recent batch runs
SELECT * FROM TB_BATCH_LOG ORDER BY CREATED_AT DESC LIMIT 10;

-- Failed batches
SELECT * FROM TB_BATCH_LOG WHERE STATUS = 'FAILED';
```
### 2. Spring Batch metadata
```sql
-- Job executions
SELECT * FROM BATCH_JOB_EXECUTION ORDER BY CREATE_TIME DESC;

-- Step execution details
SELECT * FROM BATCH_STEP_EXECUTION ORDER BY START_TIME DESC;
```
### 3. Quartz scheduler state
```sql
-- Registered jobs
SELECT * FROM QRTZ_JOB_DETAILS;

-- Trigger state
SELECT * FROM QRTZ_TRIGGERS;

-- Clustering state
SELECT * FROM QRTZ_SCHEDULER_STATE;
```
## Performance Tuning
### 1. Bulk processing parameters
```yaml
# application.yml
batch:
  chunk-size: 5000         # records processed per chunk
  page-size: 5000          # records fetched per DB page
  max-thread-pool-size: 5  # parallel processing threads
```
### 2. DB connection pool
```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5
```
### 3. MyBatis batch inserts
Using `insertProcessedCustomerBatch` in `CustomerMapper.xml` improves write performance.
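The reason batch inserts help is fewer round trips: records are flushed in fixed-size groups instead of one INSERT per record. A framework-free sketch of the grouping (illustrative only; the actual flushing is handled by MyBatis):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative grouping for batch inserts: split the records into
// fixed-size batches so each batch becomes one multi-row INSERT (or
// one flushed JDBC batch) instead of N single-row statements.
public class BatchGrouping {
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 7 records in batches of 3 -> 3 round trips instead of 7
        System.out.println(partition(
                java.util.Arrays.asList(1, 2, 3, 4, 5, 6, 7), 3).size()); // 3
    }
}
```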
## Running on Multiple Servers
### Quartz clustering setup
1. Point every server at the same database
2. Enable clustering in `application.yml` (enabled by default)
3. Each server is assigned an instance ID automatically
4. Only one server runs the job; on failure, another server takes over
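Why only one server fires the job: each Quartz node must acquire a lock (a row lock in the shared QRTZ_* tables) before firing a trigger. The stand-alone sketch below imitates that hand-off with an AtomicBoolean standing in for the database lock (illustrative only, not Quartz internals):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative stand-in for Quartz's cluster lock: whichever node
// acquires the lock runs the job; any node that finds it taken skips.
public class ClusterLockSketch {
    private static final AtomicBoolean lock = new AtomicBoolean(false);

    /** Returns true only if this call actually ran the job. */
    public static boolean tryRunJob(Runnable job) {
        if (!lock.compareAndSet(false, true)) {
            return false; // another node holds the lock: skip this run
        }
        try {
            job.run();
            return true;
        } finally {
            lock.set(false); // release so the next trigger can fire
        }
    }

    public static void main(String[] args) {
        // While the first job is still running, a second attempt is rejected.
        boolean[] nested = new boolean[1];
        boolean first = tryRunJob(() -> nested[0] = tryRunJob(() -> {}));
        System.out.println(first + " " + nested[0]); // true false
    }
}
```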
## Troubleshooting
### 1. The batch does not run
- Check that the Quartz tables were created
- Check the scheduler state in the `QRTZ_SCHEDULER_STATE` table
- Check the logs for error messages
### 2. Duplicate executions occur
- Check that Quartz clustering is configured correctly
- Check that every server uses the same DB
- Check the `@DisallowConcurrentExecution` annotation
### 3. Database connection errors
- Check that MariaDB is running
- Check the firewall rules
- Check the connection details (URL, username, password)
## License
MIT License

---
# Spring Batch Project Setup Guide
## 1. Development Environment
### Required software
- OpenJDK 1.8 or later
- MariaDB 10.x or later
- Gradle 6.x or later (or use the wrapper)
## 2. Database Setup
### 2.1 Install and start MariaDB
Windows:
```cmd
# Download and install MariaDB
# https://mariadb.org/download/

# Start the MariaDB service
net start MySQL
```
Linux:
```bash
sudo systemctl start mariadb
```
### 2.2 Create the database and user
```sql
-- Connect first (from a shell: mysql -u root -p)

-- Create the database
CREATE DATABASE batch_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- Create the user and grant privileges
CREATE USER 'batch_user'@'%' IDENTIFIED BY 'batch_password';
GRANT ALL PRIVILEGES ON batch_db.* TO 'batch_user'@'%';
FLUSH PRIVILEGES;

-- Verify
SHOW DATABASES;
SELECT user, host FROM mysql.user WHERE user = 'batch_user';

-- Exit
EXIT;
```
### 2.3 Create the schema
#### Option 1: from the command line (recommended)
Windows:
```cmd
cd D:\workspace\springbatch-test
mysql -u batch_user -p batch_db < src\main\resources\db\schema.sql
```
Linux:
```bash
cd /path/to/springbatch-test
mysql -u batch_user -p batch_db < src/main/resources/db/schema.sql
```
#### Option 2: from a MySQL client
```sql
-- Connect first: mysql -u batch_user -p
USE batch_db;
SOURCE D:/workspace/springbatch-test/src/main/resources/db/schema.sql;
```
### 2.4 Verify the tables
```sql
USE batch_db;
SHOW TABLES;

-- Expected output:
-- BATCH_JOB_EXECUTION
-- BATCH_JOB_EXECUTION_CONTEXT
-- BATCH_JOB_EXECUTION_PARAMS
-- BATCH_JOB_EXECUTION_SEQ
-- BATCH_JOB_INSTANCE
-- BATCH_JOB_SEQ
-- BATCH_STEP_EXECUTION
-- BATCH_STEP_EXECUTION_CONTEXT
-- BATCH_STEP_EXECUTION_SEQ
-- QRTZ_* (Quartz tables)
-- TB_CUSTOMER
-- TB_CUSTOMER_PROCESSED
-- TB_BATCH_LOG
```
## 3. Application Configuration
### 3.1 Edit application.yml
Check and update the database connection details in `src/main/resources/application.yml`:
```yaml
spring:
  datasource:
    url: jdbc:mariadb://localhost:3306/batch_db
    username: batch_user
    password: batch_password
```
### 3.2 Schedule configuration (optional)
To shorten the schedule for testing, edit `src/main/java/com/example/batch/scheduler/BatchScheduler.java`:
```java
// Default: daily at 02:00
CronScheduleBuilder.cronSchedule("0 0 2 * * ?")

// For testing: every 30 seconds
CronScheduleBuilder.cronSchedule("0/30 * * * * ?")
```
## 4. Build and Run
### 4.1 Generate the Gradle wrapper (first time only)
```bash
gradle wrapper
```
### 4.2 Build
Windows:
```cmd
gradlew.bat clean build
```
Linux/Mac:
```bash
./gradlew clean build
```
### 4.3 Run
Windows:
```cmd
gradlew.bat bootRun
```
Linux/Mac:
```bash
./gradlew bootRun
```
Or run the JAR directly:
```bash
java -jar build/libs/springbatch-test-1.0.0.jar
```
## 5. Testing the Batch
### 5.1 Manual execution (REST API)
Trigger the batch job manually:
```bash
curl -X POST http://localhost:8080/api/batch/customer/run
```
Or from a browser/Postman:
```
POST http://localhost:8080/api/batch/customer/run
```
### 5.2 Status check
Health check:
```bash
curl http://localhost:8080/api/batch/health
```
### 5.3 Batch execution logs
Check in the database:
```sql
-- Batch execution log
SELECT * FROM TB_BATCH_LOG ORDER BY CREATED_AT DESC;

-- Processed customer data
SELECT * FROM TB_CUSTOMER_PROCESSED ORDER BY PROCESSED_AT DESC;

-- Spring Batch metadata
SELECT
    je.JOB_EXECUTION_ID,
    ji.JOB_NAME,
    je.STATUS,
    je.START_TIME,
    je.END_TIME
FROM BATCH_JOB_EXECUTION je
JOIN BATCH_JOB_INSTANCE ji ON je.JOB_INSTANCE_ID = ji.JOB_INSTANCE_ID
ORDER BY je.CREATE_TIME DESC;
```
## 6. Multi-Server Testing
### 6.1 Run the same application on two ports
Terminal 1:
```bash
java -jar build/libs/springbatch-test-1.0.0.jar --server.port=8080
```
Terminal 2:
```bash
java -jar build/libs/springbatch-test-1.0.0.jar --server.port=8081
```
### 6.2 Verify clustering
Check the Quartz scheduler state:
```sql
SELECT * FROM QRTZ_SCHEDULER_STATE;
```
Two instances should appear, and the batch should run on only one server.
## 7. Bulk Data Testing
### 7.1 Generate test data
A script to insert bulk test data:
```sql
-- Generate 1,000,000 test rows (takes roughly 1-2 minutes)
DELIMITER $$

DROP PROCEDURE IF EXISTS generate_customers$$

CREATE PROCEDURE generate_customers(IN num_rows INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE batch_size INT DEFAULT 10000;

    WHILE i < num_rows DO
        INSERT INTO TB_CUSTOMER (CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS)
        SELECT
            CONCAT('Customer ', i + seq) as CUSTOMER_NAME,
            CONCAT('customer', i + seq, '@example.com') as EMAIL,
            CONCAT('010-', LPAD(FLOOR(RAND() * 10000), 4, '0'), '-', LPAD(FLOOR(RAND() * 10000), 4, '0')) as PHONE,
            CONCAT('Address ', FLOOR(RAND() * 1000)) as ADDRESS,
            'ACTIVE' as STATUS
        FROM (
            SELECT @row := @row + 1 as seq
            FROM (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
                  UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t1,
                 (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
                  UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t2,
                 (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
                  UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t3,
                 (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
                  UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t4,
                 (SELECT @row := 0) r
            LIMIT batch_size
        ) seq_table
        WHERE i + seq <= num_rows;

        SET i = i + batch_size;

        -- Print progress
        SELECT CONCAT('Inserted ', i, ' / ', num_rows, ' records') AS Progress;
    END WHILE;
END$$

DELIMITER ;

-- Generate 1,000,000 rows (for testing)
CALL generate_customers(1000000);

-- Verify
SELECT COUNT(*) FROM TB_CUSTOMER;
```
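If a stored procedure is inconvenient, equivalent test rows can be generated in Java and bulk-loaded afterwards (for example with `LOAD DATA INFILE` or JDBC batch inserts). An illustrative sketch; the CSV layout mirroring TB_CUSTOMER's columns is an assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Generates CSV rows matching TB_CUSTOMER's columns:
// CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS.
// Illustrative alternative to the generate_customers procedure above.
public class CustomerDataGenerator {
    public static List<String> generate(int numRows, long seed) {
        Random rnd = new Random(seed); // fixed seed for reproducible data
        List<String> rows = new ArrayList<>(numRows);
        for (int i = 1; i <= numRows; i++) {
            rows.add(String.format(
                    "Customer %d,customer%d@example.com,010-%04d-%04d,Address %d,ACTIVE",
                    i, i, rnd.nextInt(10000), rnd.nextInt(10000), rnd.nextInt(1000)));
        }
        return rows;
    }

    public static void main(String[] args) {
        for (String row : generate(3, 42L)) {
            System.out.println(row);
        }
    }
}
```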
### 7.2 Measure performance
After a batch run, check the throughput:
```sql
SELECT
    JOB_NAME,
    STATUS,
    START_TIME,
    END_TIME,
    TIMESTAMPDIFF(SECOND, START_TIME, END_TIME) as DURATION_SECONDS,
    TOTAL_COUNT,
    SUCCESS_COUNT,
    FAIL_COUNT,
    ROUND(TOTAL_COUNT / TIMESTAMPDIFF(SECOND, START_TIME, END_TIME), 2) as RECORDS_PER_SECOND
FROM TB_BATCH_LOG
ORDER BY START_TIME DESC
LIMIT 10;
```
## 8. Troubleshooting
### 8.1 Database connection errors
```
Error: Communications link failure
```
Fix:
1. Check that the MariaDB service is running
2. Check the port (default: 3306)
3. Check the firewall rules
4. Check the connection details in application.yml
### 8.2 Quartz table errors
```
Error: Table 'batch_db.QRTZ_LOCKS' doesn't exist
```
Fix:
Re-run schema.sql to create the Quartz tables.
### 8.3 Batch table errors
```
Error: Table 'batch_db.BATCH_JOB_INSTANCE' doesn't exist
```
Fix:
Re-run schema.sql to create the Spring Batch metadata tables.
### 8.4 Out-of-memory errors
```
java.lang.OutOfMemoryError: Java heap space
```
Fix:
Increase the JVM heap:
```bash
java -Xmx2g -jar build/libs/springbatch-test-1.0.0.jar
```
Or reduce the chunk size:
```java
private static final int CHUNK_SIZE = 1000; // 5000 -> 1000
```
## 9. Production Checklist
- [ ] Manage database connection details via environment variables
- [ ] Set the log level to INFO
- [ ] Adjust the schedule to production requirements
- [ ] Point the API endpoint at the real service
- [ ] Wire up error notifications (email, Slack, etc.)
- [ ] Wire up monitoring tools (Prometheus, Grafana, etc.)
- [ ] Tune the chunk size and overall performance
- [ ] Configure batch result notifications
- [ ] Establish a database backup policy
- [ ] Set up server resource monitoring
## 10. Next Steps
1. **File-based batches**: read CSV/Excel files
2. **Multi-threaded processing**: improve throughput with parallelism
3. **Partitioning**: split bulk data for processing
4. **Dynamic job parameters**: set parameters from a UI
5. **Batch monitoring dashboard**: add a web UI
6. **Failure reprocessing**: re-run only the failed records
7. **Notifications**: alert on batch completion/failure
## References
- [Spring Batch Documentation](https://docs.spring.io/spring-batch/docs/current/reference/html/)
- [Quartz Scheduler Documentation](http://www.quartz-scheduler.org/documentation/)
- [MyBatis Documentation](https://mybatis.org/mybatis-3/)

---

build.gradle
plugins {
id 'java'
id 'org.springframework.boot' version '2.7.18'
id 'io.spring.dependency-management' version '1.0.15.RELEASE'
}
group = 'com.example'
version = '1.0.0'
sourceCompatibility = '1.8'
configurations {
compileOnly {
extendsFrom annotationProcessor
}
}
repositories {
// Nexus mavenCentral()
// Use Nexus repository in closed network environment
// Uncomment below and comment out mavenCentral()
/*
maven {
url = "${nexusUrl}/repository/maven-public/"
credentials {
username = "${nexusUsername}"
password = "${nexusPassword}"
}
// Set to true only when the Nexus URL uses plain HTTP (HTTPS recommended)
allowInsecureProtocol = false
}
*/
// Default: Maven Central (requires internet access)
mavenCentral()
}
dependencies {
// Spring Boot Starter
implementation 'org.springframework.boot:spring-boot-starter'
implementation 'org.springframework.boot:spring-boot-starter-web'
// Spring Batch
implementation 'org.springframework.boot:spring-boot-starter-batch'
// Quartz Scheduler
implementation 'org.springframework.boot:spring-boot-starter-quartz'
// MyBatis
implementation 'org.mybatis.spring.boot:mybatis-spring-boot-starter:2.3.2'
// MariaDB
implementation 'org.mariadb.jdbc:mariadb-java-client:2.7.9'
// Database Connection Pool
implementation 'com.zaxxer:HikariCP'
// Lombok
compileOnly 'org.projectlombok:lombok'
annotationProcessor 'org.projectlombok:lombok'
// HTTP Client for API calls
implementation 'org.springframework.boot:spring-boot-starter-webflux'
// Test
testImplementation 'org.springframework.boot:spring-boot-starter-test'
testImplementation 'org.springframework.batch:spring-batch-test'
}
test {
useJUnitPlatform()
}

@ -0,0 +1,74 @@
version: '3.8'
# =====================================================
# Nexus Repository Manager - Docker Compose
# =====================================================
# 로컬 테스트용 Nexus 환경 구성
# For local Nexus testing environment
#
# 사용법 / Usage:
# docker-compose -f docker-compose-nexus.yml up -d
# docker-compose -f docker-compose-nexus.yml down
services:
nexus:
image: sonatype/nexus3:latest
container_name: nexus
restart: unless-stopped
ports:
- "8081:8081" # Nexus Web UI
- "8082:8082" # Docker Registry (optional)
volumes:
- nexus-data:/nexus-data
environment:
# JVM memory settings (adjust as needed)
- INSTALL4J_ADD_VM_PARAMS=-Xms1g -Xmx1g -XX:MaxDirectMemorySize=2g
networks:
- batch-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8081/"]
interval: 30s
timeout: 10s
retries: 5
start_period: 120s
# MariaDB (for the batch project)
mariadb:
image: mariadb:10.11
container_name: batch-mariadb
restart: unless-stopped
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: root_password
MYSQL_DATABASE: batch_db
MYSQL_USER: batch_user
MYSQL_PASSWORD: batch_password
volumes:
- mariadb-data:/var/lib/mysql
- ./src/main/resources/db/schema.sql:/docker-entrypoint-initdb.d/schema.sql
networks:
- batch-network
command:
- --character-set-server=utf8mb4
- --collation-server=utf8mb4_unicode_ci
volumes:
nexus-data:
driver: local
mariadb-data:
driver: local
networks:
batch-network:
driver: bridge

@ -0,0 +1,53 @@
# =====================================================
# Gradle Properties Template
# =====================================================
# 이 파일을 'gradle.properties'로 복사하여 사용하세요
# Copy this file to 'gradle.properties' and configure
# =====================================================
# Nexus Repository Configuration (closed network)
# =====================================================
# Nexus server URL (including the protocol)
# Example: http://nexus.company.com:8081 or https://nexus.company.com
nexusUrl=http://nexus.your-company.com:8081
# Nexus user credentials
nexusUsername=your-nexus-username
nexusPassword=your-nexus-password
# =====================================================
# Maven Repository URLs (optional)
# NOTE: gradle.properties does not interpolate ${...}; compose these URLs in the build script instead
# =====================================================
# Maven Public Repository (proxy + hosted)
nexusMavenPublic=${nexusUrl}/repository/maven-public/
# Maven Releases Repository
nexusMavenReleases=${nexusUrl}/repository/maven-releases/
# Maven Snapshots Repository
nexusMavenSnapshots=${nexusUrl}/repository/maven-snapshots/
# =====================================================
# Build Configuration
# =====================================================
# Gradle daemon memory settings
org.gradle.jvmargs=-Xmx2048m -XX:MaxMetaspaceSize=512m
# Enable parallel builds
org.gradle.parallel=true
# Enable the build cache
org.gradle.caching=true
# Use the Gradle daemon
org.gradle.daemon=true
# =====================================================
# Security (optional)
# =====================================================
# Allow the HTTP protocol (not recommended for security reasons)
# allowInsecureProtocol=true
# Trust store for self-signed certificates
# systemProp.javax.net.ssl.trustStore=/path/to/truststore.jks
# systemProp.javax.net.ssl.trustStorePassword=changeit

251
gradlew vendored

@ -0,0 +1,251 @@
#!/bin/sh
#
# Copyright © 2015-2021 the original authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
#
##############################################################################
#
# Gradle start up script for POSIX generated by Gradle.
#
# Important for running:
#
# (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is
# noncompliant, but you have some other compliant shell such as ksh or
# bash, then to run this script, type that shell name before the whole
# command line, like:
#
# ksh Gradle
#
# Busybox and similar reduced shells will NOT work, because this script
# requires all of these POSIX shell features:
# * functions;
# * expansions «$var», «${var}», «${var:-default}», «${var+SET}»,
# «${var#prefix}», «${var%suffix}», and «$( cmd )»;
# * compound commands having a testable exit status, especially «case»;
# * various built-in commands including «command», «set», and «ulimit».
#
# Important for patching:
#
# (2) This script targets any POSIX shell, so it avoids extensions provided
# by Bash, Ksh, etc; in particular arrays are avoided.
#
# The "traditional" practice of packing multiple parameters into a
# space-separated string is a well documented source of bugs and security
# problems, so this is (mostly) avoided, by progressively accumulating
# options in "$@", and eventually passing that to Java.
#
# Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS,
# and GRADLE_OPTS) rely on word-splitting, this is performed explicitly;
# see the in-line comments for details.
#
# There are tweaks for specific operating systems such as AIX, CygWin,
# Darwin, MinGW, and NonStop.
#
# (3) This script is generated from the Groovy template
# https://github.com/gradle/gradle/blob/HEAD/platforms/jvm/plugins-application/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt
# within the Gradle project.
#
# You can find Gradle at https://github.com/gradle/gradle/.
#
##############################################################################
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
app_path=$0
# Need this for daisy-chained symlinks.
while
APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path
[ -h "$app_path" ]
do
ls=$( ls -ld "$app_path" )
link=${ls#*' -> '}
case $link in #(
/*) app_path=$link ;; #(
*) app_path=$APP_HOME$link ;;
esac
done
# This is normally unused
# shellcheck disable=SC2034
APP_BASE_NAME=${0##*/}
# Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036)
APP_HOME=$( cd -P "${APP_HOME:-./}" > /dev/null && printf '%s\n' "$PWD" ) || exit
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD=maximum
warn () {
echo "$*"
} >&2
die () {
echo
echo "$*"
echo
exit 1
} >&2
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "$( uname )" in #(
CYGWIN* ) cygwin=true ;; #(
Darwin* ) darwin=true ;; #(
MSYS* | MINGW* ) msys=true ;; #(
NONSTOP* ) nonstop=true ;;
esac
CLASSPATH="\\\"\\\""
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD=$JAVA_HOME/jre/sh/java
else
JAVACMD=$JAVA_HOME/bin/java
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD=java
if ! command -v java >/dev/null 2>&1
then
die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
fi
# Increase the maximum file descriptors if we can.
if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then
case $MAX_FD in #(
max*)
# In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked.
# shellcheck disable=SC2039,SC3045
MAX_FD=$( ulimit -H -n ) ||
warn "Could not query maximum file descriptor limit"
esac
case $MAX_FD in #(
'' | soft) :;; #(
*)
# In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked.
# shellcheck disable=SC2039,SC3045
ulimit -n "$MAX_FD" ||
warn "Could not set maximum file descriptor limit to $MAX_FD"
esac
fi
# Collect all arguments for the java command, stacking in reverse order:
# * args from the command line
# * the main class name
# * -classpath
# * -D...appname settings
# * --module-path (only if needed)
# * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables.
# For Cygwin or MSYS, switch paths to Windows format before running java
if "$cygwin" || "$msys" ; then
APP_HOME=$( cygpath --path --mixed "$APP_HOME" )
CLASSPATH=$( cygpath --path --mixed "$CLASSPATH" )
JAVACMD=$( cygpath --unix "$JAVACMD" )
# Now convert the arguments - kludge to limit ourselves to /bin/sh
for arg do
if
case $arg in #(
-*) false ;; # don't mess with options #(
/?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath
[ -e "$t" ] ;; #(
*) false ;;
esac
then
arg=$( cygpath --path --ignore --mixed "$arg" )
fi
# Roll the args list around exactly as many times as the number of
# args, so each arg winds up back in the position where it started, but
# possibly modified.
#
# NB: a `for` loop captures its iteration list before it begins, so
# changing the positional parameters here affects neither the number of
# iterations, nor the values presented in `arg`.
shift # remove old arg
set -- "$@" "$arg" # push replacement arg
done
fi
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Collect all arguments for the java command:
# * DEFAULT_JVM_OPTS, JAVA_OPTS, and optsEnvironmentVar are not allowed to contain shell fragments,
# and any embedded shellness will be escaped.
# * For example: A user cannot expect ${Hostname} to be expanded, as it is an environment variable and will be
# treated as '${Hostname}' itself on the command line.
set -- \
"-Dorg.gradle.appname=$APP_BASE_NAME" \
-classpath "$CLASSPATH" \
-jar "$APP_HOME/gradle/wrapper/gradle-wrapper.jar" \
"$@"
# Stop when "xargs" is not available.
if ! command -v xargs >/dev/null 2>&1
then
die "xargs is not available"
fi
# Use "xargs" to parse quoted args.
#
# With -n1 it outputs one arg per line, with the quotes and backslashes removed.
#
# In Bash we could simply go:
#
# readarray ARGS < <( xargs -n1 <<<"$var" ) &&
# set -- "${ARGS[@]}" "$@"
#
# but POSIX shell has neither arrays nor command substitution, so instead we
# post-process each arg (as a line of input to sed) to backslash-escape any
# character that might be a shell metacharacter, then use eval to reverse
# that process (while maintaining the separation between arguments), and wrap
# the whole thing up as a single "set" statement.
#
# This will of course break if any of these variables contains a newline or
# an unmatched quote.
#
eval "set -- $(
printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" |
xargs -n1 |
sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' |
tr '\n' ' '
)" '"$@"'
exec "$JAVACMD" "$@"

@ -0,0 +1,107 @@
/**
* =====================================================
* Gradle Init Script for Nexus Repository
* =====================================================
*
* 이 파일은 모든 Gradle 프로젝트에 Nexus 설정을 적용합니다.
* This file applies Nexus configuration to all Gradle projects.
*
* 사용 방법 / Usage:
* 1. Copy this file to ~/.gradle/init.gradle (or %USERPROFILE%\.gradle\init.gradle on Windows)
* 2. Replace nexusUrl, nexusUsername, and nexusPassword with actual values
*
* Or apply it per project:
* gradlew build --init-script init.gradle
*/
allprojects {
repositories {
// Remove existing repositories (optional)
all { ArtifactRepository repo ->
if (repo instanceof MavenArtifactRepository) {
def url = repo.url.toString()
// Remove external repositories such as Maven Central, Google, and JCenter
if (url.contains('maven.org') ||
url.contains('jcenter') ||
url.contains('google.com')) {
remove repo
}
}
}
// Nexus repository configuration
maven {
name 'NexusMavenPublic'
url 'http://nexus.your-company.com:8081/repository/maven-public/'
credentials {
username 'your-nexus-username'
password 'your-nexus-password'
}
// Required when using HTTP (HTTPS recommended)
allowInsecureProtocol = true
}
// Nexus repository for the Gradle Plugin Portal (optional)
maven {
name 'NexusGradlePlugins'
url 'http://nexus.your-company.com:8081/repository/gradle-plugins/'
credentials {
username 'your-nexus-username'
password 'your-nexus-password'
}
allowInsecureProtocol = true
}
}
// Plugin Resolution Strategy
buildscript {
repositories {
maven {
name 'NexusMavenPublic'
url 'http://nexus.your-company.com:8081/repository/maven-public/'
credentials {
username 'your-nexus-username'
password 'your-nexus-password'
}
allowInsecureProtocol = true
}
}
}
}
// Settings for Plugin Management
settingsEvaluated { settings ->
settings.pluginManagement {
repositories {
maven {
name 'NexusGradlePlugins'
url 'http://nexus.your-company.com:8081/repository/gradle-plugins/'
credentials {
username 'your-nexus-username'
password 'your-nexus-password'
}
allowInsecureProtocol = true
}
maven {
name 'NexusMavenPublic'
url 'http://nexus.your-company.com:8081/repository/maven-public/'
credentials {
username 'your-nexus-username'
password 'your-nexus-password'
}
allowInsecureProtocol = true
}
}
}
}

@ -0,0 +1,29 @@
rootProject.name = 'springbatch-test'
// =====================================================
// Plugin Management for Nexus (closed network, optional)
// =====================================================
// Route Gradle plugin resolution through Nexus
// Uncomment the section below when using Nexus in a closed network
/*
pluginManagement {
repositories {
maven {
url = "${nexusUrl}/repository/gradle-plugins/"
credentials {
username = "${nexusUsername}"
password = "${nexusPassword}"
}
allowInsecureProtocol = false // set to true only for plain-HTTP URLs
}
maven {
url = "${nexusUrl}/repository/maven-public/"
credentials {
username = "${nexusUsername}"
password = "${nexusPassword}"
}
allowInsecureProtocol = false
}
}
}
*/

@ -0,0 +1,12 @@
package com.example.batch;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class BatchApplication {
public static void main(String[] args) {
SpringApplication.run(BatchApplication.class, args);
}
}

@ -0,0 +1,40 @@
package com.example.batch.config;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;
@Configuration
@EnableBatchProcessing
public class BatchConfig {
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Autowired
public StepBuilderFactory stepBuilderFactory;
@Autowired
private PlatformTransactionManager transactionManager;
/**
* JobLauncher for async execution
* Prevents blocking when job is triggered by scheduler
*/
@Bean
public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
jobLauncher.setJobRepository(jobRepository);
jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
jobLauncher.afterPropertiesSet();
return jobLauncher;
}
}

@ -0,0 +1,36 @@
package com.example.batch.config;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import javax.sql.DataSource;
@Configuration
@MapperScan(basePackages = "com.example.batch.mapper")
public class MyBatisConfig {
@Bean
public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
sessionFactory.setDataSource(dataSource);
sessionFactory.setMapperLocations(
new PathMatchingResourcePatternResolver().getResources("classpath:mapper/**/*.xml")
);
sessionFactory.setTypeAliasesPackage("com.example.batch.domain");
org.apache.ibatis.session.Configuration configuration = new org.apache.ibatis.session.Configuration();
configuration.setMapUnderscoreToCamelCase(true);
configuration.setCacheEnabled(false);
configuration.setLazyLoadingEnabled(false);
configuration.setDefaultFetchSize(1000);
configuration.setDefaultStatementTimeout(30);
sessionFactory.setConfiguration(configuration);
return sessionFactory.getObject();
}
}

@ -0,0 +1,84 @@
package com.example.batch.controller;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;
/**
* Batch Job Manual Execution Controller
* For testing and manual triggering
*/
@Slf4j
@RestController
@RequestMapping("/api/batch")
@RequiredArgsConstructor
public class BatchController {
// NOTE: for @Qualifier to reach the Lombok-generated constructor parameters,
// lombok.config must contain:
// lombok.copyableAnnotations += org.springframework.beans.factory.annotation.Qualifier
@Qualifier("asyncJobLauncher")
private final JobLauncher jobLauncher;
@Qualifier("customerProcessingJob")
private final Job customerProcessingJob;
/**
* Manually trigger customer processing batch job
*
* Usage: POST http://localhost:8080/api/batch/customer/run
*/
@PostMapping("/customer/run")
public ResponseEntity<Map<String, Object>> runCustomerBatch() {
Map<String, Object> response = new HashMap<>();
try {
log.info("Manual batch execution requested");
// Create unique job parameters to allow re-execution
JobParameters jobParameters = new JobParametersBuilder()
.addString("requestTime", LocalDateTime.now().toString())
.addLong("timestamp", System.currentTimeMillis())
.addString("trigger", "MANUAL")
.toJobParameters();
// Launch batch job asynchronously
jobLauncher.run(customerProcessingJob, jobParameters);
response.put("status", "SUCCESS");
response.put("message", "Batch job started successfully");
response.put("timestamp", System.currentTimeMillis());
log.info("Batch job triggered successfully");
return ResponseEntity.ok(response);
} catch (Exception e) {
log.error("Failed to execute batch job", e);
response.put("status", "ERROR");
response.put("message", "Failed to start batch job: " + e.getMessage());
response.put("timestamp", System.currentTimeMillis());
return ResponseEntity.internalServerError().body(response);
}
}
/**
* Health check endpoint
*/
@GetMapping("/health")
public ResponseEntity<Map<String, String>> health() {
Map<String, String> response = new HashMap<>();
response.put("status", "UP");
response.put("timestamp", LocalDateTime.now().toString());
return ResponseEntity.ok(response);
}
}

@ -0,0 +1,26 @@
package com.example.batch.domain;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class BatchLog {
private Long logId;
private String jobName;
private Long jobExecutionId;
private String status;
private LocalDateTime startTime;
private LocalDateTime endTime;
private Long totalCount;
private Long successCount;
private Long failCount;
private String errorMessage;
private LocalDateTime createdAt;
}

@ -0,0 +1,23 @@
package com.example.batch.domain;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class Customer {
private Long customerId;
private String customerName;
private String email;
private String phone;
private String address;
private String status;
private LocalDateTime createdAt;
private LocalDateTime updatedAt;
}

@ -0,0 +1,24 @@
package com.example.batch.domain;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.time.LocalDateTime;
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class CustomerProcessed {
private Long processId;
private Long customerId;
private String customerName;
private String email;
private String phone;
private String processedData;
private String apiCallStatus;
private String apiResponse;
private LocalDateTime processedAt;
}

@ -0,0 +1,210 @@
package com.example.batch.job;
import com.example.batch.domain.Customer;
import com.example.batch.domain.CustomerProcessed;
import com.example.batch.mapper.CustomerMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.JobScope;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JdbcPagingItemReader;
import org.springframework.batch.item.database.Order;
import org.springframework.batch.item.database.support.MySqlPagingQueryProvider;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import javax.sql.DataSource;
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@Slf4j
@Configuration
@RequiredArgsConstructor
public class CustomerBatchJobConfig {
private final JobBuilderFactory jobBuilderFactory;
private final StepBuilderFactory stepBuilderFactory;
private final DataSource dataSource;
private final CustomerMapper customerMapper;
private final CustomerJobExecutionListener customerJobExecutionListener;
private static final int CHUNK_SIZE = 5000;
private static final int PAGE_SIZE = 5000;
/**
* Customer Processing Batch Job
* Flow: Read customers from DB -> Process data -> Call API -> Save results
*/
@Bean
public Job customerProcessingJob() {
return jobBuilderFactory.get("customerProcessingJob")
.listener(customerJobExecutionListener)
.start(customerProcessingStep())
.build();
}
@Bean
@JobScope
public Step customerProcessingStep() {
return stepBuilderFactory.get("customerProcessingStep")
.<Customer, CustomerProcessed>chunk(CHUNK_SIZE)
.reader(customerReader())
.processor(customerProcessor())
.writer(customerWriter())
.faultTolerant()
.skip(Exception.class)
.skipLimit(100) // Allow up to 100 errors
.retryLimit(3) // Retry failed items up to 3 times
.retry(Exception.class)
.build();
}
/**
* ItemReader: Read customers from database with pagination
* Uses JdbcPagingItemReader for efficient large data processing
*/
@Bean
@StepScope
public JdbcPagingItemReader<Customer> customerReader() {
JdbcPagingItemReader<Customer> reader = new JdbcPagingItemReader<>();
reader.setDataSource(dataSource);
reader.setPageSize(PAGE_SIZE);
reader.setRowMapper(new BeanPropertyRowMapper<>(Customer.class));
MySqlPagingQueryProvider queryProvider = new MySqlPagingQueryProvider();
queryProvider.setSelectClause("CUSTOMER_ID, CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS, CREATED_AT, UPDATED_AT");
queryProvider.setFromClause("FROM TB_CUSTOMER");
queryProvider.setWhereClause("WHERE STATUS = 'ACTIVE'");
Map<String, Order> sortKeys = new HashMap<>();
sortKeys.put("CUSTOMER_ID", Order.ASCENDING);
queryProvider.setSortKeys(sortKeys);
reader.setQueryProvider(queryProvider);
return reader;
}
/**
* ItemProcessor: Process customer data and prepare for API call
* This is where business logic is applied
*/
@Bean
@StepScope
public ItemProcessor<Customer, CustomerProcessed> customerProcessor() {
return customer -> {
try {
log.debug("Processing customer: {}", customer.getCustomerId());
// Business logic: Process customer data
String processedData = processCustomerData(customer);
// Call external API
String apiResponse = callExternalApi(customer);
// Build processed result
return CustomerProcessed.builder()
.customerId(customer.getCustomerId())
.customerName(customer.getCustomerName())
.email(customer.getEmail())
.phone(customer.getPhone())
.processedData(processedData)
.apiCallStatus("SUCCESS")
.apiResponse(apiResponse)
.build();
} catch (Exception e) {
log.error("Failed to process customer: {}, error: {}", customer.getCustomerId(), e.getMessage());
// Return failed record for tracking
return CustomerProcessed.builder()
.customerId(customer.getCustomerId())
.customerName(customer.getCustomerName())
.email(customer.getEmail())
.phone(customer.getPhone())
.apiCallStatus("FAILED")
.apiResponse("Error: " + e.getMessage())
.build();
}
};
}
/**
* ItemWriter: Save processed data to database
* Uses MyBatis batch insert for better performance
*/
@Bean
@StepScope
public ItemWriter<CustomerProcessed> customerWriter() {
return items -> {
try {
log.info("Writing {} processed customers", items.size());
// Batch insert for performance
customerMapper.insertProcessedCustomerBatch((List<CustomerProcessed>) items);
log.info("Successfully wrote {} customers", items.size());
} catch (Exception e) {
log.error("Failed to write customers", e);
throw e;
}
};
}
/**
* Business logic: Process customer data
*/
private String processCustomerData(Customer customer) {
// Example: Transform and enrich customer data
return String.format("Processed: %s (%s)", customer.getCustomerName(), customer.getEmail());
}
/**
* Call external API with customer data
* Using WebClient for non-blocking HTTP calls
*/
private String callExternalApi(Customer customer) {
try {
WebClient webClient = WebClient.builder()
.baseUrl("https://jsonplaceholder.typicode.com")
.build();
// Example API call (using public test API)
// Create request body (Java 8 compatible)
Map<String, Object> requestBody = new HashMap<>();
requestBody.put("title", customer.getCustomerName());
requestBody.put("body", customer.getEmail());
requestBody.put("userId", 1);
String response = webClient.post()
.uri("/posts")
.bodyValue(requestBody)
.retrieve()
.bodyToMono(String.class)
.timeout(Duration.ofSeconds(5))
.onErrorResume(e -> {
log.error("API call failed for customer {}: {}", customer.getCustomerId(), e.getMessage());
return Mono.just("API_ERROR");
})
.block();
return response != null ? response : "NO_RESPONSE";
} catch (Exception e) {
log.error("Failed to call API for customer {}: {}", customer.getCustomerId(), e.getMessage());
throw new RuntimeException("API call failed", e);
}
}
}

@ -0,0 +1,109 @@
package com.example.batch.job;
import com.example.batch.domain.BatchLog;
import com.example.batch.mapper.BatchLogMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
/**
* Job Execution Listener for logging and monitoring
* Tracks batch execution status and handles failures
*/
@Slf4j
@Component
@RequiredArgsConstructor
public class CustomerJobExecutionListener implements JobExecutionListener {
private final BatchLogMapper batchLogMapper;
@Override
public void beforeJob(JobExecution jobExecution) {
log.info("==================================================");
log.info("Job Started: {}", jobExecution.getJobInstance().getJobName());
log.info("Job Execution ID: {}", jobExecution.getId());
log.info("Job Parameters: {}", jobExecution.getJobParameters());
log.info("==================================================");
// Insert initial batch log
BatchLog batchLog = BatchLog.builder()
.jobName(jobExecution.getJobInstance().getJobName())
.jobExecutionId(jobExecution.getId())
.status("STARTED")
.startTime(LocalDateTime.now())
.totalCount(0L)
.successCount(0L)
.failCount(0L)
.build();
batchLogMapper.insertBatchLog(batchLog);
}
@Override
public void afterJob(JobExecution jobExecution) {
long totalCount = 0;
long successCount = 0;
long failCount = 0;
// Aggregate statistics from all steps
jobExecution.getStepExecutions().forEach(stepExecution -> {
log.info("Step: {} - Read: {}, Write: {}, Skip: {}",
stepExecution.getStepName(),
stepExecution.getReadCount(),
stepExecution.getWriteCount(),
stepExecution.getSkipCount());
});
// Calculate totals
totalCount = jobExecution.getStepExecutions().stream()
.mapToLong(step -> step.getReadCount())
.sum();
successCount = jobExecution.getStepExecutions().stream()
.mapToLong(step -> step.getWriteCount())
.sum();
failCount = jobExecution.getStepExecutions().stream()
.mapToLong(step -> step.getSkipCount())
.sum();
String status = jobExecution.getStatus().toString();
String errorMessage = null;
if (jobExecution.getStatus() == BatchStatus.FAILED) {
errorMessage = jobExecution.getAllFailureExceptions().stream()
.map(Throwable::getMessage)
.reduce((a, b) -> a + "; " + b)
.orElse("Unknown error");
}
log.info("==================================================");
log.info("Job Finished: {}", jobExecution.getJobInstance().getJobName());
log.info("Status: {}", status);
log.info("Total Processed: {}", totalCount);
log.info("Success Count: {}", successCount);
log.info("Failed Count: {}", failCount);
log.info("Duration: {} ms", jobExecution.getEndTime().getTime() - jobExecution.getStartTime().getTime());
if (errorMessage != null) {
log.error("Error Message: {}", errorMessage);
}
log.info("==================================================");
// Update batch log
BatchLog batchLog = batchLogMapper.selectBatchLogByExecutionId(jobExecution.getId());
if (batchLog != null) {
batchLog.setStatus(status);
batchLog.setEndTime(LocalDateTime.now());
batchLog.setTotalCount(totalCount);
batchLog.setSuccessCount(successCount);
batchLog.setFailCount(failCount);
batchLog.setErrorMessage(errorMessage);
batchLogMapper.updateBatchLog(batchLog);
}
}
}

@ -0,0 +1,24 @@
package com.example.batch.mapper;
import com.example.batch.domain.BatchLog;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
@Mapper
public interface BatchLogMapper {
/**
* Insert batch execution log
*/
void insertBatchLog(BatchLog batchLog);
/**
* Update batch execution log
*/
void updateBatchLog(BatchLog batchLog);
/**
* Find batch log by job execution ID
*/
BatchLog selectBatchLogByExecutionId(@Param("jobExecutionId") Long jobExecutionId);
}

@ -0,0 +1,37 @@
package com.example.batch.mapper;
import com.example.batch.domain.Customer;
import com.example.batch.domain.CustomerProcessed;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import java.util.List;
@Mapper
public interface CustomerMapper {
/**
* Fetch customers with pagination for batch processing
*/
List<Customer> selectCustomersByPage(@Param("offset") int offset, @Param("limit") int limit);
/**
* Get total customer count for status tracking
*/
Long selectTotalCustomerCount();
/**
* Insert processed customer data
*/
void insertProcessedCustomer(CustomerProcessed customerProcessed);
/**
* Batch insert processed customers for better performance
*/
void insertProcessedCustomerBatch(@Param("list") List<CustomerProcessed> list);
/**
* Update customer status
*/
void updateCustomerStatus(@Param("customerId") Long customerId, @Param("status") String status);
}

@ -0,0 +1,72 @@
package com.example.batch.scheduler;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.quartz.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Quartz Scheduler Configuration
* Supports multi-server clustering to prevent duplicate execution
*/
@Slf4j
@Configuration
@RequiredArgsConstructor
public class BatchScheduler {
/**
* Customer Processing Job Detail
* Job will not be executed concurrently (@DisallowConcurrentExecution)
*/
@Bean
public JobDetail customerBatchJobDetail() {
return JobBuilder.newJob(CustomerBatchQuartzJob.class)
.withIdentity("customerBatchJob", "batch-jobs")
.withDescription("Customer data processing batch job")
.storeDurably()
.build();
}
/**
* Customer Batch Job Trigger
* Runs every day at 2:00 AM
* Change cron expression as needed
*/
@Bean
public Trigger customerBatchJobTrigger() {
// Cron expression: Every day at 2:00 AM
// For testing, you can use: "0/30 * * * * ?" (every 30 seconds)
CronScheduleBuilder scheduleBuilder = CronScheduleBuilder
.cronSchedule("0 0 2 * * ?") // Every day at 2:00 AM
.withMisfireHandlingInstructionDoNothing();
return TriggerBuilder.newTrigger()
.forJob(customerBatchJobDetail())
.withIdentity("customerBatchTrigger", "batch-triggers")
.withDescription("Trigger for customer batch job")
.withSchedule(scheduleBuilder)
.build();
}
/**
* For testing: Manual trigger that runs every 5 minutes
* Comment out in production
*/
/*
@Bean
public Trigger customerBatchTestTrigger() {
SimpleScheduleBuilder scheduleBuilder = SimpleScheduleBuilder
.simpleSchedule()
.withIntervalInMinutes(5)
.repeatForever();
return TriggerBuilder.newTrigger()
.forJob(customerBatchJobDetail())
.withIdentity("customerBatchTestTrigger", "test-triggers")
.withDescription("Test trigger for customer batch job")
.withSchedule(scheduleBuilder)
.build();
}
*/
}

@ -0,0 +1,61 @@
package com.example.batch.scheduler;
import lombok.extern.slf4j.Slf4j;
import org.quartz.DisallowConcurrentExecution;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.scheduling.quartz.QuartzJobBean;
import org.springframework.stereotype.Component;
import java.time.LocalDateTime;
/**
* Quartz Job that triggers Spring Batch Job
* @DisallowConcurrentExecution prevents concurrent execution on the same node
* Quartz clustering prevents execution on multiple nodes
*/
@Slf4j
@Component
@DisallowConcurrentExecution
public class CustomerBatchQuartzJob extends QuartzJobBean {
@Autowired
@Qualifier("asyncJobLauncher")
private JobLauncher jobLauncher;
@Autowired
@Qualifier("customerProcessingJob")
private Job customerProcessingJob;
@Override
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
try {
log.info("========================================");
log.info("Starting scheduled batch job execution");
log.info("Trigger: {}", context.getTrigger().getKey());
log.info("Scheduled Fire Time: {}", context.getScheduledFireTime());
log.info("========================================");
// Create unique job parameters to allow re-execution
JobParameters jobParameters = new JobParametersBuilder()
.addString("requestTime", LocalDateTime.now().toString())
.addLong("timestamp", System.currentTimeMillis())
.toJobParameters();
// Launch batch job
jobLauncher.run(customerProcessingJob, jobParameters);
log.info("Batch job triggered successfully");
} catch (Exception e) {
log.error("Failed to execute batch job", e);
throw new JobExecutionException("Batch job execution failed", e);
}
}
}
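The unique job parameters above are what make re-execution possible: Spring Batch identifies a JobInstance by the job name plus its identifying parameters, so a fresh `timestamp` value yields a new instance instead of a `JobInstanceAlreadyCompleteException`. A minimal standalone sketch of that identity rule (the class and method names here are illustrative, not the Spring Batch API):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch only: models how a JobInstance key is derived from the job name
// plus its identifying parameters. Different parameter values => different
// instance => the job may run again.
public class JobInstanceKeySketch {

    // Build a stable identity key from the job name and sorted parameters.
    static String instanceKey(String jobName, Map<String, String> identifyingParams) {
        return jobName + ":" + new TreeMap<>(identifyingParams);
    }

    public static void main(String[] args) {
        Map<String, String> run1 = Map.of("timestamp", "1700000000000");
        Map<String, String> run2 = Map.of("timestamp", "1700000000001");

        // Distinct timestamps produce distinct keys, so the scheduler can
        // fire the same batch job repeatedly without a duplicate-instance error.
        System.out.println(!instanceKey("customerProcessingJob", run1)
                .equals(instanceKey("customerProcessingJob", run2)));
    }
}
```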

@ -0,0 +1,79 @@
spring:
application:
name: springbatch-test
# DataSource Configuration
datasource:
driver-class-name: org.mariadb.jdbc.Driver
url: jdbc:mariadb://localhost:3306/batch_db?characterEncoding=UTF-8&serverTimezone=Asia/Seoul
username: batch_user
password: batch_password
hikari:
maximum-pool-size: 10
minimum-idle: 5
connection-timeout: 30000
idle-timeout: 600000
max-lifetime: 1800000
# Batch Configuration
batch:
job:
enabled: false # Prevent auto-execution on startup
initialize-schema: never # Use external SQL script
table-prefix: BATCH_
# Quartz Configuration
quartz:
job-store-type: jdbc
jdbc:
initialize-schema: never # Use external SQL script
properties:
org:
quartz:
scheduler:
instanceName: BatchScheduler
instanceId: AUTO
jobStore:
class: org.quartz.impl.jdbcjobstore.JobStoreTX
driverDelegateClass: org.quartz.impl.jdbcjobstore.StdJDBCDelegate
tablePrefix: QRTZ_
isClustered: true # Enable clustering for multi-server
clusterCheckinInterval: 20000
useProperties: false
threadPool:
class: org.quartz.simpl.SimpleThreadPool
threadCount: 10
threadPriority: 5
threadsInheritContextClassLoaderOfInitializingThread: true
# MyBatis Configuration
mybatis:
mapper-locations: classpath:mapper/**/*.xml
type-aliases-package: com.example.batch.domain
configuration:
map-underscore-to-camel-case: true
cache-enabled: false
lazy-loading-enabled: false
default-fetch-size: 1000
default-statement-timeout: 30
# Batch Processing Configuration
batch:
chunk-size: 5000 # Process 5000 records at a time
page-size: 5000
max-thread-pool-size: 5
skip-limit: 100 # Allow up to 100 skip errors
# Server Configuration
server:
port: 8080
# Logging
logging:
level:
root: INFO
com.example.batch: DEBUG
org.springframework.batch: DEBUG
org.quartz: INFO
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
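A quick back-of-envelope check on the `chunk-size: 5000` setting above: Spring Batch commits once per chunk, so the 30-million-row target dataset implies 6,000 read-process-write-commit cycles. A tiny sketch of that arithmetic (class name is illustrative):

```java
// Sketch: how many chunks (and therefore commits) a job needs
// to cover a table of totalRows at a given chunk size.
public class ChunkMath {

    // Ceiling division: partial final chunks still count.
    static long chunkCount(long totalRows, long chunkSize) {
        return (totalRows + chunkSize - 1) / chunkSize;
    }

    public static void main(String[] args) {
        System.out.println(chunkCount(30_000_000L, 5_000L)); // 6000 commits
        System.out.println(chunkCount(10_001L, 5_000L));     // 3 (last chunk has 1 row)
    }
}
```

Raising the chunk size reduces commit overhead but grows each transaction and the amount of work redone on a rollback.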

@ -0,0 +1,168 @@
-- =============================================
-- Sample Data Generation Script
-- =============================================
-- Small dataset for initial testing (5 records already in schema.sql)
-- Medium dataset (1,000 records) - For basic testing
-- Uncomment to use:
/*
INSERT INTO TB_CUSTOMER (CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS)
SELECT
CONCAT('Customer ', @row := @row + 1) as CUSTOMER_NAME,
CONCAT('customer', @row, '@example.com') as EMAIL,
CONCAT('010-', LPAD(FLOOR(RAND() * 10000), 4, '0'), '-', LPAD(FLOOR(RAND() * 10000), 4, '0')) as PHONE,
CASE
WHEN @row % 5 = 0 THEN 'Seoul, Korea'
WHEN @row % 5 = 1 THEN 'Busan, Korea'
WHEN @row % 5 = 2 THEN 'Incheon, Korea'
WHEN @row % 5 = 3 THEN 'Daegu, Korea'
ELSE 'Daejeon, Korea'
END as ADDRESS,
CASE WHEN @row % 10 = 0 THEN 'INACTIVE' ELSE 'ACTIVE' END as STATUS
FROM (SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) t1,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) t2,
(SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5
UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10) t3,
(SELECT @row := 0) r
LIMIT 1000;
*/
-- =============================================
-- Large Dataset Generation Procedure
-- =============================================
DELIMITER $$
DROP PROCEDURE IF EXISTS generate_customers$$
CREATE PROCEDURE generate_customers(IN num_rows INT)
BEGIN
DECLARE i INT DEFAULT 0;
DECLARE batch_size INT DEFAULT 10000;
DECLARE start_time DATETIME;
DECLARE curr_time DATETIME; -- note: "current_time" is a reserved word in MariaDB/MySQL and cannot be used as a variable name
SET start_time = NOW();
-- Disable keys for faster insertion
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
SET AUTOCOMMIT = 0;
WHILE i < num_rows DO
INSERT INTO TB_CUSTOMER (CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS)
SELECT
CONCAT('Customer ', i + seq) as CUSTOMER_NAME,
CONCAT('customer', i + seq, '@example.com') as EMAIL,
CONCAT('010-', LPAD(FLOOR(RAND() * 10000), 4, '0'), '-', LPAD(FLOOR(RAND() * 10000), 4, '0')) as PHONE,
CASE
WHEN (i + seq) % 10 = 0 THEN 'Seoul, Korea'
WHEN (i + seq) % 10 = 1 THEN 'Busan, Korea'
WHEN (i + seq) % 10 = 2 THEN 'Incheon, Korea'
WHEN (i + seq) % 10 = 3 THEN 'Daegu, Korea'
WHEN (i + seq) % 10 = 4 THEN 'Daejeon, Korea'
WHEN (i + seq) % 10 = 5 THEN 'Gwangju, Korea'
WHEN (i + seq) % 10 = 6 THEN 'Ulsan, Korea'
WHEN (i + seq) % 10 = 7 THEN 'Suwon, Korea'
WHEN (i + seq) % 10 = 8 THEN 'Changwon, Korea'
ELSE 'Goyang, Korea'
END as ADDRESS,
-- 10% INACTIVE, 90% ACTIVE
CASE WHEN (i + seq) % 10 = 0 THEN 'INACTIVE' ELSE 'ACTIVE' END as STATUS
FROM (
SELECT @row := @row + 1 as seq
FROM (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t1,
(SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t2,
(SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t3,
(SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t4,
(SELECT @row := 0) r
LIMIT batch_size
) seq_table
WHERE i + seq <= num_rows;
COMMIT;
SET i = i + batch_size;
-- Progress report
SET curr_time = NOW();
SELECT
CONCAT('Progress: ', i, ' / ', num_rows, ' records (', ROUND(i * 100.0 / num_rows, 2), '%)') AS Status,
CONCAT('Elapsed: ', TIMESTAMPDIFF(SECOND, start_time, curr_time), ' seconds') AS Time,
-- GREATEST(..., 1) guards against division by zero when the first batch finishes within the same second
CONCAT('Speed: ', ROUND(i / GREATEST(TIMESTAMPDIFF(SECOND, start_time, curr_time), 1), 2), ' records/sec') AS Speed;
END WHILE;
-- Re-enable keys
SET FOREIGN_KEY_CHECKS = 1;
SET UNIQUE_CHECKS = 1;
SET AUTOCOMMIT = 1;
-- Final summary
SELECT
CONCAT('Completed! Generated ', num_rows, ' records') AS Summary,
CONCAT('Total time: ', TIMESTAMPDIFF(SECOND, start_time, NOW()), ' seconds') AS Duration;
END$$
DELIMITER ;
-- =============================================
-- Usage Examples
-- =============================================
-- Generate 10,000 records (for basic testing - ~2 seconds)
-- CALL generate_customers(10000);
-- Generate 100,000 records (for medium testing - ~20 seconds)
-- CALL generate_customers(100000);
-- Generate 1,000,000 records (for large testing - ~3 minutes)
-- CALL generate_customers(1000000);
-- Generate 10,000,000 records (for very large testing - ~30 minutes)
-- CALL generate_customers(10000000);
-- Generate 30,000,000 records (for production scale testing - ~90 minutes)
-- WARNING: This will take significant time and disk space
-- CALL generate_customers(30000000);
-- =============================================
-- Verification Queries
-- =============================================
-- Check total count
-- SELECT COUNT(*) as TOTAL_RECORDS FROM TB_CUSTOMER;
-- Check status distribution
-- SELECT STATUS, COUNT(*) as COUNT FROM TB_CUSTOMER GROUP BY STATUS;
-- Check sample data
-- SELECT * FROM TB_CUSTOMER ORDER BY CUSTOMER_ID DESC LIMIT 10;
-- Estimate table size
-- SELECT
-- TABLE_NAME,
-- ROUND(((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024), 2) AS SIZE_MB,
-- TABLE_ROWS
-- FROM information_schema.TABLES
-- WHERE TABLE_SCHEMA = 'batch_db' AND TABLE_NAME = 'TB_CUSTOMER';
-- =============================================
-- Cleanup (if needed)
-- =============================================
-- Delete all test data (keep only original 5 records)
-- DELETE FROM TB_CUSTOMER WHERE CUSTOMER_ID > 5;
-- Reset auto increment
-- ALTER TABLE TB_CUSTOMER AUTO_INCREMENT = 6;
-- Truncate all customer data
-- TRUNCATE TABLE TB_CUSTOMER;
-- TRUNCATE TABLE TB_CUSTOMER_PROCESSED;
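The generator's `(i + seq) % 10 = 0` rule above deterministically marks every tenth row INACTIVE, so any generated dataset is exactly 10% INACTIVE / 90% ACTIVE. A small sanity-check sketch of that rule (illustrative class name, not part of the repo):

```java
// Sketch: mirrors the CASE WHEN (i + seq) % 10 = 0 THEN 'INACTIVE' rule
// from the generate_customers procedure and counts the INACTIVE rows.
public class StatusDistributionSketch {

    static long inactiveCount(long numRows) {
        long count = 0;
        for (long seq = 1; seq <= numRows; seq++) {
            if (seq % 10 == 0) {
                count++; // every tenth sequence number is INACTIVE
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(inactiveCount(10_000)); // 1000 of 10,000 rows
    }
}
```

This matters for the batch tests above because the reader filters on `STATUS = 'ACTIVE'`, so only 90% of generated rows actually flow through the job.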

@ -0,0 +1,320 @@
-- =============================================
-- Spring Batch Metadata Tables (MariaDB)
-- =============================================
CREATE TABLE IF NOT EXISTS BATCH_JOB_INSTANCE (
JOB_INSTANCE_ID BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
VERSION BIGINT,
JOB_NAME VARCHAR(100) NOT NULL,
JOB_KEY VARCHAR(32) NOT NULL,
CONSTRAINT JOB_INST_UN UNIQUE (JOB_NAME, JOB_KEY)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_JOB_EXECUTION (
JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
VERSION BIGINT,
JOB_INSTANCE_ID BIGINT NOT NULL,
CREATE_TIME DATETIME NOT NULL,
START_TIME DATETIME DEFAULT NULL,
END_TIME DATETIME DEFAULT NULL,
STATUS VARCHAR(10),
EXIT_CODE VARCHAR(2500),
EXIT_MESSAGE VARCHAR(2500),
LAST_UPDATED DATETIME,
JOB_CONFIGURATION_LOCATION VARCHAR(2500) NULL,
CONSTRAINT JOB_INST_EXEC_FK FOREIGN KEY (JOB_INSTANCE_ID)
REFERENCES BATCH_JOB_INSTANCE(JOB_INSTANCE_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_JOB_EXECUTION_PARAMS (
JOB_EXECUTION_ID BIGINT NOT NULL,
TYPE_CD VARCHAR(6) NOT NULL,
KEY_NAME VARCHAR(100) NOT NULL,
STRING_VAL VARCHAR(250),
DATE_VAL DATETIME DEFAULT NULL,
LONG_VAL BIGINT,
DOUBLE_VAL DOUBLE PRECISION,
IDENTIFYING CHAR(1) NOT NULL,
CONSTRAINT JOB_EXEC_PARAMS_FK FOREIGN KEY (JOB_EXECUTION_ID)
REFERENCES BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_STEP_EXECUTION (
STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
VERSION BIGINT NOT NULL,
STEP_NAME VARCHAR(100) NOT NULL,
JOB_EXECUTION_ID BIGINT NOT NULL,
START_TIME DATETIME NOT NULL,
END_TIME DATETIME DEFAULT NULL,
STATUS VARCHAR(10),
COMMIT_COUNT BIGINT,
READ_COUNT BIGINT,
FILTER_COUNT BIGINT,
WRITE_COUNT BIGINT,
READ_SKIP_COUNT BIGINT,
WRITE_SKIP_COUNT BIGINT,
PROCESS_SKIP_COUNT BIGINT,
ROLLBACK_COUNT BIGINT,
EXIT_CODE VARCHAR(2500),
EXIT_MESSAGE VARCHAR(2500),
LAST_UPDATED DATETIME,
CONSTRAINT JOB_EXEC_STEP_FK FOREIGN KEY (JOB_EXECUTION_ID)
REFERENCES BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_STEP_EXECUTION_CONTEXT (
STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
SHORT_CONTEXT VARCHAR(2500) NOT NULL,
SERIALIZED_CONTEXT TEXT,
CONSTRAINT STEP_EXEC_CTX_FK FOREIGN KEY (STEP_EXECUTION_ID)
REFERENCES BATCH_STEP_EXECUTION(STEP_EXECUTION_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_JOB_EXECUTION_CONTEXT (
JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
SHORT_CONTEXT VARCHAR(2500) NOT NULL,
SERIALIZED_CONTEXT TEXT,
CONSTRAINT JOB_EXEC_CTX_FK FOREIGN KEY (JOB_EXECUTION_ID)
REFERENCES BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS BATCH_STEP_EXECUTION_SEQ (
ID BIGINT NOT NULL,
UNIQUE_KEY CHAR(1) NOT NULL,
CONSTRAINT UNIQUE_KEY_UN UNIQUE (UNIQUE_KEY)
) ENGINE=InnoDB;
INSERT INTO BATCH_STEP_EXECUTION_SEQ (ID, UNIQUE_KEY) SELECT 0, '0' FROM DUAL WHERE NOT EXISTS(SELECT * FROM BATCH_STEP_EXECUTION_SEQ);
CREATE TABLE IF NOT EXISTS BATCH_JOB_EXECUTION_SEQ (
ID BIGINT NOT NULL,
UNIQUE_KEY CHAR(1) NOT NULL,
CONSTRAINT UNIQUE_KEY_UN_JOB UNIQUE (UNIQUE_KEY)
) ENGINE=InnoDB;
INSERT INTO BATCH_JOB_EXECUTION_SEQ (ID, UNIQUE_KEY) SELECT 0, '0' FROM DUAL WHERE NOT EXISTS(SELECT * FROM BATCH_JOB_EXECUTION_SEQ);
CREATE TABLE IF NOT EXISTS BATCH_JOB_SEQ (
ID BIGINT NOT NULL,
UNIQUE_KEY CHAR(1) NOT NULL,
CONSTRAINT UNIQUE_KEY_UN_JOB_SEQ UNIQUE (UNIQUE_KEY)
) ENGINE=InnoDB;
INSERT INTO BATCH_JOB_SEQ (ID, UNIQUE_KEY) SELECT 0, '0' FROM DUAL WHERE NOT EXISTS(SELECT * FROM BATCH_JOB_SEQ);
-- =============================================
-- Quartz Scheduler Tables
-- =============================================
CREATE TABLE IF NOT EXISTS QRTZ_JOB_DETAILS (
SCHED_NAME VARCHAR(120) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
JOB_CLASS_NAME VARCHAR(250) NOT NULL,
IS_DURABLE TINYINT(1) NOT NULL,
IS_NONCONCURRENT TINYINT(1) NOT NULL,
IS_UPDATE_DATA TINYINT(1) NOT NULL,
REQUESTS_RECOVERY TINYINT(1) NOT NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME, JOB_NAME, JOB_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
NEXT_FIRE_TIME BIGINT(13) NULL,
PREV_FIRE_TIME BIGINT(13) NULL,
PRIORITY INTEGER NULL,
TRIGGER_STATE VARCHAR(16) NOT NULL,
TRIGGER_TYPE VARCHAR(8) NOT NULL,
START_TIME BIGINT(13) NOT NULL,
END_TIME BIGINT(13) NULL,
CALENDAR_NAME VARCHAR(200) NULL,
MISFIRE_INSTR SMALLINT(2) NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP)
REFERENCES QRTZ_JOB_DETAILS(SCHED_NAME, JOB_NAME, JOB_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_SIMPLE_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
REPEAT_COUNT BIGINT(7) NOT NULL,
REPEAT_INTERVAL BIGINT(12) NOT NULL,
TIMES_TRIGGERED BIGINT(10) NOT NULL,
PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_CRON_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
CRON_EXPRESSION VARCHAR(120) NOT NULL,
TIME_ZONE_ID VARCHAR(80),
PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_SIMPROP_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
STR_PROP_1 VARCHAR(512) NULL,
STR_PROP_2 VARCHAR(512) NULL,
STR_PROP_3 VARCHAR(512) NULL,
INT_PROP_1 INT NULL,
INT_PROP_2 INT NULL,
LONG_PROP_1 BIGINT NULL,
LONG_PROP_2 BIGINT NULL,
DEC_PROP_1 NUMERIC(13,4) NULL,
DEC_PROP_2 NUMERIC(13,4) NULL,
BOOL_PROP_1 TINYINT(1) NULL,
BOOL_PROP_2 TINYINT(1) NULL,
PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_BLOB_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
BLOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_CALENDARS (
SCHED_NAME VARCHAR(120) NOT NULL,
CALENDAR_NAME VARCHAR(200) NOT NULL,
CALENDAR BLOB NOT NULL,
PRIMARY KEY (SCHED_NAME, CALENDAR_NAME)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_PAUSED_TRIGGER_GRPS (
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
PRIMARY KEY (SCHED_NAME, TRIGGER_GROUP)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_FIRED_TRIGGERS (
SCHED_NAME VARCHAR(120) NOT NULL,
ENTRY_ID VARCHAR(95) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
FIRED_TIME BIGINT(13) NOT NULL,
SCHED_TIME BIGINT(13) NOT NULL,
PRIORITY INTEGER NOT NULL,
STATE VARCHAR(16) NOT NULL,
JOB_NAME VARCHAR(200) NULL,
JOB_GROUP VARCHAR(200) NULL,
IS_NONCONCURRENT TINYINT(1) NULL,
REQUESTS_RECOVERY TINYINT(1) NULL,
PRIMARY KEY (SCHED_NAME, ENTRY_ID)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_SCHEDULER_STATE (
SCHED_NAME VARCHAR(120) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
LAST_CHECKIN_TIME BIGINT(13) NOT NULL,
CHECKIN_INTERVAL BIGINT(13) NOT NULL,
PRIMARY KEY (SCHED_NAME, INSTANCE_NAME)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS QRTZ_LOCKS (
SCHED_NAME VARCHAR(120) NOT NULL,
LOCK_NAME VARCHAR(40) NOT NULL,
PRIMARY KEY (SCHED_NAME, LOCK_NAME)
) ENGINE=InnoDB;
-- Create indexes for Quartz
-- IF NOT EXISTS (MariaDB 10.1.4+) keeps the script re-runnable, matching the CREATE TABLE statements above
CREATE INDEX IF NOT EXISTS IDX_QRTZ_J_REQ_RECOVERY ON QRTZ_JOB_DETAILS(SCHED_NAME, REQUESTS_RECOVERY);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_J_GRP ON QRTZ_JOB_DETAILS(SCHED_NAME, JOB_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_J ON QRTZ_TRIGGERS(SCHED_NAME, JOB_NAME, JOB_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_JG ON QRTZ_TRIGGERS(SCHED_NAME, JOB_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_C ON QRTZ_TRIGGERS(SCHED_NAME, CALENDAR_NAME);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_G ON QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_STATE ON QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_STATE);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_N_STATE ON QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP, TRIGGER_STATE);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_N_G_STATE ON QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_GROUP, TRIGGER_STATE);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_NEXT_FIRE_TIME ON QRTZ_TRIGGERS(SCHED_NAME, NEXT_FIRE_TIME);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_NFT_ST ON QRTZ_TRIGGERS(SCHED_NAME, TRIGGER_STATE, NEXT_FIRE_TIME);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_NFT_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME, MISFIRE_INSTR, NEXT_FIRE_TIME);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_NFT_ST_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME, MISFIRE_INSTR, NEXT_FIRE_TIME, TRIGGER_STATE);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_T_NFT_ST_MISFIRE_GRP ON QRTZ_TRIGGERS(SCHED_NAME, MISFIRE_INSTR, NEXT_FIRE_TIME, TRIGGER_GROUP, TRIGGER_STATE);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_TRIG_INST_NAME ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, INSTANCE_NAME);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_INST_JOB_REQ_RCVRY ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, INSTANCE_NAME, REQUESTS_RECOVERY);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_J_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, JOB_NAME, JOB_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_JG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, JOB_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_T_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP);
CREATE INDEX IF NOT EXISTS IDX_QRTZ_FT_TG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME, TRIGGER_GROUP);
-- =============================================
-- Business Tables - Sample Customer Data
-- =============================================
-- Source customer data table (simulate 30 million records)
CREATE TABLE IF NOT EXISTS TB_CUSTOMER (
CUSTOMER_ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
CUSTOMER_NAME VARCHAR(100) NOT NULL,
EMAIL VARCHAR(100),
PHONE VARCHAR(20),
ADDRESS VARCHAR(255),
STATUS VARCHAR(20) DEFAULT 'ACTIVE',
CREATED_AT DATETIME DEFAULT CURRENT_TIMESTAMP,
UPDATED_AT DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX IDX_STATUS (STATUS),
INDEX IDX_CREATED_AT (CREATED_AT)
) ENGINE=InnoDB;
-- Processed customer data table
CREATE TABLE IF NOT EXISTS TB_CUSTOMER_PROCESSED (
PROCESS_ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
CUSTOMER_ID BIGINT NOT NULL,
CUSTOMER_NAME VARCHAR(100) NOT NULL,
EMAIL VARCHAR(100),
PHONE VARCHAR(20),
PROCESSED_DATA TEXT,
API_CALL_STATUS VARCHAR(20),
API_RESPONSE TEXT,
PROCESSED_AT DATETIME DEFAULT CURRENT_TIMESTAMP,
INDEX IDX_CUSTOMER_ID (CUSTOMER_ID),
INDEX IDX_API_CALL_STATUS (API_CALL_STATUS)
) ENGINE=InnoDB;
-- Batch execution log table
CREATE TABLE IF NOT EXISTS TB_BATCH_LOG (
LOG_ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
JOB_NAME VARCHAR(100) NOT NULL,
JOB_EXECUTION_ID BIGINT,
STATUS VARCHAR(20),
START_TIME DATETIME,
END_TIME DATETIME,
TOTAL_COUNT BIGINT DEFAULT 0,
SUCCESS_COUNT BIGINT DEFAULT 0,
FAIL_COUNT BIGINT DEFAULT 0,
ERROR_MESSAGE TEXT,
CREATED_AT DATETIME DEFAULT CURRENT_TIMESTAMP,
INDEX IDX_JOB_NAME (JOB_NAME),
INDEX IDX_STATUS (STATUS)
) ENGINE=InnoDB;
-- Insert sample data (for testing)
INSERT INTO TB_CUSTOMER (CUSTOMER_NAME, EMAIL, PHONE, ADDRESS, STATUS) VALUES
('Customer 1', 'customer1@example.com', '010-1111-1111', 'Seoul, Korea', 'ACTIVE'),
('Customer 2', 'customer2@example.com', '010-2222-2222', 'Busan, Korea', 'ACTIVE'),
('Customer 3', 'customer3@example.com', '010-3333-3333', 'Incheon, Korea', 'ACTIVE'),
('Customer 4', 'customer4@example.com', '010-4444-4444', 'Daegu, Korea', 'INACTIVE'),
('Customer 5', 'customer5@example.com', '010-5555-5555', 'Daejeon, Korea', 'ACTIVE');

@ -0,0 +1,77 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.example.batch.mapper.BatchLogMapper">
<resultMap id="batchLogResultMap" type="com.example.batch.domain.BatchLog">
<id property="logId" column="LOG_ID"/>
<result property="jobName" column="JOB_NAME"/>
<result property="jobExecutionId" column="JOB_EXECUTION_ID"/>
<result property="status" column="STATUS"/>
<result property="startTime" column="START_TIME"/>
<result property="endTime" column="END_TIME"/>
<result property="totalCount" column="TOTAL_COUNT"/>
<result property="successCount" column="SUCCESS_COUNT"/>
<result property="failCount" column="FAIL_COUNT"/>
<result property="errorMessage" column="ERROR_MESSAGE"/>
<result property="createdAt" column="CREATED_AT"/>
</resultMap>
<!-- Insert batch log -->
<insert id="insertBatchLog" parameterType="com.example.batch.domain.BatchLog"
useGeneratedKeys="true" keyProperty="logId">
INSERT INTO TB_BATCH_LOG (
JOB_NAME,
JOB_EXECUTION_ID,
STATUS,
START_TIME,
END_TIME,
TOTAL_COUNT,
SUCCESS_COUNT,
FAIL_COUNT,
ERROR_MESSAGE
) VALUES (
#{jobName},
#{jobExecutionId},
#{status},
#{startTime},
#{endTime},
#{totalCount},
#{successCount},
#{failCount},
#{errorMessage}
)
</insert>
<!-- Update batch log -->
<update id="updateBatchLog">
UPDATE TB_BATCH_LOG
SET STATUS = #{status},
END_TIME = #{endTime},
TOTAL_COUNT = #{totalCount},
SUCCESS_COUNT = #{successCount},
FAIL_COUNT = #{failCount},
ERROR_MESSAGE = #{errorMessage}
WHERE LOG_ID = #{logId}
</update>
<!-- Select batch log by execution ID -->
<select id="selectBatchLogByExecutionId" resultMap="batchLogResultMap">
SELECT
LOG_ID,
JOB_NAME,
JOB_EXECUTION_ID,
STATUS,
START_TIME,
END_TIME,
TOTAL_COUNT,
SUCCESS_COUNT,
FAIL_COUNT,
ERROR_MESSAGE,
CREATED_AT
FROM TB_BATCH_LOG
WHERE JOB_EXECUTION_ID = #{jobExecutionId}
</select>
</mapper>

@ -0,0 +1,100 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.example.batch.mapper.CustomerMapper">
<resultMap id="customerResultMap" type="com.example.batch.domain.Customer">
<id property="customerId" column="CUSTOMER_ID"/>
<result property="customerName" column="CUSTOMER_NAME"/>
<result property="email" column="EMAIL"/>
<result property="phone" column="PHONE"/>
<result property="address" column="ADDRESS"/>
<result property="status" column="STATUS"/>
<result property="createdAt" column="CREATED_AT"/>
<result property="updatedAt" column="UPDATED_AT"/>
</resultMap>
<!-- Pagination query for chunk processing -->
<select id="selectCustomersByPage" resultMap="customerResultMap">
SELECT
CUSTOMER_ID,
CUSTOMER_NAME,
EMAIL,
PHONE,
ADDRESS,
STATUS,
CREATED_AT,
UPDATED_AT
FROM TB_CUSTOMER
WHERE STATUS = 'ACTIVE'
ORDER BY CUSTOMER_ID
LIMIT #{limit} OFFSET #{offset}
</select>
<!-- Get total count -->
<select id="selectTotalCustomerCount" resultType="java.lang.Long">
SELECT COUNT(*)
FROM TB_CUSTOMER
WHERE STATUS = 'ACTIVE'
</select>
<!-- Insert single processed customer -->
<insert id="insertProcessedCustomer" parameterType="com.example.batch.domain.CustomerProcessed"
useGeneratedKeys="true" keyProperty="processId">
INSERT INTO TB_CUSTOMER_PROCESSED (
CUSTOMER_ID,
CUSTOMER_NAME,
EMAIL,
PHONE,
PROCESSED_DATA,
API_CALL_STATUS,
API_RESPONSE,
PROCESSED_AT
) VALUES (
#{customerId},
#{customerName},
#{email},
#{phone},
#{processedData},
#{apiCallStatus},
#{apiResponse},
NOW()
)
</insert>
<!-- Batch insert for better performance -->
<insert id="insertProcessedCustomerBatch">
INSERT INTO TB_CUSTOMER_PROCESSED (
CUSTOMER_ID,
CUSTOMER_NAME,
EMAIL,
PHONE,
PROCESSED_DATA,
API_CALL_STATUS,
API_RESPONSE,
PROCESSED_AT
) VALUES
<foreach collection="list" item="item" separator=",">
(
#{item.customerId},
#{item.customerName},
#{item.email},
#{item.phone},
#{item.processedData},
#{item.apiCallStatus},
#{item.apiResponse},
NOW()
)
</foreach>
</insert>
<!-- Update customer status -->
<update id="updateCustomerStatus">
UPDATE TB_CUSTOMER
SET STATUS = #{status},
UPDATED_AT = NOW()
WHERE CUSTOMER_ID = #{customerId}
</update>
</mapper>
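A caller drives `selectCustomersByPage` above by stepping the offset in page-size increments, as sketched below (illustrative class, not part of the repo). Note that `LIMIT ... OFFSET` degrades on large tables because the server still walks past the skipped rows; for the 30-million-row scale targeted here, keyset pagination (`WHERE CUSTOMER_ID > #{lastId} ORDER BY CUSTOMER_ID LIMIT #{limit}`) is a common alternative.

```java
// Sketch: computing the OFFSET values fed to selectCustomersByPage.
public class PagingSketch {

    // Page N starts after N full pages of rows.
    static int offsetForPage(int page, int pageSize) {
        return page * pageSize;
    }

    public static void main(String[] args) {
        int pageSize = 5_000; // matches batch.page-size in application.yml
        System.out.println(offsetForPage(0, pageSize)); // 0
        System.out.println(offsetForPage(3, pageSize)); // 15000
    }
}
```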