# PlanitAI KPI Devlog v12: Deployment Strategy

> Series: The PlanitAI KPI Development Journey (12/16)
> Written: December 2024

## Overview

Operating a SaaS product reliably requires a systematic deployment strategy. In this post we design PlanitAI KPI's CI/CD pipeline, containerization, and infrastructure.

---

## 1. Deployment Architecture

### 1.1 Overall Infrastructure

```
                 CDN (CloudFront)
               static asset cache
                       │
                       ▼
           Application Load Balancer
                SSL termination
                       │
          ┌────────────┴────────────┐
          ▼                         ▼
  ECS Cluster (Frontend)    ECS Cluster (Backend)
        Next.js                   FastAPI
                                     │
                   ┌─────────────────┼─────────────────┐
                   ▼                 ▼                 ▼
             RDS Postgres    ElastiCache Redis    S3 (static
                                                  files, exports)
```

### 1.2 Environments

| Environment | Purpose | Instances |
|------|------|---------|
| Development | Developer local machines | Docker Compose |
| Staging | QA / testing | ECS (minimal footprint) |
| Production | Live service | ECS (auto scaling) |

---

## 2. Containerization (Docker)

### 2.1 Backend Dockerfile

```dockerfile
# backend/Dockerfile
FROM python:3.11-slim AS base

# Build argument
ARG ENVIRONMENT=production

# System packages
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install Poetry
ENV POETRY_VERSION=1.7.1
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="/root/.local/bin:$PATH"

WORKDIR /app

# Copy dependency manifests first so this layer caches well
COPY pyproject.toml poetry.lock ./

# Install dependencies (skip dev dependencies in production;
# `--only main` replaces the `--no-dev` flag deprecated in Poetry 1.2+)
RUN if [ "$ENVIRONMENT" = "production" ]; then \
        poetry config virtualenvs.create false && \
        poetry install --only main --no-interaction --no-ansi; \
    else \
        poetry config virtualenvs.create false && \
        poetry install --no-interaction --no-ansi; \
    fi

# Copy application source
COPY . .

# Run as a non-root user
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000

# Container health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Default command (development)
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"]

# Production stage: run under Gunicorn with Uvicorn workers
FROM base AS production
CMD ["gunicorn", "src.main:app", \
     "-w", "4", \
     "-k", "uvicorn.workers.UvicornWorker", \
     "-b", "0.0.0.0:8000", \
     "--access-logfile", "-", \
     "--error-logfile", "-"]
```

### 2.2 Frontend Dockerfile

```dockerfile
# frontend/Dockerfile
FROM node:20-alpine AS base

# Dependency install stage
FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install --frozen-lockfile

# Build stage
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
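# NOTE (assumption): the runner stage below copies .next/standalone,
# which only exists when next.config.js sets `output: 'standalone'`.
# Without that setting, Next.js does not emit the standalone bundle.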
# Build-time environment variable
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL

# Build
RUN npm install -g pnpm && pnpm build

# Production stage
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production

# Non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy only what the standalone server needs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]
```

### 2.3 Docker Compose (Development)

```yaml
# docker-compose.yml
version: '3.8'

services:
  # PostgreSQL
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: planitai_kpi
      POSTGRES_USER: planitai
      POSTGRES_PASSWORD: ${DB_PASSWORD:-devpassword}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U planitai -d planitai_kpi"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Backend
  backend:
    build:
      context: ./backend
      args:
        ENVIRONMENT: development
    environment:
      DATABASE_URL: postgresql://planitai:${DB_PASSWORD:-devpassword}@postgres:5432/planitai_kpi
      REDIS_URL: redis://redis:6379
      GEMINI_API_KEY: ${GEMINI_API_KEY}
      JWT_SECRET: ${JWT_SECRET:-devsecret}
      ENVIRONMENT: development
    volumes:
      - ./backend:/app
      - /app/__pycache__
    ports:
      - "8000:8000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: uvicorn src.main:app --host 0.0.0.0 --port 8000 --reload

  # Frontend (the `deps` image plus a bind mount gives hot reload in dev)
  frontend:
    build:
      context: ./frontend
      target: deps
    environment:
      NEXT_PUBLIC_API_URL: http://localhost:8000
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next
    ports:
      - "3000:3000"
    depends_on:
      - backend
    command: pnpm dev

  # Nginx (reverse proxy)
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.dev.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - frontend
      - backend

volumes:
  postgres_data:
  redis_data:
```

---

## 3. CI/CD Pipeline

### 3.1 GitHub Actions Workflow

```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  AWS_REGION: ap-northeast-1
  ECR_REPOSITORY_BACKEND: planitai-kpi-backend
  ECR_REPOSITORY_FRONTEND: planitai-kpi-frontend
  ECS_CLUSTER: planitai-kpi-cluster
  ECS_SERVICE_BACKEND: planitai-kpi-backend-service
  ECS_SERVICE_FRONTEND: planitai-kpi-frontend-service

jobs:
  # ─────────────────────────────────────────
  # Backend tests
  # ─────────────────────────────────────────
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test   # the test step connects to a database named "test"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'

      - name: Install dependencies
        working-directory: ./backend
        run: |
          pip install poetry
          poetry install

      - name: Run tests
        working-directory: ./backend
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
          REDIS_URL: redis://localhost:6379
        run: |
          poetry run pytest tests/ -v --cov=src --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          file: ./backend/coverage.xml

  # ─────────────────────────────────────────
  # Frontend tests
  # ─────────────────────────────────────────
  test-frontend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # pnpm must be on PATH before setup-node's `cache: 'pnpm'` runs
      - name: Install pnpm
        run: npm install -g pnpm

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'
          cache-dependency-path: frontend/pnpm-lock.yaml

      - name: Install dependencies
        working-directory: ./frontend
        run: pnpm install --frozen-lockfile
      - name: Lint
        working-directory: ./frontend
        run: pnpm lint

      - name: Type check
        working-directory: ./frontend
        run: pnpm type-check

      - name: Run tests
        working-directory: ./frontend
        run: pnpm test

  # ─────────────────────────────────────────
  # Build & push (Backend)
  # ─────────────────────────────────────────
  build-backend:
    needs: [test]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    outputs:
      image: ${{ steps.build-image.outputs.image }}
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image
        id: build-image
        working-directory: ./backend
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build \
            --target production \
            -t $ECR_REGISTRY/$ECR_REPOSITORY_BACKEND:$IMAGE_TAG \
            -t $ECR_REGISTRY/$ECR_REPOSITORY_BACKEND:latest \
            .
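          # The commit-SHA tag is immutable and makes every deploy traceable;
          # `latest` is the convenience tag the ECS task definition pulls.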
          docker push $ECR_REGISTRY/$ECR_REPOSITORY_BACKEND:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY_BACKEND:latest
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY_BACKEND:$IMAGE_TAG" >> $GITHUB_OUTPUT

  # ─────────────────────────────────────────
  # Build & push (Frontend)
  # ─────────────────────────────────────────
  build-frontend:
    needs: [test-frontend]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
    outputs:
      image: ${{ steps.build-image.outputs.image }}
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image
        id: build-image
        working-directory: ./frontend
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
          NEXT_PUBLIC_API_URL: ${{ secrets.NEXT_PUBLIC_API_URL }}
        run: |
          docker build \
            --build-arg NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL \
            -t $ECR_REGISTRY/$ECR_REPOSITORY_FRONTEND:$IMAGE_TAG \
            -t $ECR_REGISTRY/$ECR_REPOSITORY_FRONTEND:latest \
            .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY_FRONTEND:$IMAGE_TAG
          docker push $ECR_REGISTRY/$ECR_REPOSITORY_FRONTEND:latest
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY_FRONTEND:$IMAGE_TAG" >> $GITHUB_OUTPUT

  # ─────────────────────────────────────────
  # Deploy to staging
  # ─────────────────────────────────────────
  deploy-staging:
    needs: [build-backend, build-frontend]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    environment: staging
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Deploy to ECS (Backend)
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }}-staging \
            --service ${{ env.ECS_SERVICE_BACKEND }}-staging \
            --force-new-deployment

      - name: Deploy to ECS (Frontend)
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }}-staging \
            --service ${{ env.ECS_SERVICE_FRONTEND }}-staging \
            --force-new-deployment

      - name: Wait for deployment
        run: |
          aws ecs wait services-stable \
            --cluster ${{ env.ECS_CLUSTER }}-staging \
            --services ${{ env.ECS_SERVICE_BACKEND }}-staging ${{ env.ECS_SERVICE_FRONTEND }}-staging

  # ─────────────────────────────────────────
  # Deploy to production
  # ─────────────────────────────────────────
  deploy-production:
    needs: [build-backend, build-frontend]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Deploy to ECS (Backend)
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }} \
            --service ${{ env.ECS_SERVICE_BACKEND }} \
            --force-new-deployment

      - name: Deploy to ECS (Frontend)
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }} \
            --service ${{ env.ECS_SERVICE_FRONTEND }} \
            --force-new-deployment

      - name: Wait for deployment
        run: |
          aws ecs wait services-stable \
            --cluster ${{ env.ECS_CLUSTER }} \
            --services ${{ env.ECS_SERVICE_BACKEND }} ${{ env.ECS_SERVICE_FRONTEND }}

      - name: Notify Slack
        if: always()
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          fields: repo,message,commit,author,action,workflow
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```

---

## 4. Infrastructure (Terraform)

### 4.1 Main Configuration

```hcl
# infrastructure/main.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "planitai-kpi-terraform-state"
    key            = "infrastructure/terraform.tfstate"
    region         = "ap-northeast-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Project     = "PlanitAI-KPI"
      Environment = var.environment
      ManagedBy   = "Terraform"
    }
  }
}

# Variables
variable "aws_region" {
  default = "ap-northeast-1"
}

variable "environment" {
  description = "Environment (staging/production)"
}

variable "db_password" {
  description = "Database password"
  sensitive   = true
}
```

### 4.2 VPC

```hcl
# infrastructure/vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "planitai-kpi-${var.environment}"
  cidr = "10.0.0.0/16"

  azs             = ["${var.aws_region}a", "${var.aws_region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = var.environment == "staging"

  enable_dns_hostnames = true
  enable_dns_support   = true

  # VPC Flow Logs
  enable_flow_log                      = true
  create_flow_log_cloudwatch_log_group = true
  create_flow_log_cloudwatch_iam_role  = true
}
```

### 4.3 ECS Cluster

```hcl
# infrastructure/ecs.tf
resource "aws_ecs_cluster" "main" {
  name = "planitai-kpi-${var.environment}"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}
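# NOTE: FARGATE_SPOT (used in the capacity provider strategy below) runs
# tasks on spare AWS capacity at a steep discount, but tasks can be
# reclaimed with a two-minute warning. That trade-off is acceptable for
# staging, which is why the strategy picks it only outside production.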
resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name = aws_ecs_cluster.main.name

  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = var.environment == "production" ? "FARGATE" : "FARGATE_SPOT"
    weight            = 1
  }
}

# Backend task definition
resource "aws_ecs_task_definition" "backend" {
  family                   = "planitai-kpi-backend-${var.environment}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.environment == "production" ? 1024 : 512
  memory                   = var.environment == "production" ? 2048 : 1024
  execution_role_arn       = aws_iam_role.ecs_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([
    {
      name  = "backend"
      image = "${aws_ecr_repository.backend.repository_url}:latest"

      portMappings = [
        {
          containerPort = 8000
          hostPort      = 8000
          protocol      = "tcp"
        }
      ]

      environment = [
        { name = "ENVIRONMENT", value = var.environment },
        { name = "DATABASE_URL", value = "postgresql://${var.db_username}:${var.db_password}@${aws_db_instance.main.endpoint}/${var.db_name}" },
        { name = "REDIS_URL", value = "redis://${aws_elasticache_cluster.main.cache_nodes[0].address}:6379" },
      ]

      secrets = [
        { name = "JWT_SECRET", valueFrom = aws_secretsmanager_secret.jwt_secret.arn },
        { name = "GEMINI_API_KEY", valueFrom = aws_secretsmanager_secret.gemini_api_key.arn },
      ]

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.backend.name
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "backend"
        }
      }

      healthCheck = {
        command     = ["CMD-SHELL", "curl -f http://localhost:8000/health || exit 1"]
        interval    = 30
        timeout     = 5
        retries     = 3
        startPeriod = 60
      }
    }
  ])
}

# Backend service
resource "aws_ecs_service" "backend" {
  name            = "planitai-kpi-backend-${var.environment}"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.backend.arn
  desired_count   = var.environment == "production" ? 2 : 1
  capacity_provider_strategy {
    capacity_provider = var.environment == "production" ? "FARGATE" : "FARGATE_SPOT"
    weight            = 1
  }

  network_configuration {
    subnets          = module.vpc.private_subnets
    security_groups  = [aws_security_group.ecs_backend.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.backend.arn
    container_name   = "backend"
    container_port   = 8000
  }

  # Rolling deploys; the AWS provider takes these as top-level
  # arguments, not a `deployment_configuration` block
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 100

  depends_on = [aws_lb_listener.https]
}

# Auto scaling (production only)
resource "aws_appautoscaling_target" "backend" {
  count              = var.environment == "production" ? 1 : 0
  max_capacity       = 10
  min_capacity       = 2
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.backend.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "backend_cpu" {
  count              = var.environment == "production" ? 1 : 0
  name               = "backend-cpu-scaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.backend[0].resource_id
  scalable_dimension = aws_appautoscaling_target.backend[0].scalable_dimension
  service_namespace  = aws_appautoscaling_target.backend[0].service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value       = 70.0
    scale_in_cooldown  = 300
    scale_out_cooldown = 60
  }
}
```

### 4.4 RDS (PostgreSQL)

```hcl
# infrastructure/rds.tf
resource "aws_db_subnet_group" "main" {
  name       = "planitai-kpi-${var.environment}"
  subnet_ids = module.vpc.private_subnets
}

resource "aws_db_instance" "main" {
  identifier = "planitai-kpi-${var.environment}"

  engine         = "postgres"
  engine_version = "15.4"
  instance_class = var.environment == "production" ? "db.r6g.large" : "db.t4g.micro"

  allocated_storage     = 20
  max_allocated_storage = var.environment == "production" ? 100 : 50
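  # With max_allocated_storage above allocated_storage, RDS storage
  # autoscaling grows the volume on demand from 20 GB up to the cap,
  # instead of running out of disk.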
  storage_type      = "gp3"
  storage_encrypted = true

  db_name  = var.db_name
  username = var.db_username
  password = var.db_password

  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.rds.id]

  multi_az            = var.environment == "production"
  publicly_accessible = false
  deletion_protection = var.environment == "production"

  skip_final_snapshot       = var.environment != "production"
  final_snapshot_identifier = var.environment == "production" ? "planitai-kpi-final-snapshot" : null

  backup_retention_period = var.environment == "production" ? 7 : 1
  backup_window           = "03:00-04:00"
  maintenance_window      = "Mon:04:00-Mon:05:00"

  performance_insights_enabled = var.environment == "production"

  tags = {
    Name = "planitai-kpi-${var.environment}"
  }
}
```

### 4.5 ElastiCache (Redis)

```hcl
# infrastructure/elasticache.tf
resource "aws_elasticache_subnet_group" "main" {
  name       = "planitai-kpi-${var.environment}"
  subnet_ids = module.vpc.private_subnets
}

resource "aws_elasticache_cluster" "main" {
  cluster_id           = "planitai-kpi-${var.environment}"
  engine               = "redis"
  node_type            = var.environment == "production" ? "cache.r6g.large" : "cache.t4g.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis7"
  engine_version       = "7.0"
  port                 = 6379

  subnet_group_name  = aws_elasticache_subnet_group.main.name
  security_group_ids = [aws_security_group.redis.id]

  snapshot_retention_limit = var.environment == "production" ? 7 : 0

  tags = {
    Name = "planitai-kpi-${var.environment}"
  }
}
```

---
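The Dockerfile `HEALTHCHECK` and the ECS container health check both curl the backend's `/health` route, and after a deploy the same probe is handy as a CI smoke test. Below is a minimal sketch of a polling helper: the `/health` path matches the checks above, while the helper itself and its injected `probe`/`clock`/`sleep` parameters are hypothetical, written that way so the retry logic is testable without a network.

```python
import time
from typing import Callable


def wait_until_healthy(
    probe: Callable[[], bool],
    timeout: float = 60.0,
    interval: float = 2.0,
    clock: Callable[[], float] = time.monotonic,
    sleep: Callable[[float], None] = time.sleep,
) -> bool:
    """Poll `probe` until it returns True or `timeout` seconds elapse.

    `probe` would typically GET http://<alb-dns>/health and return True
    on a 200 response (hypothetical wiring, not shown here).
    """
    deadline = clock() + timeout
    while True:
        if probe():
            return True          # service answered healthy
        if clock() >= deadline:
            return False         # gave up: deployment looks unhealthy
        sleep(interval)
```

In the `Wait for deployment` step, this could run after `aws ecs wait services-stable` as a final end-to-end check through the load balancer.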
## 5. Monitoring and Logging

### 5.1 CloudWatch Dashboard

```hcl
# infrastructure/monitoring.tf
resource "aws_cloudwatch_dashboard" "main" {
  dashboard_name = "PlanitAI-KPI-${var.environment}"

  dashboard_body = jsonencode({
    widgets = [
      # ECS CPU/memory
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6
        properties = {
          title  = "ECS Backend - CPU & Memory"
          region = var.aws_region
          metrics = [
            ["AWS/ECS", "CPUUtilization", "ServiceName", aws_ecs_service.backend.name, "ClusterName", aws_ecs_cluster.main.name],
            [".", "MemoryUtilization", ".", ".", ".", "."]
          ]
          period = 60
          stat   = "Average"
        }
      },
      # API response time (p95)
      {
        type   = "metric"
        x      = 12
        y      = 0
        width  = 12
        height = 6
        properties = {
          title  = "API Response Time"
          region = var.aws_region
          metrics = [
            ["AWS/ApplicationELB", "TargetResponseTime", "TargetGroup", aws_lb_target_group.backend.arn_suffix, "LoadBalancer", aws_lb.main.arn_suffix]
          ]
          period = 60
          stat   = "p95"
        }
      },
      # RDS connections
      {
        type   = "metric"
        x      = 0
        y      = 6
        width  = 8
        height = 6
        properties = {
          title  = "RDS Database Connections"
          region = var.aws_region
          metrics = [
            ["AWS/RDS", "DatabaseConnections", "DBInstanceIdentifier", aws_db_instance.main.identifier]
          ]
          period = 60
          stat   = "Average"
        }
      },
      # Redis cache hit rate
      {
        type   = "metric"
        x      = 8
        y      = 6
        width  = 8
        height = 6
        properties = {
          title  = "Redis Cache Hit Rate"
          region = var.aws_region
          metrics = [
            ["AWS/ElastiCache", "CacheHitRate", "CacheClusterId", aws_elasticache_cluster.main.cluster_id]
          ]
          period = 60
          stat   = "Average"
        }
      },
      # 5xx error count
      {
        type   = "metric"
        x      = 16
        y      = 6
        width  = 8
        height = 6
        properties = {
          title  = "HTTP 5xx Errors"
          region = var.aws_region
          metrics = [
            ["AWS/ApplicationELB", "HTTPCode_Target_5XX_Count", "LoadBalancer", aws_lb.main.arn_suffix]
          ]
          period = 60
          stat   = "Sum"
        }
      }
    ]
  })
}

# CloudWatch alarms
resource "aws_cloudwatch_metric_alarm" "backend_cpu_high" {
  alarm_name          = "planitai-kpi-${var.environment}-backend-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 80
  alarm_description   = "Backend CPU utilization is too high"

  dimensions = {
    ClusterName = aws_ecs_cluster.main.name
    ServiceName = aws_ecs_service.backend.name
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
  ok_actions    = [aws_sns_topic.alerts.arn]
}

resource "aws_cloudwatch_metric_alarm" "api_5xx_errors" {
  alarm_name          = "planitai-kpi-${var.environment}-api-5xx-errors"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "HTTPCode_Target_5XX_Count"
  namespace           = "AWS/ApplicationELB"
  period              = 60
  statistic           = "Sum"
  threshold           = 10
  alarm_description   = "Too many 5xx errors"

  dimensions = {
    LoadBalancer = aws_lb.main.arn_suffix
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
}
```

---

## 6. Deployment Checklist

### 6.1 Pre-Deployment Checklist

```markdown
## Pre-Deployment Checklist

### Code
- [ ] All tests passing
- [ ] Code review complete
- [ ] Migration scripts prepared

### Infrastructure
- [ ] Environment variables verified
- [ ] Secrets updated where needed
- [ ] Scaling settings verified

### Security
- [ ] Security scan passing
- [ ] Dependency vulnerabilities checked

### Monitoring
- [ ] Logging configuration verified
- [ ] Alarms verified
- [ ] Dashboards verified

### Rollback plan
- [ ] Previous image tag identified
- [ ] Rollback procedure documented
- [ ] DB rollback scripts prepared
```

---

## 7. Wrap-up

### Deployment Strategy at a Glance

| Component | Technology |
|----------|----------|
| Containers | Docker + ECR |
| Orchestration | ECS Fargate |
| CI/CD | GitHub Actions |
| IaC | Terraform |
| Database | RDS PostgreSQL |
| Cache | ElastiCache Redis |
| CDN | CloudFront |
| Monitoring | CloudWatch |

### Coming Next

v13 covers the **MVP implementation plan**:

- Defining the Phase 1 scope
- Ordering the core feature work
- Managing technical debt

---

*PlanitAI KPI - AI that plans and analyzes your KPIs*