# Deployment Guide

## Production Deployment

### Prerequisites

- Docker 20.10+
- Docker Compose 2.0+
- Kubernetes 1.20+ (for K8s deployment)
- 4GB RAM minimum
- 2 CPU cores minimum

### Environment Setup

#### 1. Create Environment File

```bash
cp .env.example .env
```

Edit `.env` with your production values:

```env
# GitHub Configuration
GITHUB_WEBHOOK_SECRET=your-secure-webhook-secret
GITHUB_CLIENT_ID=your-github-oauth-client-id
GITHUB_CLIENT_SECRET=your-github-oauth-client-secret

# AI Configuration
AI_MODEL_NAME=gpt-4-code-review
AI_API_KEY=your-openai-api-key
AI_TEMPERATURE=0.3

# Performance
CACHE_TTL=3600
MAX_CACHE_SIZE=10000
WORKER_PROCESSES=4

# Security
CORS_ORIGINS=https://yourdomain.com
RATE_LIMIT=100/minute
JWT_SECRET=your-jwt-secret

# Monitoring
LOG_LEVEL=INFO
METRICS_ENABLED=true
HEALTH_CHECK_INTERVAL=30
```

#### 2. Docker Deployment

Create `docker-compose.yml`:

```yaml
version: '3.8'

services:
  # Main API Gateway
  main:
    build: .
    ports:
      - "8000:8000"
    environment:
      - SERVICE_NAME=main
      - PORT=8000
    env_file:
      - .env
    depends_on:
      - redis
      - postgres
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # GitHub Integration Service
  github:
    build: .
    command: python github_service.py
    ports:
      - "8001:8001"
    environment:
      - SERVICE_NAME=github
      - PORT=8001
    env_file:
      - .env
    depends_on:
      - redis
    restart: unless-stopped

  # Analysis Engine
  analysis:
    build: .
    command: python analysis_engine.py
    ports:
      - "8002:8002"
    environment:
      - SERVICE_NAME=analysis
      - PORT=8002
    env_file:
      - .env
    restart: unless-stopped

  # AI Core Service
  ai-core:
    build: .
    command: python ai_core.py
    ports:
      - "8003:8003"
    environment:
      - SERVICE_NAME=ai-core
      - PORT=8003
    env_file:
      - .env
    restart: unless-stopped

  # Review Generation
  review:
    build: .
    command: python review_service.py
    ports:
      - "8004:8004"
    environment:
      - SERVICE_NAME=review
      - PORT=8004
    env_file:
      - .env
    restart: unless-stopped

  # Dashboard Service
  dashboard:
    build: .
    command: python dashboard_service.py
    ports:
      - "8005:8005"
    environment:
      - SERVICE_NAME=dashboard
      - PORT=8005
    env_file:
      - .env
    restart: unless-stopped

  # Performance Monitoring
  performance:
    build: .
    command: python performance_service.py
    ports:
      - "8006:8006"
    environment:
      - SERVICE_NAME=performance
      - PORT=8006
    env_file:
      - .env
    restart: unless-stopped

  # Redis Cache
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    restart: unless-stopped
    command: redis-server --appendonly yes

  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=ai_code_review
      - POSTGRES_USER=ai_user
      - POSTGRES_PASSWORD=secure_password  # change this before deploying
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - main
    restart: unless-stopped

volumes:
  redis_data:
  postgres_data:
```

#### 3. Nginx Configuration

Create `nginx.conf`:

```nginx
events {
    worker_connections 1024;
}

http {
    # Note: round-robin across heterogeneous services only works if every
    # service mirrors the gateway's routes; otherwise point /api/ directly
    # at main:8000 and let the gateway fan out internally.
    upstream api_servers {
        server main:8000;
        server github:8001;
        server analysis:8002;
        server ai-core:8003;
        server review:8004;
        server dashboard:8005;
        server performance:8006;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        listen 80;
        server_name your-domain.com;

        # Redirect to HTTPS
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name your-domain.com;

        # SSL Configuration
        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        # Security Headers
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

        # API Routes
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://api_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Dashboard
        location / {
            proxy_pass http://dashboard:8005;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # Health Checks
        location /health {
            proxy_pass http://main:8000/health;
            access_log off;
        }
    }
}
```

#### 4. Deploy with Docker Compose

```bash
# Build and start services
docker-compose up -d --build

# Check service health
docker-compose ps

# View logs
docker-compose logs -f main

# Scale services if needed
docker-compose up -d --scale main=3 --scale analysis=2
```

### Kubernetes Deployment

#### 1. Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ai-code-review
```

#### 2. ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ai-config
  namespace: ai-code-review
data:
  GITHUB_WEBHOOK_SECRET: "your-secret"
  AI_MODEL_NAME: "gpt-4-code-review"
  CACHE_TTL: "3600"
```

#### 3. Secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ai-secrets
  namespace: ai-code-review
type: Opaque
data:
  # values under `data:` must be base64-encoded (e.g. echo -n 'value' | base64)
  github-client-secret:
  ai-api-key:
  jwt-secret:
```

#### 4. Main Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-main
  namespace: ai-code-review
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ai-main
  template:
    metadata:
      labels:
        app: ai-main
    spec:
      containers:
        - name: main
          image: ai-code-review-assistant:latest
          ports:
            - containerPort: 8000
          env:
            - name: PORT
              value: "8000"
          envFrom:
            - configMapRef:
                name: ai-config
            - secretRef:
                name: ai-secrets
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: ai-main-service
  namespace: ai-code-review
spec:
  selector:
    app: ai-main
  ports:
    - port: 80
      targetPort: 8000
  type: ClusterIP
```

#### 5. Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ai-ingress
  namespace: ai-code-review
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rate-limit: "100"
spec:
  tls:
    - hosts:
        - your-domain.com
      secretName: ai-tls
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: ai-main-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ai-dashboard-service
                port:
                  number: 80
```

#### 6. Deploy to Kubernetes

```bash
kubectl apply -f k8s/
```

### Cloud Platform Deployment

#### AWS ECS

```bash
# Create ECS cluster
aws ecs create-cluster --cluster-name ai-code-review

# Create task definition
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Create service
aws ecs create-service \
  --cluster ai-code-review \
  --service-name ai-main \
  --task-definition ai-code-review:1 \
  --desired-count 3
```

#### Google Cloud Run

```bash
# Build and deploy
gcloud builds submit --tag gcr.io/PROJECT-ID/ai-code-review

# Deploy to Cloud Run
gcloud run deploy ai-code-review \
  --image gcr.io/PROJECT-ID/ai-code-review \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

### Monitoring Setup

#### Prometheus Configuration

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'ai-code-review'
    static_configs:
      - targets: ['main:8000', 'github:8001', 'analysis:8002']
    metrics_path: /metrics
    scrape_interval: 30s
```

#### Grafana Dashboard

- Import the pre-built dashboard
- Monitor response times
- Track error rates
- Graph resource usage

### Backup Strategy

#### Database Backup

```bash
# PostgreSQL backup (no -t: a TTY would mangle the redirected dump)
kubectl exec postgres-pod -- pg_dump -U ai_user ai_code_review > backup.sql

# Redis backup
kubectl exec redis-pod -- redis-cli BGSAVE
```

#### Configuration Backup

```bash
# Backup ConfigMaps and Secrets
kubectl get configmaps -n ai-code-review -o yaml > configmaps-backup.yaml
kubectl get secrets -n ai-code-review -o yaml > secrets-backup.yaml
```

### Security Hardening

#### Network Policies

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ai-network-policy
  namespace: ai-code-review
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ai-code-review
  egress:
    - to: []
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
```

#### Pod Security Policy

Note: PodSecurityPolicy was removed in Kubernetes 1.25; on newer clusters, enforce the equivalent rules with Pod Security Admission instead.

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ai-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
```

### Troubleshooting

#### Common Issues

1. **Service not starting**: Check for port conflicts and missing environment variables
2. **GitHub webhooks failing**: Verify the webhook secret, signature, and URL configuration
3. **High memory usage**: Scale out the affected services and tune the cache limits
4. **Database connection errors**: Check network policies and credentials

#### Debug Commands

```bash
# Check pod status
kubectl get pods -n ai-code-review

# View service logs
kubectl logs -f deployment/ai-main -n ai-code-review

# Open a shell in a running pod (substitute the actual pod name)
kubectl exec -it <pod-name> -n ai-code-review -- /bin/bash

# Check resource usage
kubectl top pods -n ai-code-review
```

### Maintenance

#### Rolling Updates

```bash
# Update deployment
kubectl set image deployment/ai-main main=ai-code-review:v2 -n ai-code-review

# Check rollout status
kubectl rollout status deployment/ai-main -n ai-code-review
```

#### Scaling

```bash
# Scale up for high load
kubectl scale deployment ai-main --replicas=5 -n ai-code-review

# Auto-scaling
kubectl autoscale deployment ai-main --cpu-percent=70 --min=2 --max=10 -n ai-code-review
```

---

*For production deployment, ensure all security measures are in place and monitoring is properly configured.*
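As a debugging aid for failing GitHub webhooks, here is a minimal sketch (in Python, the language of the service scripts) of the HMAC-SHA256 check GitHub documents for the `X-Hub-Signature-256` header, using the secret configured as `GITHUB_WEBHOOK_SECRET`. The function name is illustrative, not part of this codebase.

```python
import hashlib
import hmac


def verify_github_signature(payload: bytes, signature_header: str, secret: str) -> bool:
    """Return True if the X-Hub-Signature-256 header matches the payload.

    GitHub sends the header as "sha256=<hex digest>", where the digest is
    HMAC-SHA256 of the raw request body keyed with the webhook secret.
    """
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest("sha256=" + expected, signature_header)
```

If this check fails for real deliveries, the usual causes are a secret mismatch between GitHub and `.env`, or a proxy layer re-encoding the request body before the signature is computed, so always hash the raw bytes.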