# Self Hosting Guide

Reducto can be self-hosted by customers on our enterprise tier. Our deployment includes a Helm chart that allows for easy customization and deployment. By default the chart runs on CPU.
## Prerequisites

- Kubernetes cluster
- Helm 3.x installed
- Access to the Reducto image on a container registry
- S3 bucket
- (Optional) External PostgreSQL database, if not using the included PostgreSQL chart
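Before configuring the chart, it can help to confirm these prerequisites from the command line. A minimal sketch; the bucket name is a placeholder, and the last step assumes the AWS CLI is installed with credentials configured:

```shell
# Confirm Helm 3.x is installed
helm version --short

# Confirm the Kubernetes cluster is reachable
kubectl cluster-info

# Confirm the S3 bucket exists and is accessible
aws s3 ls s3://[YOUR_S3_BUCKET_NAME]
```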
## Configuration

1. Clone the Reducto Helm chart repository.

2. Create a custom `values.yaml` file for your environment. You can use the following as a starting point:

   ```yaml
   image:
     repository: [YOUR_CONTAINER_REGISTRY]/reducto
     pullPolicy: Always
     tag: "latest" # Or specify a version
   env:
     NO_LOG: "1" # Turns off all external logging requests
     BUCKET: "[YOUR_S3_BUCKET_NAME]"
   postgres:
     enabled: true # Set to false if using external database
   keda:
     enabled: true # Enables autoscaling
   ingress:
     host: [YOUR_DOMAIN]
     tlsSecretName: [YOUR_TLS_SECRET_NAME]
   http:
     replicaCount: 2
     resources:
       requests:
         cpu: 4
         memory: 5Gi
   worker:
     scaling:
       minReplicaCount: 1
       maxReplicaCount: 4
     resources:
       requests:
         cpu: 4
         memory: 5Gi
   ```

3. Customize the values according to your environment and requirements.
## Deployment

1. Update the chart dependencies:

   ```shell
   helm dependency update
   ```

2. Install the Reducto chart:

   ```shell
   helm install reducto . -f your-custom-values.yaml
   ```

3. To upgrade an existing installation:

   ```shell
   helm upgrade reducto . -f your-custom-values.yaml
   ```
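After installing, you can confirm the release came up cleanly. A sketch, assuming the release name `reducto` used above and the `reducto-http` deployment name shown later in this guide:

```shell
# Check the release status
helm status reducto

# Wait for the HTTP deployment to become available
kubectl rollout status deployment/reducto-http

# Verify all pods are Running
kubectl get pods
```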
## Autoscaling
Reducto uses KEDA (Kubernetes Event-driven Autoscaling) for efficient scaling of worker pods. This allows the system to automatically adjust the number of worker pods based on the current workload.
Configuration
The autoscaling behavior is defined in templates/scaling-worker.yaml
. Key configurations include:
minReplicaCount
: Minimum number of worker replicas (default: 1)maxReplicaCount
: Maximum number of worker replicas (default: 4)pollingInterval
: How often KEDA checks the metrics (default: 5 seconds)cooldownPeriod
: Time to wait before scaling down (default: 300 seconds)
You can adjust these values in your custom values.yaml
:
```yaml
worker:
  scaling:
    minReplicaCount: 2
    maxReplicaCount: 10
```
### How it works

- KEDA polls the `/metrics` endpoint of the Reducto HTTP service every 5 seconds.
- It checks the `queue_length` value in the JSON response.
- If the queue length exceeds 2 (the `targetValue`), KEDA will scale up the number of worker pods.
- If the queue length decreases, KEDA will scale down the number of worker pods after the cooldown period.
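Putting these pieces together, the ScaledObject produced by `templates/scaling-worker.yaml` looks roughly like the following. This is an illustrative sketch rather than the chart's actual template: the trigger fields follow KEDA's `metrics-api` scaler, and the resource and service names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: reducto-worker           # placeholder name
spec:
  scaleTargetRef:
    name: reducto-worker         # the worker Deployment
  pollingInterval: 5             # check metrics every 5 seconds
  cooldownPeriod: 300            # wait 300s before scaling down
  minReplicaCount: 1
  maxReplicaCount: 4
  triggers:
    - type: metrics-api
      metadata:
        url: "http://reducto-http/metrics"  # placeholder service URL
        valueLocation: "queue_length"       # JSON field holding queue depth
        targetValue: "2"                    # scale up when queue_length exceeds 2
```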
This ensures that your Reducto installation can handle varying loads efficiently, scaling up during peak times and scaling down during quiet periods to optimize resource usage.
## Monitoring and Maintenance

1. Use `kubectl` to check the status of your pods:

   ```shell
   kubectl get pods
   ```

2. View logs for the HTTP service:

   ```shell
   kubectl logs deployment/reducto-http
   ```

3. View logs for a worker pod:

   ```shell
   kubectl logs deployment/reducto-worker
   ```

4. If you've enabled pgweb (for development/testing), you can port-forward to access the PostgreSQL web interface:

   ```shell
   kubectl port-forward deployment/reducto-pgweb 8081:8081
   ```
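Because KEDA drives scaling through a Horizontal Pod Autoscaler under the hood, you can also watch autoscaling activity directly. A sketch, assuming default resource names:

```shell
# Inspect the KEDA ScaledObject for the workers
kubectl get scaledobject

# KEDA creates an HPA per ScaledObject; watch replica counts change
kubectl get hpa -w

# See recent scaling events and current metric values
kubectl describe hpa
```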
## Troubleshooting

1. If pods are not starting, check events:

   ```shell
   kubectl get events --sort-by='.lastTimestamp'
   ```

2. For more detailed pod information:

   ```shell
   kubectl describe pod [POD_NAME]
   ```

3. Ensure your ingress is properly configured and your domain points to your Kubernetes cluster's ingress controller.

Remember to keep your Reducto installation updated with the latest version for bug fixes and new features. Regularly check for updates and use the `helm upgrade` command to apply them.