diff --git a/.trunk/trunk.yaml b/.trunk/trunk.yaml
index efe2e815..e7ff0b07 100644
--- a/.trunk/trunk.yaml
+++ b/.trunk/trunk.yaml
@@ -38,6 +38,8 @@ lint:
paths:
- styles
- agents/available-connections.mdx
+ - dgraph/self-managed/*.mdx
+ - dgraph/self-hosted.mdx
- linters: [checkov]
paths:
- docs.json
diff --git a/dgraph/self-hosted.mdx b/dgraph/self-hosted.mdx
new file mode 100644
index 00000000..9d456966
--- /dev/null
+++ b/dgraph/self-hosted.mdx
@@ -0,0 +1,797 @@
+---
+title: "Self Hosting Dgraph Guide"
+description:
+ "Complete guide for migrating managed Dgraph clusters to self-hosted
+ infrastructure"
+sidebarTitle: "Self Hosting Guide"
+---
+
+## Overview
+
+This guide walks you through migrating your Dgraph database from managed cloud
+services to a self-hosted environment. It provides step-by-step instructions for
+deploying on several cloud providers and with several methods, supporting goals
+like cost savings, increased control, and compliance.
+
+
+ This guide supplements the [Dgraph self-managed documentation](/dgraph/self-managed/overview).
+ Refer to it for additional configuration and operational details.
+
+
+## Deployment options
+
+When migrating to self-hosted Dgraph, your deployment choice depends on several
+key factors: data size, team expertise, budget constraints, and control
+requirements. Here's how these factors influence your deployment decision:
+
+**Data Size Considerations:**
+
+- **Under 100 GB**: Docker Compose or Linux are suitable options
+- **100 GB to 1 TB**: Kubernetes or Linux can handle the load
+- **Over 1 TB**: Kubernetes is required for proper scaling and management
+
+**Team Expertise Factors:**
+
+- **High Kubernetes Experience**: Kubernetes deployment is recommended
+- **Limited Kubernetes Experience**: Docker Compose or Linux are more
+ approachable
+
+**Budget Constraints:**
+
+- **Cost Optimized**: Linux provides the most economical option
+- **Balanced**: Docker Compose offers a good middle ground
+- **Enterprise**: Kubernetes provides enterprise-grade features
+
+**Control Requirements:**
+
+- **Maximum Control**: Linux gives you full control over the environment
+- **Managed Infrastructure**: Kubernetes automates much of the operational
+  work (scheduling, self-healing, rolling updates)
+
+**Available Deployment Methods:**
+
+- **Kubernetes**: Best for large-scale deployments, enterprise environments, and
+ teams with K8s expertise
+- **Docker Compose**: Ideal for development, testing, and smaller production
+ workloads
+- **Linux**: Perfect for cost-conscious deployments and teams wanting
+ maximum control
+
+---
+
+## Prerequisites
+
+Before starting your migration, ensure you have the necessary tools, access, and
+resources.
+
+### Required tools
+
+
+
+ - `kubectl` (v1.24+)
+ - `helm` (v3.8+)
+ - `dgraph` CLI tool
+ - `curl` or similar HTTP client
+ - Cloud provider CLI tools
+
+
+
+ - Docker (for local testing)
+ - Git (for configuration management)
+ - Text editor or IDE
+ - SSH client
+
+
+
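+If you want to confirm the core tools are available before you begin, the
+standard version commands for each are:
+
+```bash
+kubectl version --client
+helm version --short
+dgraph version
+curl --version | head -1
+```
+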
+### Access requirements
+
+
+
+ - **Dgraph Cloud**: Admin access to export data
+ - **Hypermode Graph**: Database access credentials
+ - **Network**: Ability to download large datasets
+
+
+
+ - **Cloud Provider**: Account with appropriate permissions
+ - **Kubernetes**: Cluster admin privileges (if using K8s)
+ - **SSL/TLS**: Certificate management capability
+ - **DNS**: Domain management for load balancers
+
+
+
+
+---
+
+## Data export from source systems
+
+The first step in migration is safely exporting your data from your current
+managed service. This section covers export procedures for both Dgraph Cloud and
+Hypermode Graphs.
+
+### Exporting from Dgraph Cloud
+
+Dgraph Cloud provides several methods for exporting your data, including admin
+API endpoints and the web interface.
+
+#### Method 1: Using the Web Interface
+
+
+
+ Log into your Dgraph Cloud dashboard and navigate to your cluster.
+ 
+
+
+
+ Click on the "Export" tab in your cluster management interface. 
+
+
+
+ Select your export format and destination.
+ Dgraph Cloud supports JSON or RDF.
+ 
+
+Click "Start Export" and monitor the progress. Large datasets may take several
+hours.
+
+ Click "Start Export" and monitor the progress. Large datasets may take several
+ hours.
+
+
+
+ Once complete, download your exported data files.
+ 
+
+
+
+
+#### Method 2: Using Admin API
+
+
+```bash Check Cluster Status
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ state { groups { id members { id addr leader lastUpdate } } } }"}'
+```
+
+```bash Export Schema
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "schema {}"}' > schema_backup.json
+```
+
+```bash Export Data (Small Datasets)
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ backup(destination: \"s3://your-bucket/backup\") { response { message code } } }"}'
+```
+
+```bash Export Data (Alternative Method)
+dgraph export --alpha=your-cluster.grpc.cloud.dgraph.io:443 \
+ --output=/path/to/export \
+ --format=json
+```
+
+
+
+#### Method 3: Bulk export for large datasets
+
+For datasets larger than 10 GB, use the bulk export feature:
+
+
+```bash Request Bulk Export
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{
+ "query": "mutation {
+ export(input: {
+ destination: \"s3://your-backup-bucket/$(date +%Y-%m-%d)\",
+ format: \"rdf\",
+ namespace: 0
+ }) {
+ response {
+ message
+ code
+ }
+ }
+ }"
+ }'
+```
+
+```bash Check Export Status
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ state { ongoing } }"}'
+```
+
+
+
+### Exporting from Hypermode Graphs
+
+
+ For larger datasets, please contact Hypermode Support to facilitate your graph
+ export.
+
+
+#### Using `admin` endpoint
+
+For smaller datasets, you can use the `admin` endpoint to export your graph.
+
+
+```bash
+curl --location 'https://.hypermode.host/dgraph/admin' \
+--header 'Content-Type: application/json' \
+--header 'Dg-Auth: ••••••' \
+--data '{"query":"mutation {\n export(input: { format: \"rdf\" }) {\n response {\n message\n code\n }\n }\n}","variables":{}}'
+```
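+
+To check whether the export has finished, the same `state` query shown earlier
+for Dgraph Cloud should work against the Hypermode `admin` endpoint (reuse the
+host and `Dg-Auth` header from the request above):
+
+```bash
+curl --location 'https://.hypermode.host/dgraph/admin' \
+--header 'Content-Type: application/json' \
+--header 'Dg-Auth: ••••••' \
+--data '{"query":"{ state { ongoing } }"}'
+```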
+
+### Export validation and preparation
+
+
+ Always validate your exported data before proceeding with the migration.
+
+
+#### Data integrity checks
+
+
+
+```bash Verify Export Completeness
+# Check file sizes and contents
+ls -lah exported_data/
+file exported_data/*
+
+# For RDF exports, count triples
+if [[ -f "exported_data/g01.rdf.gz" ]]; then
+  zcat exported_data/g01.rdf.gz | wc -l
+fi
+
+# For JSON exports, validate structure
+if [[ -f "exported_data/g01.json.gz" ]]; then
+  zcat exported_data/g01.json.gz | jq '.[] | keys' | head -10
+fi
+```
+
+```bash Schema Validation
+# Basic GraphQL syntax check (if you exported a GraphQL schema file)
+if [ -f "schema.graphql" ]; then
+  node -e "
+  const fs = require('fs');
+  const schema = fs.readFileSync('schema.graphql', 'utf8');
+  console.log('Schema length:', schema.length);
+  console.log('Types defined:', (schema.match(/type \w+/g) || []).length);
+  "
+fi
+
+# Check for required predicates
+grep -E "(uid|dgraph\.|type)" schema_backup.json || echo "Warning: System predicates missing"
+```
+
+```bash Calculate Dataset Metrics
+# Estimate import time and resources needed
+echo "=== Dataset Analysis ==="
+echo "Total files: $(find exported_data/ -name "*.gz" | wc -l)"
+echo "Total size: $(du -sh exported_data/ | cut -f1)"
+echo "Largest file: $(find exported_data/ -name "*.gz" -exec ls -lah {} \; | sort -k5 -hr | head -1)"
+
+# Estimate nodes and edges
+if [[ -f "exported_data/g01.rdf.gz" ]]; then
+ echo "Estimated triples: $(zcat exported_data/g01.rdf.gz | wc -l)"
+fi
+```
+
+
+
+#### Prepare for transfer
+
+
+
+ ```bash
+ # Create organized directory structure
+ mkdir -p migration_data/{data,schema,acl,scripts}
+
+ # Move files to appropriate directories
+ mv exported_data/*.rdf.gz migration_data/data/
+ mv schema* migration_data/schema/
+ mv acl* migration_data/acl/
+ ```
+
+
+
+ ```bash
+ # Generate checksums for integrity verification
+ cd migration_data
+ find . -type f -name "*.gz" -exec sha256sum {} \; > checksums.txt
+ find . -type f -name "*.json" -exec sha256sum {} \; >> checksums.txt
+ ```
+
+
+ ```bash
+ # Create final migration package
+ cd ..
+ tar -czf migration_package_$(date +%Y%m%d).tar.gz migration_data/
+
+ # Verify package
+ tar -tzf migration_package_*.tar.gz | head -10
+ ```
+
+
+
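+Transfer the package to the target environment and confirm it arrived intact.
+The host and paths below are examples; substitute your own:
+
+```bash
+# Copy the migration package to the target host
+scp migration_package_*.tar.gz user@10.0.1.10:/tmp/
+
+# On the target host, unpack and verify against the recorded checksums
+ssh user@10.0.1.10 \
+  'cd /tmp && tar -xzf migration_package_*.tar.gz && cd migration_data && sha256sum -c checksums.txt'
+```
+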
+---
+
+## Pre-migration planning
+
+Proper planning is crucial for a successful migration. This section helps you
+assess your current environment and plan the migration strategy.
+
+### 1. Assess current environment
+
+
+```bash Analyze Current Usage
+# For Dgraph Cloud
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ state { groups { id checksum tablets { predicate space } } } }"}'
+
+```
+
+```bash Check Performance Metrics
+# Query response times
+time curl -X POST https://your-cluster.grpc.cloud.dgraph.io/query \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ nodeCount(func: has(_predicate_)) { count(uid) } }"}'
+
+# Memory and storage usage
+curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ state { groups { id members { groupId addr } } } }"}'
+```
+
+
+
+### 2. Infrastructure sizing
+
+
+
+ - **Alpha Nodes**: 2-4 cores per 1M edges
+ - **Zero Nodes**: 1-2 cores (coordination only)
+ - **Load Balancer**: 2-4 cores
+ - **Monitoring**: 1-2 cores
+
+
+
+ - **Alpha Nodes**: 4-8 GB base + 1 GB per 10M edges
+ - **Zero Nodes**: 2-4 GB (metadata storage)
+ - **Load Balancer**: 2-4 GB
+ - **Monitoring**: 4-8 GB
+
+
+
+
+
+ - **Data Volume**: 3-5x compressed export size
+ - **WAL Logs**: 20-50 GB per node
+ - **Backup Space**: 2x data volume
+ - **Monitoring**: 50-100 GB
+
+
+
+ - **Internal**: 1 Gbps minimum between nodes
+ - **External**: 100 Mbps minimum for clients
+ - **Bandwidth**: Plan for 3x normal traffic during migration
+ - **Latency**: <10 ms between data nodes
+
+
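+As a worked example, a rough memory estimate can be derived from the exported
+triple count using the guideline above (4-8 GB base + 1 GB per 10M edges). This
+is only a sketch for initial planning, not a guarantee:
+
+```bash
+# Estimate Alpha memory from the RDF export (illustrative only)
+TRIPLES=$(zcat exported_data/g01.rdf.gz | wc -l)
+ALPHA_MEM_GB=$((8 + TRIPLES / 10000000))
+echo "Estimated triples: $TRIPLES"
+echo "Suggested memory per Alpha node: ~${ALPHA_MEM_GB} GB"
+```
+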
+---
+
+## Data Migration and Import
+
+```mermaid
+sequenceDiagram
+ participant Source as Source System
+ participant Backup as Backup Storage
+ participant Target as Self-Hosted Cluster
+ participant App as Application
+
+ Note over Source,App: Migration Process
+
+ Source->>Backup: 1. Export schema
+ Source->>Backup: 2. Export data
+ Source->>Backup: 3. Export ACL config
+
+ Note over Target: 4. Deploy cluster
+
+ Backup->>Target: 5. Import schema
+ Backup->>Target: 6. Import data
+ Backup->>Target: 7. Import ACL config
+
+ Target->>App: 8. Update connection
+ App->>Target: 9. Verify connectivity
+```
+
+### 1. Verify Cluster Status
+
+
+```bash Check Pods (Kubernetes)
+kubectl get pods -n dgraph
+```
+
+```bash Check Containers (Docker Compose)
+docker-compose ps
+```
+
+```bash Check Services (Linux)
+systemctl status dgraph-alpha dgraph-zero
+```
+
+
+
+
+```bash Port Forward and Health Check (Kubernetes)
+kubectl port-forward -n dgraph svc/dgraph-dgraph-alpha 8080:8080 &
+curl http://localhost:8080/health
+```
+
+```bash Health Check (Docker Compose)
+curl http://localhost:8080/health
+```
+
+```bash Health Check (Linux VPS)
+curl http://10.0.1.10:8080/health
+```
+
+
+
+### 2. Import Schema
+
+
+
+
+
+```bash
+kubectl port-forward -n dgraph svc/dgraph-dgraph-alpha 8080:8080 &
+curl -X POST localhost:8080/admin/schema \
+  -H "Content-Type: application/json" \
+  -d @schema_backup.json
+```
+
+
+
+
+
+```bash
+curl -X POST localhost:8080/admin/schema \
+  -H "Content-Type: application/json" \
+  -d @schema_backup.json
+```
+
+
+
+
+
+ ```bash
+
+ curl -X POST 10.0.1.10:8080/admin/schema \
+ -H "Content-Type: application/json" \
+ -d @schema_backup.json
+
+ ```
+
+
+
+
+### 3. Import Data
+
+
+ Refer to the [Dgraph bulk loader documentation](bulk-loader/) for efficiently
+ handling larger datasets.
+
+
+
+
+
+
+```bash
+kubectl run dgraph-live-loader \
+  --image=dgraph/dgraph:v23.1.0 \
+  --restart=Never \
+  --namespace=dgraph \
+  --command -- dgraph live \
+  --files /data/export.rdf.gz \
+  --alpha dgraph-dgraph-alpha:9080 \
+  --zero dgraph-dgraph-zero:5080
+```
+
+
+
+
+
+```bash
+# Copy data files to container
+docker cp exported_data.rdf.gz dgraph-alpha-1:/dgraph/
+
+# Run live loader
+docker exec dgraph-alpha-1 dgraph live \
+  --files /dgraph/exported_data.rdf.gz \
+  --alpha localhost:9080 \
+  --zero dgraph-zero-1:5080
+```
+
+
+
+
+
+```bash
+# Copy data to server
+scp exported_data.rdf.gz user@10.0.1.10:/tmp/
+
+# Run live loader
+ssh user@10.0.1.10 sudo -u dgraph dgraph live \
+  --files /tmp/exported_data.rdf.gz \
+  --alpha localhost:9080 \
+  --zero 10.0.1.10:5080
+```
+
+
+
+
+### 4. Restore ACL Configuration
+
+
+
+ ```bash
+ # Replace with your actual endpoint
+ DGRAPH_ENDPOINT="localhost:8080" # Adjust for your deployment
+
+ curl -X POST $DGRAPH_ENDPOINT/admin \
+   -H "Content-Type: application/json" \
+   -d '{"query": "mutation { addUser(input: {name: \"admin\", password: \"password\"}) { user { name } } }"}'
+ ```
+
+
+
+---
+
+## Post-Migration Verification
+
+
+- Count total nodes and compare with original
+- Verify specific data samples
+- Test query performance
+- Validate application connections
+
+
+### 1. Data Integrity Check
+
+
+```bash Count Nodes
+curl -X POST localhost:8080/query \
+-H "Content-Type: application/json" \
+-d '{"query": "{ nodeCount(func: has(_predicate_)) { count(uid) } }"}'
+```
+
+```bash Verify Sample Data
+curl -X POST localhost:8080/query \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ users(func: type(User), first: 10) { uid name email } }"}'
+```
+
+
+
+### 2. Performance Testing
+
+```bash
+time curl -X POST localhost:8080/query \
+ -H "Content-Type: application/json" \
+ -d '{"query": "{ users(func: allofterms(name, \"john\")) { name email } }"}'
+```
+
+
+---
+
+## Monitoring and Maintenance
+
+### 1. Setup Monitoring Stack
+
+```mermaid
+graph LR
+ A[Dgraph Metrics] --> B[Prometheus]
+ B --> C[Grafana Dashboard]
+ B --> D[AlertManager]
+ D --> E[Notifications]
+
+ subgraph "Monitoring Stack"
+ B
+ C
+ D
+ end
+```
+
+
+```bash Install Prometheus
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm install monitoring prometheus-community/kube-prometheus-stack \
+ --namespace monitoring \
+ --create-namespace
+```
+
+```yaml Configure Service Monitor
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: dgraph-alpha
+ namespace: dgraph
+spec:
+ selector:
+ matchLabels:
+ app: dgraph-alpha
+ endpoints:
+ - port: http
+ path: /debug/prometheus_metrics
+```
+
+
+
+### 2. Backup Strategy
+
+Set up automated daily backups to ensure data protection.
+
+```bash
+kubectl create cronjob dgraph-backup \
+ --image=dgraph/dgraph:v23.1.0 \
+ --schedule="0 2 * * *" \
+ --restart=OnFailure \
+ --namespace=dgraph \
+ -- dgraph export \
+ --alpha dgraph-dgraph-alpha:9080 \
+ --destination s3://your-backup-bucket/$(date +%Y-%m-%d)
+```
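+
+After creating the CronJob, it's worth confirming that it's scheduled and that
+runs complete successfully (standard kubectl commands):
+
+```bash
+# Confirm the CronJob exists and check its schedule
+kubectl get cronjob dgraph-backup -n dgraph
+
+# After the first scheduled run, inspect the job and its logs
+kubectl get jobs -n dgraph
+kubectl logs -n dgraph job/<job-name>
+```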
+
+---
+
+## Troubleshooting
+
+
+
+
+ ```bash Check Pod Status
+ kubectl describe pod -n dgraph dgraph-dgraph-alpha-0
+ ```
+
+ ```bash Check Logs
+ kubectl logs -n dgraph dgraph-dgraph-alpha-0 --previous
+ ```
+
+
+
+
+
+ ```bash Check PVC Status
+ kubectl get pvc -n dgraph
+ ```
+
+ ```bash Expand Volume
+ kubectl patch pvc alpha-claim-dgraph-dgraph-alpha-0 \
+ -n dgraph \
+ -p '{"spec":{"resources":{"requests":{"storage":"1Ti"}}}}'
+ ```
+
+
+
+
+ ```bash Test Internal DNS
+ kubectl run debug --image=busybox -it --rm --restart=Never -- \
+ nslookup dgraph-dgraph-alpha.dgraph.svc.cluster.local
+ ```
+
+
+
+---
+
+## Additional Resources
+
+### Dgraph Operational Runbooks
+
+
+ The following runbooks provide operational guidance for various scenarios you
+ may encounter during and after migration.
+
+
+
+
+ Enable or disable high-availability (HA) for Dgraph clusters, scale replicas, and manage cluster topology.
+
+
+
+ Freshly rebuild Zero nodes to avoid idx issues and restore cluster stability.
+
+
+
+ Rebuild high-availability clusters using existing `p` (postings) directories
+ while preserving data.
+
+
+
+ Remove and re-add problematic Alpha nodes to RAFT groups for cluster health.
+
+
+
+ Convert sharded clusters back to non-sharded configuration safely.
+
+
+
+### Migration Validation Checklist
+
+
+
+Use this checklist to ensure your migration was successful:
+
+**Data Integrity**
+
+- [ ] Total node count matches source
+- [ ] Random data samples verified
+- [ ] Schema imported correctly
+- [ ] Indexes functioning properly
+
+**Performance**
+
+- [ ] Query response times acceptable
+- [ ] Throughput meets requirements
+- [ ] Resource utilization within limits
+- [ ] No memory leaks detected
+
+**Operations**
+
+- [ ] Monitoring and alerting active
+- [ ] Backup procedures tested
+- [ ] Scaling mechanisms verified
+- [ ] Security policies enforced
+
+**Application Integration**
+
+- [ ] All clients connecting successfully
+- [ ] Authentication working
+- [ ] API endpoints responding
+- [ ] Load balancing functional
+
+
+
+
+ This migration guide is a living document. Please contribute improvements,
+ report issues, or share your experiences to help the community. For additional
+ support, join the Dgraph community or consult the operational runbooks in the
+ Hypermode ops-runbooks repository.
+
diff --git a/dgraph/self-managed/aws.mdx b/dgraph/self-managed/aws.mdx
new file mode 100644
index 00000000..274ab2ed
--- /dev/null
+++ b/dgraph/self-managed/aws.mdx
@@ -0,0 +1,140 @@
+---
+title: "AWS Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster on Amazon Web Services using Elastic
+ Kubernetes Service (EKS)"
+unlisted: true
+unindexed: true
+---
+
+## AWS Deployment
+
+Deploy your self-hosted Dgraph cluster on Amazon Web Services using Elastic
+Kubernetes Service (EKS).
+
+```mermaid
+graph TB
+ subgraph "AWS Architecture"
+ A[Application Load Balancer] --> B[EKS Cluster]
+ B --> C[Dgraph Alpha Pods]
+ B --> D[Dgraph Zero Pods]
+ C --> E[EBS Volumes]
+ D --> F[EBS Volumes]
+
+ subgraph "EKS Cluster"
+ C
+ D
+ G[Monitoring]
+ H[Ingress Controller]
+ end
+
+ I[S3 Backup] --> C
+ J[CloudWatch] --> G
+ end
+```
+
+### 1. Infrastructure Setup
+
+#### EKS Cluster Creation
+
+
+```bash Create EKS Cluster
+aws eks create-cluster \
+ --name dgraph-cluster \
+ --version 1.28 \
+ --role-arn arn:aws:iam::ACCOUNT:role/eks-service-role \
+ --resources-vpc-config subnetIds=subnet-12345,securityGroupIds=sg-12345
+```
+
+```bash Update Kubeconfig
+aws eks update-kubeconfig --region us-west-2 --name dgraph-cluster
+```
+
+```bash Create Node Group
+aws eks create-nodegroup \
+ --cluster-name dgraph-cluster \
+ --nodegroup-name dgraph-nodes \
+ --instance-types t3.xlarge \
+ --ami-type AL2_x86_64 \
+ --capacity-type ON_DEMAND \
+ --scaling-config minSize=3,maxSize=9,desiredSize=6 \
+ --disk-size 100 \
+ --node-role arn:aws:iam::ACCOUNT:role/NodeInstanceRole
+```
+
+
+
+#### Storage Class Configuration
+
+```yaml aws-storage-class.yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: dgraph-storage
+provisioner: ebs.csi.aws.com
+parameters:
+ type: gp3
+ iops: "3000"
+ throughput: "125"
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+```
+
+### 2. Dgraph Deployment on AWS
+
+
+
+ ```bash
+ kubectl apply -f aws-storage-class.yaml
+ ```
+
+
+
+ ```bash
+ helm repo add dgraph https://charts.dgraph.io
+ helm repo update
+ ```
+
+
+
+ ```bash
+ kubectl create namespace dgraph
+ ```
+
+
+
+ ```bash
+ helm install dgraph dgraph/dgraph \
+ --namespace dgraph \
+ --set image.tag="v23.1.0" \
+ --set alpha.persistence.storageClass="dgraph-storage" \
+ --set alpha.persistence.size="500Gi" \
+ --set zero.persistence.storageClass="dgraph-storage" \
+ --set zero.persistence.size="100Gi" \
+ --set alpha.replicaCount=3 \
+ --set zero.replicaCount=3 \
+ --set alpha.resources.requests.memory="8Gi" \
+ --set alpha.resources.requests.cpu="2000m"
+ ```
+
+
+
+### 3. Load Balancer Configuration
+
+```yaml aws-ingress.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: dgraph-ingress
+ namespace: dgraph
+ annotations:
+ kubernetes.io/ingress.class: alb
+ alb.ingress.kubernetes.io/scheme: internet-facing
+ alb.ingress.kubernetes.io/target-type: ip
+ alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT:certificate/CERT-ID
+spec:
+ rules:
+ - host: dgraph.yourdomain.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: dgraph-dgraph-alpha
+ port:
+ number: 8080
+```
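+
+Once the manifest is saved, applying it and checking that the load balancer is
+provisioned looks like this (point your DNS record at the address it reports):
+
+```bash
+kubectl apply -f aws-ingress.yaml
+
+# Wait for the ALB address to appear, then update DNS for dgraph.yourdomain.com
+kubectl get ingress dgraph-ingress -n dgraph
+```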
diff --git a/dgraph/self-managed/azure.mdx b/dgraph/self-managed/azure.mdx
new file mode 100644
index 00000000..4275522d
--- /dev/null
+++ b/dgraph/self-managed/azure.mdx
@@ -0,0 +1,88 @@
+---
+title: "Azure Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster on Microsoft Azure using Azure
+ Kubernetes Service (AKS)"
+unlisted: true
+unindexed: true
+---
+
+## Azure Deployment
+
+Deploy your self-hosted Dgraph cluster on Microsoft Azure using Azure Kubernetes
+Service (AKS).
+
+```mermaid
+graph TB
+ subgraph "Azure Architecture"
+ A[Application Gateway] --> B[AKS Cluster]
+ B --> C[Dgraph Alpha Pods]
+ B --> D[Dgraph Zero Pods]
+ C --> E[Azure Disks]
+ D --> F[Azure Disks]
+
+ subgraph "AKS Cluster"
+ C
+ D
+ G[Azure Monitor]
+ H[Ingress Controller]
+ end
+
+ I[Azure Storage] --> C
+ J[Azure Monitor] --> G
+ end
+```
+
+### 1. AKS Cluster Creation
+
+
+```bash Create Resource Group
+az group create --name dgraph-rg --location eastus
+```
+
+```bash Create AKS Cluster
+az aks create \
+ --resource-group dgraph-rg \
+ --name dgraph-cluster \
+ --node-count 3 \
+ --node-vm-size Standard_D4s_v3 \
+ --node-osdisk-size 100 \
+ --enable-addons monitoring \
+ --generate-ssh-keys
+```
+
+```bash Get Credentials
+az aks get-credentials --resource-group dgraph-rg --name dgraph-cluster
+```
+
+```bash Create Storage Class
+# Apply the StorageClass manifest (an example dgraph-storage.yaml is shown below)
+kubectl apply -f dgraph-storage.yaml
+```
+
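+The StorageClass manifest isn't included in this page. A minimal example for
+Azure managed disks is shown below; the `dgraph-storage.yaml` file name and the
+disk parameters are assumptions to adjust for your environment:
+
+```yaml dgraph-storage.yaml
+# Illustrative example; tune the disk SKU for your workload
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: dgraph-storage
+provisioner: disk.csi.azure.com
+parameters:
+  skuName: Premium_LRS
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+```
+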
+### 2. Deploy Dgraph on AKS
+
+```bash
+# Create namespace
+kubectl create namespace dgraph
+
+# Add the Dgraph Helm repository (skip if already added)
+helm repo add dgraph https://charts.dgraph.io
+helm repo update
+
+# Deploy with Helm
+helm install dgraph dgraph/dgraph \
+ --namespace dgraph \
+ --set alpha.persistence.storageClass="dgraph-storage" \
+ --set zero.persistence.storageClass="dgraph-storage" \
+ --set alpha.persistence.size="500Gi" \
+ --set zero.persistence.size="100Gi"
+```
diff --git a/dgraph/self-managed/digital-ocean.mdx b/dgraph/self-managed/digital-ocean.mdx
new file mode 100644
index 00000000..4cdf2d75
--- /dev/null
+++ b/dgraph/self-managed/digital-ocean.mdx
@@ -0,0 +1,74 @@
+---
+title: "Digital Ocean Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster on Digital Ocean using Digital Ocean
+ Kubernetes Service (DOKS)"
+unlisted: true
+unindexed: true
+---
+
+## Digital Ocean Deployment
+
+### Kubernetes Deployment (DOKS)
+
+```mermaid
+graph TB
+ subgraph "Digital Ocean Kubernetes Architecture"
+ A[Load Balancer] --> B[DOKS Cluster]
+ B --> C[Dgraph Alpha Pods]
+ B --> D[Dgraph Zero Pods]
+ C --> E[Block Storage]
+ D --> F[Block Storage]
+
+ subgraph "DOKS Cluster"
+ C
+ D
+ G[DO Monitoring]
+ end
+
+ I[Spaces Storage] --> C
+ end
+```
+
+#### 1. DOKS Cluster Setup
+
+
+```bash Create Cluster
+doctl kubernetes cluster create dgraph-cluster \
+ --region nyc1 \
+ --version 1.28.2-do.0 \
+ --node-pool="name=worker-pool;size=s-4vcpu-8gb;count=3;auto-scale=true;min-nodes=3;max-nodes=9"
+```
+
+```bash Get Kubeconfig
+doctl kubernetes cluster kubeconfig save dgraph-cluster
+```
+
+```bash Apply Storage Class
+# Apply the StorageClass manifest (an example dgraph-storage.yaml is shown below)
+kubectl apply -f dgraph-storage.yaml
+```
+
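+The StorageClass manifest isn't included in this page. A minimal example for
+DigitalOcean block storage is shown below; the `dgraph-storage.yaml` file name
+is an assumption, and you should adjust the settings for your environment:
+
+```yaml dgraph-storage.yaml
+# Illustrative example; adjust for your workload
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: dgraph-storage
+provisioner: dobs.csi.digitalocean.com
+allowVolumeExpansion: true
+```
+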
+#### 2. Deploy Dgraph on DOKS
+
+```bash
+# Create namespace
+kubectl create namespace dgraph
+
+# Add the Dgraph Helm repository (skip if already added)
+helm repo add dgraph https://charts.dgraph.io
+helm repo update
+
+# Deploy with Helm
+helm install dgraph dgraph/dgraph \
+ --namespace dgraph \
+ --set alpha.persistence.storageClass="dgraph-storage" \
+ --set zero.persistence.storageClass="dgraph-storage" \
+ --set alpha.persistence.size="500Gi" \
+ --set zero.persistence.size="100Gi"
+```
diff --git a/dgraph/self-managed/docker-compose.mdx b/dgraph/self-managed/docker-compose.mdx
new file mode 100644
index 00000000..a3cae071
--- /dev/null
+++ b/dgraph/self-managed/docker-compose.mdx
@@ -0,0 +1,329 @@
+---
+title: "Docker Compose Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster using Docker Compose for development
+ and testing environments"
+unlisted: true
+unindexed: true
+---
+
+### Docker Compose Deployment
+
+```mermaid
+graph TB
+ subgraph "Docker Compose Architecture"
+ A[Nginx Load Balancer] --> B[Dgraph Alpha 1]
+ A --> C[Dgraph Alpha 2]
+ A --> D[Dgraph Alpha 3]
+
+ B --> E[Dgraph Zero 1]
+ C --> F[Dgraph Zero 2]
+ D --> G[Dgraph Zero 3]
+
+ subgraph "Shared Storage"
+ H[Docker Volumes]
+ I[Host Directories]
+ end
+
+ B --> H
+ C --> H
+ D --> H
+ E --> I
+ F --> I
+ G --> I
+
+ J[Backup Scripts] --> H
+ K[Monitoring] --> B
+ K --> C
+ K --> D
+ end
+```
+
+#### 1. Prepare Docker Compose Environment
+
+
+
+ ```bash
+ mkdir -p dgraph-compose/{data,config,backups,nginx}
+ cd dgraph-compose
+ ```
+
+
+
+ ```yaml docker-compose.yml
+ version: '3.8'
+
+ services:
+ # Dgraph Zero nodes
+ dgraph-zero-1:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-zero-1
+ ports:
+ - "5080:5080"
+ - "6080:6080"
+ volumes:
+ - ./data/zero1:/dgraph
+ command: dgraph zero --my=dgraph-zero-1:5080 --replicas=3 --idx=1
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+
+ dgraph-zero-2:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-zero-2
+ ports:
+ - "5081:5080"
+ - "6081:6080"
+ volumes:
+ - ./data/zero2:/dgraph
+ command: dgraph zero --my=dgraph-zero-2:5080 --replicas=3 --peer=dgraph-zero-1:5080 --idx=2
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-zero-1
+
+ dgraph-zero-3:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-zero-3
+ ports:
+ - "5082:5080"
+ - "6082:6080"
+ volumes:
+ - ./data/zero3:/dgraph
+ command: dgraph zero --my=dgraph-zero-3:5080 --replicas=3 --peer=dgraph-zero-1:5080 --idx=3
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-zero-1
+
+ # Dgraph Alpha nodes
+ dgraph-alpha-1:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-alpha-1
+ ports:
+ - "8080:8080"
+ - "9080:9080"
+ volumes:
+ - ./data/alpha1:/dgraph
+ command: dgraph alpha --my=dgraph-alpha-1:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-zero-1
+ - dgraph-zero-2
+ - dgraph-zero-3
+
+ dgraph-alpha-2:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-alpha-2
+ ports:
+ - "8081:8080"
+ - "9081:9080"
+ volumes:
+ - ./data/alpha2:/dgraph
+ command: dgraph alpha --my=dgraph-alpha-2:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-zero-1
+ - dgraph-zero-2
+ - dgraph-zero-3
+
+ dgraph-alpha-3:
+ image: dgraph/dgraph:v23.1.0
+ container_name: dgraph-alpha-3
+ ports:
+ - "8082:8080"
+ - "9082:9080"
+ volumes:
+ - ./data/alpha3:/dgraph
+ command: dgraph alpha --my=dgraph-alpha-3:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-zero-1
+ - dgraph-zero-2
+ - dgraph-zero-3
+
+ # Load Balancer
+ nginx:
+ image: nginx:alpine
+ container_name: dgraph-nginx
+ ports:
+ - "80:80"
+ - "443:443"
+ volumes:
+ - ./nginx/nginx.conf:/etc/nginx/nginx.conf
+ - ./nginx/ssl:/etc/nginx/ssl
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+ depends_on:
+ - dgraph-alpha-1
+ - dgraph-alpha-2
+ - dgraph-alpha-3
+
+ # Monitoring
+ prometheus:
+ image: prom/prometheus:latest
+ container_name: dgraph-prometheus
+ ports:
+ - "9090:9090"
+ volumes:
+ - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+
+ grafana:
+ image: grafana/grafana:latest
+ container_name: dgraph-grafana
+ ports:
+ - "3000:3000"
+ volumes:
+ - ./data/grafana:/var/lib/grafana
+ environment:
+ - GF_SECURITY_ADMIN_PASSWORD=admin
+ restart: unless-stopped
+ networks:
+ - dgraph-network
+
+ networks:
+ dgraph-network:
+ driver: bridge
+
+ volumes:
+ dgraph-data:
+ ```
+
+
+
+ ```nginx nginx/nginx.conf
+ events {
+ worker_connections 1024;
+ }
+
+ http {
+ upstream dgraph_alpha {
+ least_conn;
+ server dgraph-alpha-1:8080;
+ server dgraph-alpha-2:8080;
+ server dgraph-alpha-3:8080;
+ }
+
+ upstream dgraph_grpc {
+ least_conn;
+ server dgraph-alpha-1:9080;
+ server dgraph-alpha-2:9080;
+ server dgraph-alpha-3:9080;
+ }
+
+ server {
+ listen 80;
+ server_name localhost;
+
+ location / {
+ proxy_pass http://dgraph_alpha;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ }
+ }
+
+ # HTTPS configuration (uncomment and configure as needed)
+ # server {
+ # listen 443 ssl http2;
+ # server_name your-domain.com;
+ #
+ # ssl_certificate /etc/nginx/ssl/cert.pem;
+ # ssl_certificate_key /etc/nginx/ssl/key.pem;
+ #
+ # location / {
+ # proxy_pass http://dgraph_alpha;
+ # proxy_set_header Host $host;
+ # proxy_set_header X-Real-IP $remote_addr;
+ # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ # proxy_set_header X-Forwarded-Proto $scheme;
+ # }
+ # }
+ }
+ ```
+
+
+
+ ```yaml config/prometheus.yml
+ global:
+ scrape_interval: 15s
+
+ scrape_configs:
+ - job_name: 'dgraph-alpha'
+ static_configs:
+ - targets:
+ - 'dgraph-alpha-1:8080'
+ - 'dgraph-alpha-2:8080'
+ - 'dgraph-alpha-3:8080'
+ metrics_path: '/debug/prometheus_metrics'
+
+ - job_name: 'dgraph-zero'
+ static_configs:
+ - targets:
+ - 'dgraph-zero-1:6080'
+ - 'dgraph-zero-2:6080'
+ - 'dgraph-zero-3:6080'
+ metrics_path: '/debug/prometheus_metrics'
+ ```
+
+
+
+#### 2. Deploy and Manage Docker Compose Cluster
+
+
+```bash Start Cluster
+# Start the entire cluster
+docker-compose up -d
+
+# Check status
+docker-compose ps
+
+# View logs
+docker-compose logs -f dgraph-alpha-1
+```
+
+```bash Scale Cluster
+# Add more Alpha replicas (this only works if the compose file defines a single
+# scalable "dgraph-alpha" service without a fixed container_name)
+docker-compose up -d --scale dgraph-alpha=5
+
+# Remove specific services
+docker-compose stop dgraph-alpha-3
+docker-compose rm dgraph-alpha-3
+```
+
+```bash Backup and Restore
+# Create backup script
+cat > backup.sh << 'EOF'
+#!/bin/bash
+DATE=$(date +%Y%m%d_%H%M%S)
+BACKUP_DIR="./backups/$DATE"
+mkdir -p $BACKUP_DIR
+
+# Export data
+docker exec dgraph-alpha-1 dgraph export --alpha localhost:9080 --destination /dgraph/export
+
+# Copy to backup directory
+docker cp dgraph-alpha-1:/dgraph/export $BACKUP_DIR/
+echo "Backup completed: $BACKUP_DIR"
+EOF
+
+chmod +x backup.sh
+./backup.sh
+```
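+
+The backup script above only exports data; restoring from one of those exports
+can be done with the live loader, as sketched below (the date-stamped directory
+is a placeholder for an actual backup folder):
+
+```bash
+# Copy a previous export back into an Alpha container
+docker cp ./backups/<DATE>/export dgraph-alpha-1:/dgraph/restore
+
+# Load it with the live loader
+docker exec dgraph-alpha-1 dgraph live \
+  --files /dgraph/restore \
+  --alpha localhost:9080 \
+  --zero dgraph-zero-1:5080
+```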
+
+
diff --git a/dgraph/self-managed/gcp.mdx b/dgraph/self-managed/gcp.mdx
new file mode 100644
index 00000000..9a9afe5d
--- /dev/null
+++ b/dgraph/self-managed/gcp.mdx
@@ -0,0 +1,114 @@
+---
+title: "Google Cloud Platform Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster on Google Cloud Platform using Google
+ Kubernetes Engine (GKE)"
+unlisted: true
+unindexed: true
+---
+
+## Google Cloud Platform Deployment
+
+Deploy your self-hosted Dgraph cluster on Google Cloud Platform using Google
+Kubernetes Engine (GKE).
+
+```mermaid
+graph TB
+ subgraph "GCP Architecture"
+ A[Cloud Load Balancer] --> B[GKE Cluster]
+ B --> C[Dgraph Alpha Pods]
+ B --> D[Dgraph Zero Pods]
+ C --> E[Persistent Disks]
+ D --> F[Persistent Disks]
+
+ subgraph "GKE Cluster"
+ C
+ D
+ G[GKE Monitoring]
+ H[Ingress]
+ end
+
+ I[Cloud Storage] --> C
+ J[Cloud Monitoring] --> G
+ end
+```
+
+### 1. GKE Cluster Setup
+
+
+```bash Create GKE Cluster
+gcloud container clusters create dgraph-cluster \
+ --zone=us-central1-a \
+ --machine-type=e2-standard-4 \
+ --num-nodes=3 \
+ --disk-size=100GB \
+ --disk-type=pd-ssd \
+ --enable-autoscaling \
+ --min-nodes=3 \
+ --max-nodes=9 \
+ --enable-autorepair \
+ --enable-autoupgrade
+```
+
+```bash Get Credentials
+gcloud container clusters get-credentials dgraph-cluster --zone=us-central1-a
+```
+
+```bash Create Storage Class
+# Apply the StorageClass manifest (an example dgraph-storage.yaml is shown below)
+kubectl apply -f dgraph-storage.yaml
+```
+
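+The StorageClass manifest isn't included in this page. A minimal example using
+the GCE Persistent Disk CSI driver is shown below; the `dgraph-storage.yaml`
+file name and disk type are assumptions to adjust for your environment:
+
+```yaml dgraph-storage.yaml
+# Illustrative example; adjust the disk type for your workload
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: dgraph-storage
+provisioner: pd.csi.storage.gke.io
+parameters:
+  type: pd-ssd
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+```
+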
+### 2. Deploy Dgraph on GKE
+
+```bash
+# Create namespace
+kubectl create namespace dgraph
+
+# Add the Dgraph Helm repository (skip if already added)
+helm repo add dgraph https://charts.dgraph.io
+helm repo update
+
+# Deploy with Helm
+helm install dgraph dgraph/dgraph \
+ --namespace dgraph \
+ --set alpha.persistence.storageClass="dgraph-storage" \
+ --set zero.persistence.storageClass="dgraph-storage" \
+ --set alpha.persistence.size="500Gi" \
+ --set zero.persistence.size="100Gi" \
+ --set alpha.replicaCount=3 \
+ --set zero.replicaCount=3
+```
+
+### 3. Load Balancer Setup
+
+```yaml gcp-ingress.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: dgraph-ingress
+ namespace: dgraph
+ annotations:
+ kubernetes.io/ingress.global-static-ip-name: dgraph-ip
+ networking.gke.io/managed-certificates: dgraph-ssl-cert
+spec:
+ rules:
+ - host: dgraph.yourdomain.com
+ http:
+ paths:
+ - path: /*
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: dgraph-dgraph-alpha
+ port:
+ number: 8080
+```
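+
+The `networking.gke.io/managed-certificates` annotation above assumes a
+`ManagedCertificate` resource named `dgraph-ssl-cert` exists. A minimal sketch
+(with a placeholder domain) looks like this:
+
+```yaml dgraph-ssl-cert.yaml
+# Sketch only; replace the domain with the one you control
+apiVersion: networking.gke.io/v1
+kind: ManagedCertificate
+metadata:
+  name: dgraph-ssl-cert
+  namespace: dgraph
+spec:
+  domains:
+    - dgraph.yourdomain.com
+```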
diff --git a/dgraph/self-managed/linux.mdx b/dgraph/self-managed/linux.mdx
new file mode 100644
index 00000000..716270ac
--- /dev/null
+++ b/dgraph/self-managed/linux.mdx
@@ -0,0 +1,265 @@
+---
+title: "Linux Deployment"
+description:
+ "Deploy your self-hosted Dgraph cluster on Linux Virtual Private Servers (VPS)
+ using systemd services"
+unlisted: true
+unindexed: true
+---
+
+## Linux deployment
+
+```mermaid
+graph TB
+ subgraph "Linux VPS Architecture"
+ A[Load Balancer VPS] --> B[Dgraph Node 1]
+ A --> C[Dgraph Node 2]
+ A --> D[Dgraph Node 3]
+
+ subgraph "Node 1 (10.0.1.10)"
+ B1[Dgraph Alpha]
+ B2[Dgraph Zero]
+ B3[Local Storage]
+ end
+
+ subgraph "Node 2 (10.0.1.11)"
+ C1[Dgraph Alpha]
+ C2[Dgraph Zero]
+ C3[Local Storage]
+ end
+
+ subgraph "Node 3 (10.0.1.12)"
+ D1[Dgraph Alpha]
+ D2[Dgraph Zero]
+ D3[Local Storage]
+ end
+
+ B1 --> B2
+ C1 --> C2
+ D1 --> D2
+
+ B2 -.->|Raft| C2
+ C2 -.->|Raft| D2
+ D2 -.->|Raft| B2
+
+ E[Backup Server] --> B3
+ E --> C3
+ E --> D3
+
+ F[Monitoring Server] --> B1
+ F --> C1
+ F --> D1
+ end
+```
+
+#### 1. VPS Infrastructure Setup
+
+
+
+ Create 3-5 VPS instances with the following specifications:
+
+ - **CPU**: 4-8 cores
+ - **RAM**: 16-32 GB
+ - **Storage**: 500 GB+ SSD
+ - **OS**: Ubuntu 22.04 LTS
+ - **Network**: Private networking enabled
+
+
+
+ ```bash
+ # Update system (run on all nodes)
+ sudo apt update && sudo apt upgrade -y
+
+ # Install required packages
+ sudo apt install -y curl wget unzip htop iotop
+
+ # Configure firewall
+ sudo ufw allow ssh
+ sudo ufw allow 8080 # Dgraph Alpha HTTP
+ sudo ufw allow 9080 # Dgraph Alpha gRPC
+ sudo ufw allow 5080 # Dgraph Zero
+ sudo ufw allow 6080 # Dgraph Zero HTTP
+ sudo ufw enable
+
+ # Set up swap (if needed)
+ sudo fallocate -l 4G /swapfile
+ sudo chmod 600 /swapfile
+ sudo mkswap /swapfile
+ sudo swapon /swapfile
+ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
+ ```
+
+
+
+ ```bash
+ # Download and install Dgraph (run on all nodes)
+ curl -sSf https://get.dgraph.io | bash
+
+ # Move to system path
+ sudo mv dgraph /usr/local/bin/
+
+ # Create dgraph user
+ sudo useradd -r -s /bin/false dgraph
+
+ # Create directories
+ sudo mkdir -p /opt/dgraph/{data,logs}
+ sudo chown -R dgraph:dgraph /opt/dgraph
+ ```
+
+
+
+#### 2. Configure Dgraph Services
+
+
+
+ ```bash
+ # Create systemd service for Zero
+ sudo tee /etc/systemd/system/dgraph-zero.service << 'EOF'
+ [Unit]
+ Description=Dgraph Zero
+ After=network.target
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph zero --my=10.0.1.10:5080 --replicas=3 --idx=1 --wal=/opt/dgraph/data/zw --bindall
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-zero
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Create systemd service for Alpha
+ sudo tee /etc/systemd/system/dgraph-alpha.service << 'EOF'
+ [Unit]
+ Description=Dgraph Alpha
+ After=network.target dgraph-zero.service
+ Requires=dgraph-zero.service
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph alpha --my=10.0.1.10:7080 --zero=10.0.1.10:5080,10.0.1.11:5080,10.0.1.12:5080 --postings=/opt/dgraph/data/p --wal=/opt/dgraph/data/w --bindall --security whitelist=0.0.0.0/0
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-alpha
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Enable and start services
+ sudo systemctl daemon-reload
+ sudo systemctl enable dgraph-zero dgraph-alpha
+ sudo systemctl start dgraph-zero
+ sleep 10
+ sudo systemctl start dgraph-alpha
+ ```
+
+
+
+ ```bash
+ # Create systemd service for Zero
+ sudo tee /etc/systemd/system/dgraph-zero.service << 'EOF'
+ [Unit]
+ Description=Dgraph Zero
+ After=network.target
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph zero --my=10.0.1.11:5080 --replicas=3 --peer=10.0.1.10:5080 --idx=2 --wal=/opt/dgraph/data/zw --bindall
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-zero
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Create systemd service for Alpha
+ sudo tee /etc/systemd/system/dgraph-alpha.service << 'EOF'
+ [Unit]
+ Description=Dgraph Alpha
+ After=network.target dgraph-zero.service
+ Requires=dgraph-zero.service
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph alpha --my=10.0.1.11:7080 --zero=10.0.1.10:5080,10.0.1.11:5080,10.0.1.12:5080 --postings=/opt/dgraph/data/p --wal=/opt/dgraph/data/w --bindall --security whitelist=0.0.0.0/0
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-alpha
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Enable and start services
+ sudo systemctl daemon-reload
+ sudo systemctl enable dgraph-zero dgraph-alpha
+ sudo systemctl start dgraph-zero
+ sleep 10
+ sudo systemctl start dgraph-alpha
+ ```
+
+
+
+ ```bash
+ # Create systemd service for Zero
+ sudo tee /etc/systemd/system/dgraph-zero.service << 'EOF'
+ [Unit]
+ Description=Dgraph Zero
+ After=network.target
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph zero --my=10.0.1.12:5080 --replicas=3 --peer=10.0.1.10:5080 --idx=3 --wal=/opt/dgraph/data/zw --bindall
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-zero
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Create systemd service for Alpha
+ sudo tee /etc/systemd/system/dgraph-alpha.service << 'EOF'
+ [Unit]
+ Description=Dgraph Alpha
+ After=network.target dgraph-zero.service
+ Requires=dgraph-zero.service
+
+ [Service]
+ Type=simple
+ User=dgraph
+ Group=dgraph
+ ExecStart=/usr/local/bin/dgraph alpha --my=10.0.1.12:7080 --zero=10.0.1.10:5080,10.0.1.11:5080,10.0.1.12:5080 --postings=/opt/dgraph/data/p --wal=/opt/dgraph/data/w --bindall --security whitelist=0.0.0.0/0
+ WorkingDirectory=/opt/dgraph
+ Restart=always
+ RestartSec=5
+ StandardOutput=journal
+ StandardError=journal
+ SyslogIdentifier=dgraph-alpha
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ # Enable and start services
+ sudo systemctl daemon-reload
+ sudo systemctl enable dgraph-zero dgraph-alpha
+ sudo systemctl start dgraph-zero
+ sleep 10
+ sudo systemctl start dgraph-alpha
+ ```
+
+
+
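+Once the services are running on all nodes, a quick health check confirms the
+cluster formed correctly (IPs shown are for node 1; repeat per node):
+
+```bash
+# Service status
+sudo systemctl status dgraph-zero dgraph-alpha --no-pager
+
+# Alpha health endpoint
+curl http://10.0.1.10:8080/health
+
+# Zero cluster state (lists group members)
+curl http://10.0.1.10:6080/state
+```
+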
+---
diff --git a/docs.json b/docs.json
index 834cdb65..a65cbd66 100644
--- a/docs.json
+++ b/docs.json
@@ -246,7 +246,8 @@
"dgraph/why-dgraph",
"dgraph/quickstart",
"dgraph/guides",
- "dgraph/v25-preview"
+ "dgraph/v25-preview",
+ "dgraph/self-hosted"
]
},
{
diff --git a/images/dgraph/self-managed/dg-cloud-export-1.png b/images/dgraph/self-managed/dg-cloud-export-1.png
new file mode 100644
index 00000000..7a4b6b4c
Binary files /dev/null and b/images/dgraph/self-managed/dg-cloud-export-1.png differ
diff --git a/images/dgraph/self-managed/dg-cloud-export-2.png b/images/dgraph/self-managed/dg-cloud-export-2.png
new file mode 100644
index 00000000..bfac4785
Binary files /dev/null and b/images/dgraph/self-managed/dg-cloud-export-2.png differ
diff --git a/images/dgraph/self-managed/dg-cloud-export-3.png b/images/dgraph/self-managed/dg-cloud-export-3.png
new file mode 100644
index 00000000..05477a1d
Binary files /dev/null and b/images/dgraph/self-managed/dg-cloud-export-3.png differ
diff --git a/images/dgraph/self-managed/dg-cloud-export-4.png b/images/dgraph/self-managed/dg-cloud-export-4.png
new file mode 100644
index 00000000..1d80e9e5
Binary files /dev/null and b/images/dgraph/self-managed/dg-cloud-export-4.png differ