Getting Started with Nstance
Nstance can be deployed on AWS, Google Cloud, or Proxmox, either as a single-cloud deployment or in complex multi-cloud and hybrid-cloud configurations.
To get you started quickly, this page covers only the common single-cloud deployment approach.
Note that we use the OpenTofu `tofu` command below; you can substitute `terraform` if required.
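For the commands on this page the two CLIs accept the same arguments, so you can select whichever is installed. As a sketch, a small hypothetical helper (`pick_tf_cli` is not part of Nstance, purely illustrative):

```shell
# Hypothetical helper (not part of Nstance): resolve which CLI to use,
# preferring tofu and falling back to terraform.
pick_tf_cli() {
  if command -v tofu >/dev/null 2>&1; then
    echo tofu
  elif command -v terraform >/dev/null 2>&1; then
    echo terraform
  else
    echo "neither tofu nor terraform found in PATH" >&2
    return 1
  fi
}

# Usage: TF_CLI=$(pick_tf_cli) && "$TF_CLI" init
```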
AWS
Create a new Nstance cluster in AWS using OpenTofu:
```shell
export AWS_PROFILE=
export AWS_REGION=us-west-2
curl -O https://raw.githubusercontent.com/nstance-dev/terraform-aws-nstance/refs/heads/main/examples/single-shard/main.tf
tofu init
tofu apply -var="profile=${AWS_PROFILE}" -var="region=${AWS_REGION}" -var="cluster_id=test"
```

Destroy:
```shell
export AWS_PROFILE=
export AWS_REGION=us-west-2

# 1. Destroy the nstance-server ASG to stop it from managing instances
tofu destroy -var="profile=${AWS_PROFILE}" -var="region=${AWS_REGION}" -var="cluster_id=test" -target=module.shard.aws_autoscaling_group.server

# 2. Terminate all Nstance-managed instances to avoid orphaned resources or subnet deletion failures
INSTANCE_IDS=$(aws --profile="${AWS_PROFILE}" --region="${AWS_REGION}" ec2 describe-instances \
  --filters "Name=tag:nstance:managed,Values=true" "Name=tag:nstance:cluster-id,Values=test" "Name=instance-state-name,Values=running,stopped,pending" \
  --query 'Reservations[].Instances[].InstanceId' --output text)
if [ -n "$INSTANCE_IDS" ]; then
  aws --profile="${AWS_PROFILE}" --region="${AWS_REGION}" ec2 terminate-instances --instance-ids $INSTANCE_IDS
  aws --profile="${AWS_PROFILE}" --region="${AWS_REGION}" ec2 wait instance-terminated --instance-ids $INSTANCE_IDS
fi
```
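With `--output text`, the query above returns the matching instance IDs as whitespace-separated text, which is why `$INSTANCE_IDS` is deliberately left unquoted in the `terminate-instances` call: the shell word-splits it into one argument per ID. A quick local check of that behaviour, with a stub function standing in for the AWS CLI (`fake_terminate` is purely illustrative):

```shell
# Simulated `--output text` result: IDs separated by tabs/spaces.
INSTANCE_IDS="i-0abc123	i-0def456 i-0123789"

# Stub standing in for `aws ec2 terminate-instances --instance-ids ...`:
# it just reports how many instance-id arguments it received.
fake_terminate() {
  echo "$#"
}

fake_terminate $INSTANCE_IDS     # unquoted: three arguments -> prints: 3
fake_terminate "$INSTANCE_IDS"   # quoted: one argument -> prints: 1
```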
```shell
# 3. Force-delete the S3 bucket holding nstance cluster state
export BUCKET_NAME=$(tofu state show 'module.cluster.aws_s3_bucket.nstance[0]' | awk -F'"' '/^[[:space:]]*bucket[[:space:]]*=/ { print $2 }')
aws --profile="${AWS_PROFILE}" s3 rb "s3://${BUCKET_NAME}" --force
tofu state rm 'module.cluster.aws_s3_bucket.nstance[0]'
```
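The `awk` one-liner above scrapes the quoted value off the `bucket = "..."` line in the human-readable `tofu state show` output. You can sanity-check the pattern against a mock of that output (the resource text and bucket name below are illustrative, not real state):

```shell
# Illustrative excerpt of `tofu state show` output (not real state).
STATE_SHOW='# module.cluster.aws_s3_bucket.nstance[0]:
resource "aws_s3_bucket" "nstance" {
    bucket = "nstance-test-a1b2c3"
    id     = "nstance-test-a1b2c3"
}'

# Same awk pattern as above: split on double quotes and print the
# value on the line that starts with `bucket =`.
BUCKET_NAME=$(echo "$STATE_SHOW" | awk -F'"' '/^[[:space:]]*bucket[[:space:]]*=/ { print $2 }')
echo "$BUCKET_NAME"   # prints: nstance-test-a1b2c3
```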
```shell
# 4. Destroy all remaining Nstance cluster resources
tofu destroy -var="profile=${AWS_PROFILE}" -var="region=${AWS_REGION}" -var="cluster_id=test"
```

See the OpenTofu/Terraform reference for full module documentation and advanced configurations.
Google Cloud
Create a new Nstance cluster in Google Cloud using OpenTofu:
```shell
export GCP_PROJECT=
export GCP_REGION=us-central1
gcloud auth application-default login # or set GOOGLE_APPLICATION_CREDENTIALS
curl -O https://raw.githubusercontent.com/nstance-dev/terraform-gcp-nstance/refs/heads/main/examples/single-shard/main.tf
tofu init
tofu apply -var="project=${GCP_PROJECT}" -var="region=${GCP_REGION}" -var="cluster_id=test"
```

Destroy:
```shell
export GCP_PROJECT=
export GCP_REGION=us-central1
gcloud auth application-default login # or set GOOGLE_APPLICATION_CREDENTIALS

# 1. Destroy the nstance-server instance group manager to stop it from managing instances
tofu destroy -var="project=${GCP_PROJECT}" -var="region=${GCP_REGION}" -var="cluster_id=test" -target=module.shard.google_compute_instance_group_manager.server

# 2. Delete all Nstance-managed instances to avoid orphaned resources or subnet deletion failures
gcloud compute instances list \
  --project="${GCP_PROJECT}" \
  --filter="labels.nstance-managed=true AND labels.nstance-cluster-id=test" \
  --format="value(name,zone)" | while read NAME ZONE; do
  gcloud compute instances delete "$NAME" --zone="$ZONE" --project="${GCP_PROJECT}" --quiet
done
```
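The `value(name,zone)` projection emits one instance per line with the requested fields separated by a tab, which is what `while read NAME ZONE` splits on. A local check of that parsing, with a stub in place of the real `gcloud compute instances delete` call (the instance names and zones below are made up):

```shell
# Build two illustrative lines of `--format="value(name,zone)"` output;
# value() separates the requested fields with a tab.
SAMPLE=$(printf 'nstance-vm-1\tus-central1-a\nnstance-vm-2\tus-central1-b\n')

# Stub standing in for `gcloud compute instances delete`.
fake_delete() { echo "delete $1 in $2"; }

printf '%s\n' "$SAMPLE" | while read NAME ZONE; do
  fake_delete "$NAME" "$ZONE"
done
# prints:
#   delete nstance-vm-1 in us-central1-a
#   delete nstance-vm-2 in us-central1-b
```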
```shell
# 3. Force-delete the GCS bucket holding nstance cluster state
export BUCKET_NAME=$(tofu state show 'module.cluster.google_storage_bucket.nstance[0]' | awk -F'"' '/^[[:space:]]*name[[:space:]]*=/ { print $2 }')
gcloud storage rm -r "gs://${BUCKET_NAME}"
tofu state rm 'module.cluster.google_storage_bucket.nstance[0]'
```
```shell
# 4. Destroy all remaining Nstance cluster resources
tofu destroy -var="project=${GCP_PROJECT}" -var="region=${GCP_REGION}" -var="cluster_id=test"
```

See the OpenTofu/Terraform reference for full module documentation and advanced configurations.
Proxmox
The easiest way to deploy Nstance on a Proxmox VE cluster is to use the provided Proxmox bootstrap scripts. You can run them on your Proxmox nodes via the Proxmox shell or SSH. There is one set of commands to run once per cluster, and another set to run once per node.
Proxmox VE & Object Storage
To deploy Nstance on a Proxmox VE cluster, you will need an object storage solution, e.g.:

1. Use a public cloud offering such as S3 or GCS
2. Use the built-in Proxmox Ceph with RGW for S3 compatibility
3. Run SeaweedFS on your Proxmox cluster or adjacent servers
The instructions below demonstrate how to deploy a single SeaweedFS process for dev/test setups, so you can get started quickly. For production, you'll want to either deploy SeaweedFS in an HA configuration or consider options 1 or 2 above. See the Nstance Proxmox docs for full integration details and requirements.
Run Once Per Proxmox VE Cluster
1. Set up object storage: use an existing S3-compatible backend, or for dev/test create a single-node SeaweedFS service with:

```shell
./seaweedfs-test-setup.sh --bucket nstance
```

2. Create a Proxmox API token (save the token secret from the output):
```shell
pveum user add nstance@pve
pveum aclmod / -user nstance@pve -role PVEVMAdmin,PVEDatastoreAdmin,PVEAuditor,PVESDNUser
pveum user token add nstance@pve nstance-token --privsep 0
export PROXMOX_TOKEN_SECRET='<token-secret>'
export PROXMOX_API_URL='https://localhost:8006/api2/json' # optional, this is the default
export PROXMOX_TOKEN_ID='nstance@pve!nstance-token'
```

3. Export credentials used by the remaining steps (the example AWS credentials below work with the SeaweedFS setup from above):
```shell
export AWS_ACCESS_KEY_ID=admin
export AWS_SECRET_ACCESS_KEY=admin
export AWS_ENDPOINT_URL=http://localhost:8333
export AWS_S3_USE_PATH_STYLE=true
```

4. Generate and upload the shard config:
```shell
./create-shard-config.sh \
  --vip 10.0.0.100 --shard dev --bucket nstance \
  --s3-endpoint http://localhost:8333 \
  --userdata ./my-userdata.sh # or https://example.com/userdata.sh
```

5. Generate a shared encryption key (then copy it to all nodes):
```shell
mkdir -p /etc/nstance
openssl rand 32 > /etc/nstance/encryption.key
```

6. Set up DHCP on the VM bridge (dev/test): if the VMs are on a private VLAN without an existing DHCP server, run this on one node only:
```shell
./dnsmasq-test-setup.sh --interface vmbr1
```
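The encryption key generated in step 5 must be byte-identical on every node, and (judging by the `openssl rand 32` call above; this is an inference, not a documented requirement) exactly 32 bytes long. After copying it around, a quick size check catches truncated or re-encoded copies, sketched here against a temporary file:

```shell
# Generate into a temporary path for illustration; on a real node this
# would be /etc/nstance/encryption.key, copied to every node.
KEY_FILE=$(mktemp)
openssl rand 32 > "$KEY_FILE"

# A truncated copy or an accidental base64 re-encoding breaks this check.
KEY_BYTES=$(wc -c < "$KEY_FILE")
if [ "$KEY_BYTES" -eq 32 ]; then
  echo "key OK"
else
  echo "key is ${KEY_BYTES} bytes, expected 32" >&2
fi
```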
Run On Each Proxmox VE Node
1. Enable NAT for VM internet access (if VMs are on a private bridge without a gateway; replace `<subnet-cidr>`):

```shell
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-nat-forward.conf
iptables -t nat -A POSTROUTING -s <subnet-cidr> -o vmbr0 -j MASQUERADE
apt-get install -y iptables-persistent && netfilter-persistent save
```

2. Export the env vars from earlier (replace `<token-secret>` with the value from step 2 of the per-cluster setup):

```shell
export PROXMOX_TOKEN_SECRET='<token-secret>'
export PROXMOX_API_URL='https://localhost:8006/api2/json' # optional, this is the default
export PROXMOX_TOKEN_ID='nstance@pve!nstance-token'
export AWS_ACCESS_KEY_ID=admin
export AWS_SECRET_ACCESS_KEY=admin
export AWS_ENDPOINT_URL=http://localhost:8333
export AWS_S3_USE_PATH_STYLE=true
```

3. Create the VM template on each node (skip if using shared storage and it was already created):
```shell
./vm-template-setup.sh
```

4. Install nstance-server, using keepalived for a virtual IP (replace `<virtual-ip>` with a valid IP address). Note: in production you may want a different solution than keepalived and VRRP; this is provided as a reference, and configuring your network is out of scope here.
```shell
NSTANCE_VERSION=$(curl -sL https://api.github.com/repos/nstance-dev/nstance/releases/latest | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4)
curl -fSL -o nstance-server.tar.gz "https://github.com/nstance-dev/nstance/releases/download/${NSTANCE_VERSION}/nstance-server_${NSTANCE_VERSION#v}_linux_amd64.tar.gz" && tar -xvzf nstance-server.tar.gz
./server-with-keepalived.sh \
  --server-binary ./nstance-server \
  --vip <virtual-ip> --shard dev --bucket nstance
```

Verify nstance-server is running:

```shell
systemctl status nstance-server
```
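The download step above builds the tarball name from the release tag: the GitHub API returns a tag like `v1.2.3`, and `${NSTANCE_VERSION#v}` strips the leading `v` for the asset filename. The parsing can be checked locally against a mock API response (the version number below is made up):

```shell
# Illustrative fragment of the GitHub `releases/latest` JSON response.
RESPONSE='{"tag_name": "v1.2.3", "name": "v1.2.3"}'

# Same extraction as above: grab the first quoted "tag_name" value.
NSTANCE_VERSION=$(echo "$RESPONSE" | grep -o '"tag_name": *"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$NSTANCE_VERSION"   # prints: v1.2.3

# ${NSTANCE_VERSION#v} strips the leading v for the tarball name.
echo "nstance-server_${NSTANCE_VERSION#v}_linux_amd64.tar.gz"
# prints: nstance-server_1.2.3_linux_amd64.tar.gz
```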