Quick Start Guide
Deploy your first AI cluster on Nailabx infrastructure in minutes. Get up and running with GPU compute, storage, and networking.
Prerequisites
- Nailabx Account: Sign up at nailabx.com
- API Access: Generate your API key from the dashboard
- CLI Tool: Install Nailabx CLI (optional but recommended)
- SSH Key: Add your public SSH key for secure access
Install the CLI
Linux/macOS:
curl -fsSL https://cli.nailabx.com/install.sh | bash
Windows (PowerShell):
iwr https://cli.nailabx.com/install.ps1 | iex
Authenticate
nailabx auth login
This will open your browser to authenticate. Alternatively, use an API token:
export NAILABX_API_TOKEN="your-api-token-here"
Deploy Your First Cluster
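If you script against the HTTP API directly rather than through the CLI, you can read the token from the environment and send it as a standard Bearer header. A minimal sketch in Python; the `auth_headers` helper is illustrative, not part of any Nailabx SDK:

```python
import os


def auth_headers() -> dict:
    """Build Bearer-auth headers from the NAILABX_API_TOKEN env var."""
    token = os.environ.get("NAILABX_API_TOKEN", "")
    if not token:
        # Fail early rather than sending unauthenticated requests.
        raise RuntimeError("NAILABX_API_TOKEN is not set")
    return {"Authorization": f"Bearer {token}"}
```

Pass the returned dict as the request headers in whatever HTTP client you use.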
Create a GPU cluster with 2x NVIDIA H100 GPUs for AI training:
nailabx clusters create \
--name my-first-cluster \
--gpu-type h100 \
--gpu-count 2 \
--region us-east-1 \
--storage 500GB
This command provisions a cluster with 2x H100 GPUs, 500GB NVMe storage, and connects it to our Tier III+ datacenter in US-East-1.
Connect via SSH
Once your cluster is provisioned (usually takes 2-3 minutes), connect via SSH:
nailabx clusters ssh my-first-cluster
Or use standard SSH with the provided IP address and your configured SSH key.
Run Your First AI Workload
Test your cluster with a simple PyTorch workload:
# Install PyTorch with CUDA support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Verify GPU access
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
# Run a simple training script
python train.py --epochs 10 --batch-size 32
Your cluster comes pre-configured with CUDA 12.1, cuDNN 8.9, and common ML frameworks.
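The guide invokes train.py without providing it. As a placeholder, here is a minimal script that accepts the same --epochs and --batch-size flags, trains a toy linear-regression model on synthetic data, and falls back to CPU when no GPU is visible. It is a sketch for smoke-testing the cluster, not a template for real workloads:

```python
import argparse

import torch
from torch import nn


def train(epochs: int, batch_size: int) -> float:
    """Fit y = 3x + 1 on synthetic data; returns the final batch loss."""
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Synthetic regression data: y = 3x + 1 plus a little noise.
    x = torch.randn(1024, 1)
    y = 3 * x + 1 + 0.1 * torch.randn(1024, 1)
    x, y = x.to(device), y.to(device)

    model = nn.Linear(1, 1).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    loss = torch.tensor(0.0)
    for epoch in range(epochs):
        for i in range(0, len(x), batch_size):
            opt.zero_grad()
            loss = loss_fn(model(x[i:i + batch_size]), y[i:i + batch_size])
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss={loss.item():.4f}")
    return loss.item()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--batch-size", type=int, default=32)
    # parse_known_args tolerates extra arguments when run indirectly.
    args, _ = parser.parse_known_args()
    train(args.epochs, args.batch_size)
```

Because the script checks torch.cuda.is_available() itself, the same file runs on your laptop and on the cluster; only the device string changes.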
Common Use Cases
LLM Fine-tuning
Fine-tune large language models using Hugging Face Transformers:
nailabx clusters create \
--gpu-type h100 \
--gpu-count 8 \
--storage 2TB
Inference API
Deploy a production inference endpoint:
nailabx deploy inference \
--model-path ./model.pt \
--port 8080 \
--replicas 3
Distributed Training
Scale training across multiple nodes:
nailabx clusters create \
--gpu-type h100 \
--gpu-count 16 \
--nodes 4 \
--interconnect nvlink
Jupyter Notebooks
Launch a GPU-enabled Jupyter environment:
nailabx notebooks create \
--gpu-type a100 \
--memory 64GB \
--storage 500GB
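To exercise an inference endpoint like the one deployed under "Inference API" above, a small Python client is enough. The /predict route and the JSON payload shape below are assumptions for illustration; check your deployment's actual API contract:

```python
import json
import urllib.request


def predict(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to an inference endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


# Example call against a replica exposed on port 8080 (hypothetical route):
# result = predict("http://<cluster-ip>:8080/predict", {"inputs": [1.0, 2.0]})
```

With --replicas 3, requests are spread across replicas behind a single endpoint, so the client code stays the same regardless of replica count.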