Understanding Templates

What Are NeevCloud Templates?

Templates are pre-configured, production-ready GPU environments designed for specific AI/ML workflows. Think of them as snapshots of perfectly configured systems that you can deploy instantly.

Each template includes:

  • Operating System: Optimized Linux distributions for GPU computing

  • GPU Drivers: Correct NVIDIA/AMD/Intel drivers for your hardware

  • CUDA Toolkit: Pre-installed CUDA libraries with correct versions

  • Deep Learning Frameworks: PyTorch, TensorFlow, JAX, or other frameworks fully configured

  • Development Tools: Jupyter Notebook, VS Code Server, or specialized interfaces

  • Python Environment: Pre-configured with common data science libraries (NumPy, Pandas, Matplotlib)

  • System Dependencies: All necessary system-level packages and libraries
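
On a freshly deployed instance, you can confirm what the template ships with. A minimal sketch using standard commands (nvcc is present on CUDA toolkit templates; the exact packages and versions depend on the template you chose):

# Check the GPU driver and visible GPUs
nvidia-smi

# Check the CUDA toolkit version
nvcc --version

# List the pre-installed deep learning and data science packages
pip3 list | grep -Ei 'torch|tensorflow|jax|numpy|pandas|jupyter'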

Why Templates Transform Your Deployment Experience

Problem: Manual Environment Configuration

Without templates, you follow this tedious process:

# Install CUDA (15-20 minutes)
# (assumes the NVIDIA CUDA apt repository has already been added)
sudo apt-get update
sudo apt-get install cuda-toolkit-11-8

# Install cuDNN (5-10 minutes)
# Download the cuDNN .deb packages manually from NVIDIA, then:
sudo dpkg -i libcudnn8_*.deb

# Install Python and pip (2-3 minutes)
sudo apt-get install python3-pip

# Install PyTorch with correct CUDA version (10-15 minutes)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# Install additional libraries (5-10 minutes)
pip3 install transformers accelerate bitsandbytes

# Troubleshoot compatibility issues (???)

Total time: 40-60 minutes minimum, often longer with debugging.

Solution: Template-Based Deployment

With NeevCloud templates:

  1. Select "PyTorch 2.1" template

  2. Click "Deploy GPU"

  3. Wait 10-20 seconds

  4. Start training

Total time: 10-20 seconds.
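
Once the instance is up, a single command confirms the GPU is ready for real work. A minimal sketch, assuming a PyTorch template (it runs a small matrix multiplication on the GPU):

# Exercise the GPU end to end with a small matrix multiplication
python3 -c "import torch; x = torch.randn(4096, 4096, device='cuda'); print((x @ x).norm().item())"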

Available Template Categories

NeevCloud offers 20+ templates across several categories:

Deep Learning Frameworks

  • PyTorch (multiple versions with different CUDA support)

  • TensorFlow (optimized for GPU training)

  • JAX (for high-performance numerical computing)

  • MXNet (for scalable deep learning)

Specialized AI Workloads

  • Stable Diffusion (for generative AI and image synthesis)

  • LLM Fine-tuning (with Transformers, PEFT, LoRA pre-installed)

  • Computer Vision (with OpenCV, YOLO, and detection frameworks)

  • NLP Pipelines (with spaCy, NLTK, Hugging Face tools)

Scientific Computing

  • RAPIDS (for GPU-accelerated data science)

  • CuPy (NumPy-compatible GPU arrays; see the quick smoke test after this list)

  • Numba (JIT compilation for Python)
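
As a concrete example of the GPU-array bullet above, here is a CuPy smoke test (a minimal sketch, assuming a template with CuPy pre-installed):

# Allocate an array on the GPU and reduce it, mirroring the equivalent NumPy code
python3 -c "import cupy as cp; x = cp.arange(1_000_000); print(float((x ** 2).sum()))"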

Development Environments

  • JupyterLab (interactive notebook environment)

  • VS Code Server (full IDE in your browser)

  • ComfyUI (for diffusion model workflows)

How to Choose the Right Template

Your template choice depends on your specific workload:

For Model Training:

  • Choose templates matching your framework (PyTorch/TensorFlow/JAX)

  • Verify CUDA version compatibility with your code (a quick check follows this list)

  • Consider templates with training-specific tools (like Weights & Biases integration)
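
For the compatibility check above, a minimal sketch assuming a PyTorch template: print the framework version, the CUDA version it was built against, and whether it can see the GPU.

# Confirm the framework's CUDA build matches what your code expects
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"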

For Inference Deployment:

  • Select lighter templates with inference-optimized libraries

  • Look for templates with FastAPI or Flask pre-installed for serving models

  • Consider TensorRT-enabled templates for maximum inference speed

For Research and Experimentation:

  • Use JupyterLab templates for interactive development

  • Choose templates with comprehensive library sets

  • Consider templates with visualization tools included

For Generative AI:

  • Select Stable Diffusion templates for image generation

  • Use LLM-specific templates for language model work

  • Look for templates with model quantization tools (bitsandbytes, GPTQ)

Template Version Compatibility

Each template clearly specifies its component versions. For example, the PyTorch 2.1 + CUDA 12.1 template includes:

  • Ubuntu 22.04 LTS

  • CUDA 12.1

  • cuDNN 8.9

  • PyTorch 2.1.0

  • Python 3.10

  • Jupyter Notebook 6.5

This transparency helps you ensure compatibility with your existing code and dependencies.
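
Each listed component can also be verified directly on a deployed instance. A minimal sketch, assuming a PyTorch template (cuDNN reports its version as an integer, e.g. a value in the 8900 range for cuDNN 8.9):

# Confirm the OS release and Python version
lsb_release -ds
python3 --version

# Confirm the cuDNN build the framework is using
python3 -c "import torch; print(torch.backends.cudnn.version())"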

Using Templates During Deployment

When you deploy a GPU instance, the template selection happens in Step 2 of configuration:

  1. Click the Change Template button

  2. Browse or search for your desired template

  3. Review the template's included packages (click for details)

  4. Select the template that matches your requirements

  5. Continue with deployment

The template you choose automatically provisions an instance with all specified software pre-installed and configured.

Custom Dependencies on Top of Templates

Templates provide a strong foundation, but you can still install additional packages on top of any deployed instance.
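
For example, a minimal sketch of adding project-specific dependencies (the package names here are placeholders for whatever your project actually needs):

# Install an extra Python package on top of the template's environment
pip3 install sentencepiece

# Install a system-level dependency if your project requires one
sudo apt-get update && sudo apt-get install -y ffmpeg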

However, the template handles 90% of common requirements, so you only need to install truly custom dependencies.

Template Updates and Maintenance

NeevCloud regularly updates templates to include:

  • Latest framework versions

  • Security patches

  • Performance optimizations

  • New libraries and tools

When you deploy an instance, you always get the most recent version of your selected template. This ensures you're working with up-to-date, secure software without manual maintenance.
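
If your project needs exact reproducibility across deployments, it is worth pinning your own dependency versions on top of the template. A minimal sketch:

# Capture the exact package versions of the current instance
pip3 freeze > requirements.txt

# Recreate the same Python environment on a future instance
pip3 install -r requirements.txt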
