January 2026 Release

Release Date: January 2026

Release Type: First Public Preview Release

Release Version: v0.1.0


GPU AI Service

New Features

On-demand GPU compute

Launch GPU instances when you need them and shut them down when your work is done. This helps you control cost while running training, fine-tuning, or batch workloads.
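
Whether you work from the console or from scripts, the lifecycle is simple: create an instance, run your job, delete the instance. As a minimal sketch of an automated flow, assuming a hypothetical REST API (the base URL, paths, and field names below are illustrative, not the documented interface):

    import os

    import requests

    # Hypothetical endpoint and schema; illustrative only.
    BASE_URL = "https://api.neevcloud.example/v1"
    HEADERS = {"Authorization": f"Bearer {os.environ['NEEV_API_KEY']}"}

    def create_gpu_instance(gpu_type: str, template: str) -> str:
        """Launch a GPU instance and return its ID."""
        resp = requests.post(
            f"{BASE_URL}/gpu-instances",
            headers=HEADERS,
            json={"gpu_type": gpu_type, "template": template},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["id"]

    def delete_gpu_instance(instance_id: str) -> None:
        """Delete the instance as soon as the job finishes to stop charges."""
        resp = requests.delete(
            f"{BASE_URL}/gpu-instances/{instance_id}",
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()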

Multiple GPU options

Choose from supported GPU types designed for AI and machine learning workloads. Each option is optimized for high-performance compute tasks.

AI Templates

Each AI template launches within seconds, pre-configured with GPU drivers, CUDA libraries, notebooks, and web/SSH access, so engineers can immediately start training, fine-tuning, or serving models without touching the underlying infrastructure.

Fast and simple provisioning

Create GPU instances in a few steps without complex configuration. The platform handles the setup so you can start working quickly.

Secure access

Access GPU instances using SSH keys to ensure secure and controlled login.
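
For illustration, connecting from Python with the third-party paramiko library might look like the sketch below; the IP address, username, and key path are placeholders, and plain ssh -i from a terminal works just as well.

    import os

    import paramiko  # third-party SSH client: pip install paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        hostname="203.0.113.10",  # placeholder: your instance's public IP
        username="ubuntu",        # placeholder: the instance login user
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
    )
    _, stdout, _ = client.exec_command("nvidia-smi")  # confirm the GPU is visible
    print(stdout.read().decode())
    client.close()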

Persistent storage support

Attach persistent storage during GPU creation so your data is retained even if the GPU instance is deleted.

Usage-based billing

You are billed only for the time your GPU is running, helping reduce unnecessary spend.
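
With a hypothetical hourly rate, the cost of a run is simply running time multiplied by the rate:

    # Hypothetical rate for illustration; see the pricing page for real numbers.
    hourly_rate_usd = 2.50  # assumed price per GPU-hour
    hours_running = 6.0     # time from creation to deletion (billing starts at
                            # creation, not the running state; see Known Issues)
    print(f"Estimated cost: ${hourly_rate_usd * hours_running:.2f}")  # $15.00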

Known Issues

  • Payments are currently processed in USD, while the UI displays approximate amounts in local currencies. Expanded local currency support is planned for upcoming releases.

  • Users can currently remove themselves from an organization without administrator approval.

  • Users are charged from the time a deployment is created, not when it reaches the running state.

  • For OAuth users with only a first name, the last name is shown as "Last" in the UI.

  • GPU startup time may vary based on region and availability.

  • Detailed GPU health and performance telemetry is currently limited.

  • GPU resizing and live migration are not supported.


AI Inference

Model API

New Features

Instant model endpoints

Select a supported model and instantly get an API endpoint without managing infrastructure or deployments.
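
As a sketch of what a request to such an endpoint could look like, assuming a hypothetical URL, model ID, and chat-style request body (consult the API documentation for the real schema):

    import os

    import requests

    resp = requests.post(
        "https://inference.neevcloud.example/v1/chat/completions",  # hypothetical URL
        headers={"Authorization": f"Bearer {os.environ['NEEV_API_KEY']}"},
        json={
            "model": "example-llm-8b",  # placeholder ID from the model catalog
            "messages": [
                {"role": "user", "content": "Summarize GPU billing in one line."}
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())

The Authorization header carries the endpoint's API key, and each successful call counts toward pay-per-request billing, as described below.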

No setup required

There is no need to configure servers, containers, or scaling. The platform handles everything automatically.

Pay per request

Inference is billed based on actual usage, so you only pay for the requests you make.

Secure API access

Each endpoint is protected with API keys to control access and usage.

Known Issues

  • The model catalog is limited in the first release.

  • Uploading custom models is not supported yet.


Model Playground

New Features

Browser-based model testing

Test models directly from the web interface without writing any code.

Real-time prompt experimentation

Change prompts and parameters, then see responses immediately to understand model behavior.

Quick exploration

Use the playground to evaluate models before integrating them into your application.

Known Issues

  • Advanced tuning options are not available.

  • Playground sessions cannot be saved or shared.

  • Streaming is not supported in the Playground.


Storage

New Features

Persistent block storage

Create storage volumes that can be attached to GPU instances for datasets, checkpoints, and results.

Attach during GPU creation

Storage can be added while launching a GPU instance to ensure data persistence from the start.
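
Continuing the hypothetical create-instance sketch from the GPU AI Service section, attaching a volume at launch could be expressed as an extra field in the request body (field names are illustrative, not the real schema):

    # Illustrative payload only; the actual request schema may differ.
    create_payload = {
        "gpu_type": "example-gpu-type",
        "template": "example-notebook-template",
        "volumes": [
            {"size_gb": 500, "mount_path": "/data"},  # persists after the
                                                      # instance is deleted
        ],
    }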

Data durability

Data remains available even if the GPU instance is deleted, reducing the risk of data loss.

Simple pricing model

Storage is priced based on the amount of space used, making costs easy to understand.

Known Issues

  • Storage size cannot be changed after creation.


Platform Features

New Features

Refer & Earn

Refer new users and earn credits. Credits can be used for GPU, inference, and storage usage. Referrals are tracked automatically.

Support

In-app support is available for product and billing questions, with dedicated support to help users get started.


Notes

  • This is the first public preview release of the NeevCloud platform.

  • Some features and APIs may evolve rapidly as we incorporate customer feedback.

  • New GPU types, regions, and AI services will be added progressively.

  • Performance characteristics may change as the platform scales and optimizes.

  • Documentation and tooling will continue to improve over time.

  • We encourage users to share feedback to help shape the Product Roadmap.
