
On Demand GPU Cluster

Boost your AI projects with on-demand access to a scalable GPU cluster

Flexible sizing

Scale your machine learning training with a flexible on-demand GPU cluster. Choose the exact capacity you need, from 2 to 127 nodes, so you don't overcommit and pay only for what you use.

Top-tier performance

Train your models with NVIDIA H100 Tensor Core GPUs and Spectrum-X interconnects for seamless, high-performance distributed AI training without interruptions.

No long-term commitment

Use the cluster for as long as you need – from one week to a few months. You decide when to start and stop, without the burden of long-term contracts. Ideal for temporary or bursty AI workloads.

Boost Innovation Sustainably: 50% Less Power

DC5, in the PAR2 region, is one of Europe's greenest data centers, powered entirely by renewable wind and hydro energy (GO-certified) and cooled with ultra-efficient free and adiabatic cooling. With a PUE of 1.16 (vs. the 1.55 industry average), it slashes energy use by 30-50% compared to traditional data centers.

Discover Scaleway's environmental commitments

Tech specs

From 16 to 504 GPUs to support your development

Reserve the cluster size that fits your needs, from 16 to 504 GPUs, to secure your access to efficient NVIDIA H100 Tensor Core GPUs.

Fast networking and GPU-to-GPU communication for distributed training

NVIDIA HGX H100 with NVLink and Spectrum-X networking removes the key communication bottleneck between GPUs, making it one of the top solutions on the market for distributed training.
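For illustration only, here is a minimal sketch of what multi-node distributed training on such a cluster can look like, assuming PyTorch with the NCCL backend and a torchrun launch; the model, batch size and node counts below are placeholders, not part of the Scaleway offering.

```python
# Minimal multi-node data-parallel training sketch (assumption: PyTorch + NCCL,
# launched with torchrun on every node; model and sizes are placeholders).
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()   # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).square().mean()
        loss.backward()      # gradients are all-reduced across every GPU in the job
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

On a two-node, 16-GPU job, the same script would typically be started on each node with something like `torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py`, where the endpoint and port are illustrative; the inter-node all-reduce traffic is exactly what the Spectrum-X fabric is designed to carry.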

Private and secure environment

NVIDIA Spectrum-X, NVIDIA's latest networking technology, enables us to build isolated multi-tenant clusters hosted in the same adiabatically cooled data center.

Rent an On Demand Cluster for a week to speed up training that needs the aggregate GPU memory of multiple nodes (see the sketch below for what that looks like in practice).

Tell us more about your needs!
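As a hedged illustration of the "aggregate GPU memory" point above: with PyTorch's FullyShardedDataParallel (FSDP), a model too large for a single node's GPUs can be sharded across every GPU in the job, so the cluster's combined memory holds the parameters, gradients and optimizer state. The model size and launch parameters below are assumptions, not a Scaleway-specific recipe.

```python
# Sketch: sharding a large model across the cluster's aggregate GPU memory with
# PyTorch FSDP (assumptions: torchrun launch, NCCL backend; sizes are illustrative).
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A stack of large layers standing in for a model that exceeds one node's memory.
    model = torch.nn.Sequential(*[torch.nn.Linear(8192, 8192) for _ in range(16)])
    model = FSDP(model.cuda())   # parameters, gradients and optimizer state are sharded

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 8192, device="cuda")
    loss = model(x).square().mean()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```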

Use cases

Is a cluster too big a step? Maybe start with a GPU Instance

H100 PCIe GPU Instance

€2.73/hour (~€1,992.90/month)

Accelerate your model training and inference with the most high-end AI chip on the market!

Learn more

L40S GPU Instance

€1.40/hour (~€1,022/month)

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than L4 and cheaper than H100 PCIe.

Learn more

L4 GPU Instance

€0.75/hour (~€548/month)

Optimize the costs of your AI infrastructure with a versatile entry-level GPU.

Learn more