Sesterce Cloud Doc

Welcome to Sesterce Cloud



About Sesterce

Our Company

Sesterce shapes the future of AI with high-performance GPU clusters ranging from 100 to 15,000 GPUs, offering unmatched scalability and efficiency. Our flexible cloud and cluster solutions provide an optimal experience for AI builders.

Our Mission

We're building the future of AI on the largest computing platform available, ensuring sustainability through renewable energy, and providing AI builders with the best experience.

Our offer

Our GPUs

The Sesterce On-Demand cloud platform provides its users with a wide choice of GPUs, including the B200, H200, H100, A100 80G, A100, L40S, L40, RTX4090, A6000, A5000 and RTX6000 Ada; current hourly prices are listed below.

GPU            Hourly Price
B200           $6.05
H200           $3.58
H100           $2.48
A100 80G       $1.65
A100           $1.29
A6000          $0.57
RTX4090        $0.66
L40S           $1.10
L40            $0.99
A5000          $0.41
RTX6000 Ada    $0.97

Our platform

We offer a smooth and secure on-demand GPU cloud platform that lets you launch an instance in just a few clicks, with secure access to your Pod via SSH key. It is an all-in-one tool for training your AI models: the storage volumes you can create at the click of a button let you reuse your datasets as often as you need.

👋 See our tutorial to choose the compute instance that suits your AI workload use case.
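Given the hourly prices above, estimating the cost of a run is simple arithmetic: per-GPU hourly rate times GPU count times hours. The sketch below illustrates this; it is not an official billing formula, and the `estimate_cost` helper is our own illustration, not part of the platform.

```python
# Hedged sketch: back-of-the-envelope cost estimate from the published
# per-GPU hourly rates listed above. Not an official billing calculation.

HOURLY_RATES = {
    "B200": 6.05, "H200": 3.58, "H100": 2.48, "A100 80G": 1.65,
    "A100": 1.29, "A6000": 0.57, "RTX4090": 0.66, "L40S": 1.10,
    "L40": 0.99, "A5000": 0.41, "RTX6000 Ada": 0.97,
}

def estimate_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total on-demand cost: per-GPU hourly rate x GPU count x hours."""
    return round(HOURLY_RATES[gpu] * num_gpus * hours, 2)

# Example: a 24-hour fine-tuning run on 8x H100
print(estimate_cost("H100", 8, 24))  # 2.48 * 8 * 24 = 476.16
```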
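The platform description above mentions secure SSH access to your Pod. A minimal sketch of the usual workflow, assuming the dashboard accepts a standard public key (the key filename, comment, and instance address below are placeholders, not platform requirements):

```shell
# Hedged sketch: standard OpenSSH key setup for instance access.
set -eu

# Remove any previous placeholder key so ssh-keygen does not prompt.
rm -f ./sesterce_key ./sesterce_key.pub

# 1. Generate an ed25519 key pair (empty passphrase here for brevity).
ssh-keygen -t ed25519 -f ./sesterce_key -N "" -C "sesterce-cloud"

# 2. Paste the public key into the SSH Keys page of the dashboard.
cat ./sesterce_key.pub

# 3. Connect to the instance using the address shown in the dashboard
#    (placeholder shown, so this line is left commented out):
# ssh -i ./sesterce_key user@<instance-ip>
```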