Sesterce Cloud Doc
Select your regions


Last updated 2 months ago


Sesterce AI Inference lets you deploy your model close to your end users. Depending on the flavor you choose, you can select one or several regions.

Our edge nodes, spread across the world, act as entry points that redirect each end user to the closest inference instance.
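Conceptually, an edge node picks the deployment with the lowest latency for the user it is serving. The sketch below illustrates that idea only; the region names and the latency probe are placeholders, not Sesterce's actual region identifiers or API.

```python
import random

# Placeholder region identifiers -- illustrative, not Sesterce's real region names.
REGIONS = ["eu-west", "us-east", "ap-southeast"]

def measure_latency_ms(region: str) -> float:
    """Stand-in for a real latency probe (e.g. a ping or HTTP HEAD
    against the region's endpoint). Faked here with random values."""
    return random.uniform(5.0, 200.0)

def closest_region(regions, probe):
    """Return the region with the lowest probed latency -- conceptually
    what an edge node does when redirecting an end user."""
    return min(regions, key=probe)

print(closest_region(REGIONS, measure_latency_ms))
```

Taking the probe as a parameter keeps the selection logic separate from how latency is actually measured, which is the part that varies per deployment.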
