Expose AI model from Hugging Face using vLLM

Do you want to expose an API endpoint that lets your end-users access an AI model from Hugging Face? This tutorial is for you!


Choose your VM offer

The first step is to choose the VM offer tailored to your needs on Sesterce Cloud. The choice of instance will depend on several factors, such as the size of your model and the number of end-users.

You can consider the following examples, according to your use-case:

Small Scale (1-10 simultaneous users)

  • 7B params: L4 (24GB VRAM)

  • 13B params: L40S (48GB VRAM)

  • Example: Mistral 7B with 8 users at 4K context = 16GB + (8 × 2GB) = 32GB VRAM (see the sizing sketch after this list)

Medium Scale (10-50 users)

  • 7B params: 2-4x L40S in parallel

  • 13B params: 4x L40S or H100

  • Recommended configuration: Load balancing between multiple GPUs

Large Scale (50+ users)

  • 7B params: 8x L40S or 2x H100

  • 13B+ params: H100 multi-GPU

  • Use quantization techniques (INT8/INT4)
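
To make the arithmetic behind these examples explicit, here is a minimal Python sketch of the estimate used above: model weights plus a fixed per-user margin for the KV cache. The 2GB-per-user margin and the 2GB runtime overhead are assumptions taken from the Mistral 7B example, not measured values.

# Rough VRAM estimate mirroring the sizing examples above.
# Assumptions (not measurements): FP16 weights (2 bytes per parameter),
# ~2GB runtime overhead, and ~2GB of KV cache per simultaneous user at 4K context.
def estimate_vram_gb(params_billion: float,
                     users: int,
                     bytes_per_param: float = 2.0,  # 1.0 for INT8, 0.5 for INT4 quantization
                     overhead_gb: float = 2.0,
                     gb_per_user: float = 2.0) -> float:
    weights_gb = params_billion * bytes_per_param  # 7B in FP16 is ~14GB
    return weights_gb + overhead_gb + users * gb_per_user

# Mistral 7B with 8 users: ~14GB weights + 2GB overhead + 8 x 2GB = 32GB VRAM
print(estimate_vram_gb(7, 8))  # -> 32.0, fits an L40S (48GB)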

Set-up your instance

Well done, you are now able to configure your VM! Please find here the steps required to do it. In the "Images" section, choose the vLLM option. The instance launch usually takes around 5 minutes.

You can now fill in your Hugging Face Token (see "Create token from Hugging Face") and your Model ID (see "Get your Model ID from Hugging Face").

Connect to your instance

When your VM instance is launched, you'll be able to get an SSH command to connect to it, such as ssh sesterce@<IP_MACHINE>.

Pay attention: the docker pull is still running! Wait until it has finished before typing the next command.

When the docker pull has finished, start the vLLM container with the following command:

docker run -d -e HF_TOKEN=<HUGGING_FACE_TOKEN> --runtime nvidia --gpus all --net=host --ipc=host vllm/vllm-openai --model <MODEL_ID>

Well done! The model is now accessible from the following endpoint:

http://<IP_MACHINE>:8000/v1/models

Use your model

Once the container is running, you'll be able to use the model by typing the following command (the container runs on port 8000). Make sure you replace the placeholders in the example with your machine IP and your Model ID.

curl -X POST http://<IP_MACHINE>:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "<MODEL_ID>","prompt": "Hello world","max_tokens": 50,"temperature": 0}'
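
If you prefer to call the endpoint from application code rather than curl, here is a minimal Python sketch using the requests library against vLLM's OpenAI-compatible API. <IP_MACHINE> and <MODEL_ID> are the same placeholders as in the curl example.

import requests

# Same placeholders as the curl example above: your instance IP and your Model ID.
response = requests.post(
    "http://<IP_MACHINE>:8000/v1/completions",
    json={
        "model": "<MODEL_ID>",
        "prompt": "Hello world",
        "max_tokens": 50,
        "temperature": 0,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])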

Common errors

Instance RAM insufficient

Make sure you choose an instance with enough RAM and GPU VRAM to run your model (see the sizing guidance above).

Container not running yet

Wait until the container is running before accessing the model. You can use the following command to check the container status:

docker ps

  • If the list is empty, it means the launch failed.

  • Otherwise, you will see the container name, which you can use with the following command:

docker logs <CONTAINER_NAME>
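
If the container is up but the model is still loading, the API may not answer yet. Here is a small, optional Python sketch that polls the /v1/models endpoint shown earlier until vLLM responds; the 10-minute budget is an arbitrary assumption.

import time
import requests

BASE_URL = "http://<IP_MACHINE>:8000"  # replace with your instance IP

# Poll the OpenAI-compatible /v1/models route until vLLM answers;
# the container needs time to download the weights and start the server.
for _ in range(60):
    try:
        r = requests.get(BASE_URL + "/v1/models", timeout=5)
        if r.status_code == 200:
            print("vLLM is ready:", r.json())
            break
    except requests.exceptions.ConnectionError:
        pass
    time.sleep(10)
else:
    print("vLLM did not come up after 10 minutes; check docker logs <CONTAINER_NAME>.")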

Port used or unavailable

You can check the port status with the following command:

ss -tuln | grep <PORT>
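
As a quick alternative to ss, the following Python sketch checks locally whether something is already listening on a given port.

import socket

# True if something already answers on the port (checked from the machine itself).
def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

print(port_in_use(8000))

If port 8000 is indeed taken, you can stop the conflicting process, or pass a different port to vLLM (e.g. appending --port 8001 to the docker run command above, assuming the vllm/vllm-openai entrypoint forwards it to the server as it does for --model) and adjust the URLs accordingly.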

