Inference Instances
This endpoint allows you to view the AI models available for deployment, helping you select the right model for your needs.
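As a rough sketch, a call to this listing endpoint might look as follows in Python. The base URL, the path, and the name of the header carrying the API key secret are all assumptions made for illustration; this page only states that the API key secret is sent through a header.

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical base URL
HEADERS = {"X-API-Key": "your-api-key-secret"}   # header name is an assumption

# List the AI models available for deployment (path is illustrative).
resp = requests.get(f"{BASE_URL}/v1/inference/models", headers=HEADERS)
resp.raise_for_status()
for model in resp.json():
    print(model)
```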
You have several hardware options available through the AI inference feature of Sesterce Cloud (you can consult this section for more information). This endpoint allows you to explore the hardware options available for deploying AI instances, which is essential for planning resources and managing latency.
Identify the regions available for deploying AI instances, which is important for compliance and latency considerations.
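A combined sketch for these two discovery endpoints, under the same assumptions about base URL, paths, and header name:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

# Discover hardware flavors and deployment regions (paths are illustrative).
flavors = requests.get(f"{BASE_URL}/v1/inference/flavors", headers=HEADERS).json()
regions = requests.get(f"{BASE_URL}/v1/inference/regions", headers=HEADERS).json()
print(flavors, regions)
```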
To manage your registries for storing and accessing AI models, use the following endpoint:
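For example, creating a registry might look like the sketch below. The values are the examples shown in this page's API reference; the field names and the path are assumptions.

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

# Example values from the API reference; field names are assumptions.
payload = {
    "name": "example-registry",
    "url": "docker.io/library/user/image:tag",
    "username": "someusername",
    "password": "securepassword",
}
resp = requests.post(f"{BASE_URL}/v1/inference/registries",
                     headers=HEADERS, json=payload)
resp.raise_for_status()
```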
To modify registry details to ensure they meet current security and access needs, use the following endpoint:
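A minimal update sketch reusing the reference's example values; the HTTP method, path, identifier, and field names are assumptions:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

registry_id = "example-registry"                 # identifier form is an assumption
payload = {
    "url": "docker.io/library/user/image:tag",
    "username": "someusername",
    "password": "securepassword",
}
resp = requests.patch(f"{BASE_URL}/v1/inference/registries/{registry_id}",
                      headers=HEADERS, json=payload)
resp.raise_for_status()
```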
The following endpoint allows you to remove outdated or unused registries and keep your environment clean.
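Deletion is a single call. The path and identifier below are assumptions; a successful deletion returns an empty 204 No Content response.

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

resp = requests.delete(f"{BASE_URL}/v1/inference/registries/example-registry",
                       headers=HEADERS)
assert resp.status_code == 204  # No Content on success
```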
The time has come! You can now deploy a new AI inference instance to scale your applications and services, or put an existing model into production! Use the following endpoint to perform this action.
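As a sketch, a deployment request might look as follows. The values come from this page's API reference examples, but every field name is an assumption, including which resource each ID refers to.

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

# Values from the API reference examples; all field names are assumptions.
payload = {
    "name": "example-inference-instance",
    "description": "example description.",
    "flavor_id": "201a99c3-7cd4-4831-865e-b261082fda4b",    # assumed hardware flavor
    "port": 80,                                             # assumed listening port
    "timeout": 120,                                         # assumed timeout, in seconds
    "envs": {"PORT": "3333"},                               # environment variables
    "command": "npx create-llama",                          # startup command
    "registry_id": "59651ba4-657a-41d4-8c42-00f34f732375",  # assumed registry or region reference
    "image_id": "6721058be81810b9dd045f40",                 # assumed image or model reference
    "volume_ids": ["6721058be81810b9dd045f40"],             # assumed related resource list
}
resp = requests.post(f"{BASE_URL}/v1/inference/instances",
                     headers=HEADERS, json=payload)
resp.raise_for_status()
instance = resp.json()
```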
This endpoint allows you to activate an AI inference instance to begin processing tasks and data.
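Starting an instance is then a single call; the "/start" action path is an assumption, and the ID is the example shown in the reference.

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

instance_id = "59651ba4-657a-41d4-8c42-00f34f732375"  # example ID from the reference
resp = requests.post(f"{BASE_URL}/v1/inference/instances/{instance_id}/start",
                     headers=HEADERS)  # "/start" action path is an assumption
resp.raise_for_status()
```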
Use the following endpoint to monitor your active AI instances and keep track of resources and performance.
Retrieve detailed information about a specific AI instance for management and troubleshooting.
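The two monitoring calls (listing instances, then fetching one instance's details) might look like this, under the same assumptions about base URL, paths, and header name:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

# List active instances, then fetch one instance's details (paths illustrative).
instances = requests.get(f"{BASE_URL}/v1/inference/instances", headers=HEADERS).json()
instance_id = "59651ba4-657a-41d4-8c42-00f34f732375"  # example ID from the reference
detail = requests.get(f"{BASE_URL}/v1/inference/instances/{instance_id}",
                      headers=HEADERS).json()
```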
This endpoint allows you to estimate costs for your running AI instances, helping with budget planning.
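A cost-estimation sketch; the path here is entirely an assumption, since this page does not spell it out:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

# Estimate the running cost of your instances (path is an assumption).
costs = requests.get(f"{BASE_URL}/v1/inference/instances/cost", headers=HEADERS).json()
print(costs)
```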
This endpoint allows you to modify existing AI instances to adapt to changing project needs. This is particularly useful if you need to update your hardware flavor and/or autoscaling limits according to the usage of your dedicated endpoint:
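An update sketch using the reference's example values; the HTTP method, path, and field names are assumptions:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

instance_id = "59651ba4-657a-41d4-8c42-00f34f732375"  # example ID from the reference
# Values from the API reference examples; all field names are assumptions.
payload = {
    "description": "example description.",
    "flavor_id": "201a99c3-7cd4-4831-865e-b261082fda4b",  # e.g. a new hardware flavor
    "port": 80,
    "timeout": 120,
    "envs": {"PORT": "3333"},
    "command": "npx create-llama",
    "image_id": "6721058be81810b9dd045f40",
    "volume_ids": ["6721058be81810b9dd045f40"],
}
resp = requests.patch(f"{BASE_URL}/v1/inference/instances/{instance_id}",
                      headers=HEADERS, json=payload)
resp.raise_for_status()
```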
If you need to pause an AI instance to conserve resources and manage costs, use the following endpoint:
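Pausing follows the same action pattern as starting; the "/pause" path is an assumption:

```python
import requests

BASE_URL = "https://api.sesterce.cloud"          # hypothetical
HEADERS = {"X-API-Key": "your-api-key-secret"}   # assumed header name

instance_id = "59651ba4-657a-41d4-8c42-00f34f732375"  # example ID from the reference
resp = requests.post(f"{BASE_URL}/v1/inference/instances/{instance_id}/pause",
                     headers=HEADERS)  # "/pause" action path is an assumption
resp.raise_for_status()
```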
A registry is necessary if you need to run inference on your own custom model, one that is not publicly available on the Sesterce Cloud AI Inference service!
To create an inference instance, check that your credit balance is topped up. Please see our documentation to top up your balance.
Every request to these endpoints must be authenticated by sending your API key secret through the dedicated request header. Operations that complete without a response body return 204 No Content.