Inference Instance configuration
Last updated
If you want to run inference with one of the best-known models, you can select it from our catalog of pre-loaded models.
If you want to use a custom model that is publicly hosted, for example on Docker Hub or Hugging Face, select “Custom model”. In the text field below, paste the `docker pull` reference of your model, including the tag you want.
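As an illustration, a pasted image reference might look like the following. The organization, image name, and tag below are placeholders, not real images; substitute the reference of the model you actually want to deploy.

```shell
# Hypothetical example: a public image on Docker Hub.
# Format: [registry/]organization/image:tag
docker pull myorg/my-model-server:v1.0.0
```

Including an explicit tag (rather than relying on `latest`) keeps deployments reproducible when the image is updated upstream.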
If you want to deploy a private custom model, you'll need to create a registry. Click “Add registry” in the view below.
To create a registry, perform the following steps: