Deployments CLI Reference
This page lists all the options available in the CLI for configuring your deployments.
app list
Use `outerbounds app list` to list all the deployments currently provisioned on the platform.
app delete
Use `outerbounds app delete --name <DEP_NAME>` to delete a deployment. Replace `<DEP_NAME>` with the name of the deployment you'd like to delete.
app deploy
`outerbounds app deploy` is the main command you will use to provision and manage your deployments.
We recommend using a config file to define your deployment configuration rather than passing all options via the CLI. Config files are easier to manage, version control, and share with your team.
The following sections document all the options available when deploying your app.
Basic Options
--name
- Description: The name of your deployment. This must be unique across your platform.
- Required: Yes (either via CLI or config file)
- Example: `--name my-model-api`
- Config file equivalent:
  ```yaml
  name: my-model-api
  ```
--port
- Description: The port where your application listens for requests. This should match the port your service starts on.
- Required: Yes (either via CLI or config file)
- Example: `--port 8000`
- Config file equivalent:
  ```yaml
  port: 8000
  ```
--description
- Description: A human-readable description of your deployment for documentation purposes.
- Example: `--description "FastAPI service for sentiment analysis"`
- Config file equivalent:
  ```yaml
  description: "FastAPI service for sentiment analysis"
  ```
--app-type
- Description: A custom label to categorize your deployment. Used for organization and filtering.
- Example: `--app-type "LLM-Inference"`
- Config file equivalent:
  ```yaml
  app_type: "LLM-Inference"
  ```
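
Taken together, the basic options above map to top-level keys in a config file. A minimal sketch, with illustrative values:

```yaml
# Minimal deployment config covering the basic options above.
name: my-model-api         # must be unique across the platform
port: 8000                 # port your service listens on
description: "FastAPI service for sentiment analysis"
app_type: "LLM-Inference"  # custom label for organization and filtering
```

Save this as a YAML file and pass it via `--config-file` rather than repeating the flags on every deploy.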
Authentication
--auth-type
- Description: Controls how users authenticate to access your deployment.
- Options:
  - `API`: Token-based authentication for programmatic access (cURL, Python scripts, etc.)
  - `Browser`: SSO authentication via the Outerbounds UI
- Default: `Browser`
- Example: `--auth-type API`
- Config file equivalent:
  ```yaml
  auth:
    type: API
  ```
--public-access / --private-access
- Description: Controls whether the deployment is accessible publicly or requires authentication.
- Default: Public access is enabled
- Example: `--private-access`
- Config file equivalent:
  ```yaml
  auth:
    public: false
  ```
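
Both authentication options share the `auth:` block in a config file. A sketch of a deployment locked down to token-based, non-public access, combining the two options above:

```yaml
auth:
  type: API      # token-based access for cURL, scripts, etc.
  public: false  # equivalent to passing --private-access
```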
Resources
Configure the compute resources allocated to each worker in your deployment.
--cpu
- Description: CPU allocation per worker. Can be fractional (e.g., `500m` = 0.5 CPU cores).
- Default: `1`
- Example: `--cpu 2` or `--cpu 500m`
- Config file equivalent:
  ```yaml
  resources:
    cpu: "2"
  ```
--memory
- Description: Memory allocation per worker. Supports units like `Mi` (mebibytes) and `Gi` (gibibytes).
- Default: `4Gi`
- Example: `--memory 8Gi`
- Config file equivalent:
  ```yaml
  resources:
    memory: "8Gi"
  ```
--gpu
- Description: Number of GPUs to allocate per worker.
- Example: `--gpu 1`
- Config file equivalent:
  ```yaml
  resources:
    gpu: "1"
  ```
--disk
- Description: Persistent disk storage per worker.
- Default: `20Gi`
- Example: `--disk 100Gi`
- Config file equivalent:
  ```yaml
  resources:
    disk: "100Gi"
  ```
--shared-memory
- Description: Shared memory allocation, useful for applications that need inter-process communication or large in-memory datasets.
- Example: `--shared-memory 2Gi`
- Config file equivalent:
  ```yaml
  resources:
    shared_memory: "2Gi"
  ```
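
All of the resource options above live under a single `resources:` block in a config file. A sketch of a GPU-backed worker, with illustrative sizes:

```yaml
resources:
  cpu: "4"              # 4 cores per worker
  memory: "16Gi"
  gpu: "1"              # one GPU per worker
  disk: "100Gi"         # persistent disk
  shared_memory: "2Gi"  # e.g., for inter-process communication
```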
Scaling Configuration
--fixed-replicas
- Description: Deploy a fixed number of worker replicas. Cannot be used with `--min-replicas` and `--max-replicas`.
- Example: `--fixed-replicas 3`
- Config file equivalent:
  ```yaml
  replicas:
    fixed: 3
  ```
--min-replicas and --max-replicas
- Description: Enable autoscaling by setting minimum and maximum replica counts. The platform will scale workers between these bounds based on traffic.
- Example: `--min-replicas 1 --max-replicas 10`
- Config file equivalent:
  ```yaml
  replicas:
    min: 1
    max: 10
  ```
--scaling-rpm
- Description: Requests-per-minute threshold that triggers scaling up. Only applies when using autoscaling (min/max replicas).
- Default: `60` (when autoscaling is enabled)
- Example: `--scaling-rpm 100`
- Config file equivalent:
  ```yaml
  replicas:
    min: 1
    max: 10
    scaling_policy:
      rpm: 100
  ```
Dependencies
Dependencies can be managed in multiple ways. See the config file documentation for detailed examples and best practices.
--dep-from-requirements
- Description: Path to a `requirements.txt` file containing your Python dependencies.
- Example: `--dep-from-requirements requirements.txt`
- Config file equivalent:
  ```yaml
  dependencies:
    from_requirements_file: requirements.txt
  ```
--dep-from-pyproject
- Description: Path to a `pyproject.toml` file for dependency management.
- Example: `--dep-from-pyproject pyproject.toml`
- Config file equivalent:
  ```yaml
  dependencies:
    from_pyproject_toml: pyproject.toml
  ```
--python
- Description: Specify the Python version to use.
- Example: `--python 3.11`
- Config file equivalent:
  ```yaml
  dependencies:
    python: "3.11"
  ```
--pypi
- Description: Install specific PyPI packages directly via the CLI. Format: `package==version`, or just `package` for the latest version.
- Example: `--pypi numpy==1.24.0 --pypi pandas`
- Config file equivalent:
  ```yaml
  dependencies:
    pypi:
      numpy: "1.24.0"
      pandas: ""
  ```
--conda
- Description: Install specific Conda packages.
- Example: `--conda numpy==1.24.0`
- Config file equivalent:
  ```yaml
  dependencies:
    conda:
      numpy: "1.24.0"
  ```
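
The dependency options combine under a single `dependencies:` block. A sketch pinning the Python version alongside a few PyPI packages (versions are illustrative):

```yaml
dependencies:
  python: "3.11"
  pypi:
    numpy: "1.24.0"
    pandas: ""   # empty string requests the latest version
```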
Environment and Secrets
--env
- Description: Set environment variables for your deployment. Can be specified multiple times.
- Example: `--env DEBUG=true --env MODEL_PATH=/models/bert`
- Config file equivalent:
  ```yaml
  environment:
    DEBUG: "true"
    MODEL_PATH: "/models/bert"
  ```
--secret
- Description: Attach Outerbounds secrets (like API tokens, credentials) to your deployment. These are securely managed integrations you've configured on the platform.
- Example: `--secret hf-token --secret openai-key`
- Config file equivalent:
  ```yaml
  secrets:
    - hf-token
    - openai-key
  ```
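
In a config file, environment variables and secrets sit in separate top-level blocks. A sketch, with illustrative names:

```yaml
environment:
  DEBUG: "true"
  MODEL_PATH: "/models/bert"
secrets:
  - hf-token     # integrations previously configured on the platform
  - openai-key
```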
Compute Pools
--compute-pools
- Description: Specify which compute pools your deployment can use. Your workers will be scheduled on one of these pools.
- Example: `--compute-pools gpu-pool --compute-pools fallback-pool`
- Config file equivalent:
  ```yaml
  compute_pools:
    - gpu-pool
    - fallback-pool
  ```
Container and Packaging
--image
- Description: Use a custom Docker image instead of building one automatically. Useful when you have specialized container requirements.
- Example: `--image my-registry.com/my-custom-image:v1.0`
- Config file equivalent:
  ```yaml
  image: my-registry.com/my-custom-image:v1.0
  ```
--no-deps
- Description: Skip dependency installation and use the provided image as-is. Must be used with `--image`.
- Example: `--image python:3.11-slim --no-deps`
- Config file equivalent:
  ```yaml
  image: python:3.11-slim
  no_deps: true
  ```
--package-src-path
- Description: Directories to include in your deployment package. By default, the current directory is included.
- Example: `--package-src-path ./src --package-src-path ./config`
- Config file equivalent:
  ```yaml
  package:
    src_paths:
      - ./src
      - ./config
  ```
--package-suffixes
- Description: File extensions to include when packaging your code.
- Example: `--package-suffixes .py --package-suffixes .yaml`
- Config file equivalent:
  ```yaml
  package:
    suffixes:
      - .py
      - .yaml
  ```
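
The container and packaging options above might combine in a config file as follows (image and paths are illustrative):

```yaml
image: my-registry.com/my-custom-image:v1.0
no_deps: true    # use the image as-is; only valid when image is set
package:
  src_paths:
    - ./src
  suffixes:
    - .py
    - .yaml
```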
Tags and Metadata
--tag
- Description: Add metadata tags to your deployment for organization and filtering. Tags follow the format `key:value`.
- Example: `--tag team:ml-platform --tag version:2.1`
- Config file equivalent:
  ```yaml
  tags:
    - team:ml-platform
    - version:2.1
  ```
Advanced Options
--config-file
- Description: Path to a YAML configuration file containing your deployment settings. See the config file documentation for detailed examples.
- Example: `--config-file deployment.yaml`
--force-upgrade
- Description: Force an upgrade even if a deployment is currently being updated. Use with caution.
- Example: `--force-upgrade`
- Config file equivalent:
  ```yaml
  force_upgrade: true
  ```
--generate-static-url
- Description: Generate a predictable URL based on your deployment name instead of a random identifier.
- Example: `--generate-static-url`
- Config file equivalent:
  ```yaml
  generate_static_url: true
  ```
Complete Example
Here's a complete example deploying a GPU-powered model inference API:

```shell
outerbounds app deploy \
  --name sentiment-model-api \
  --port 8000 \
  --auth-type API \
  --cpu 4 \
  --memory 16Gi \
  --gpu 1 \
  --min-replicas 1 \
  --max-replicas 5 \
  --scaling-rpm 120 \
  --dep-from-requirements requirements.txt \
  --secret hf-token \
  --compute-pools gpu-pool \
  --env MODEL_NAME=bert-sentiment \
  --tag team:nlp --tag environment:production
```

However, we strongly recommend using a config file for clarity:

```shell
outerbounds app deploy --config-file deployment.yaml
```
See the inference configs documentation for how to structure your config file.
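
As a sketch assembled from the config file equivalents documented on this page, the CLI example above could be expressed as a single `deployment.yaml` (values mirror the CLI flags):

```yaml
name: sentiment-model-api
port: 8000
auth:
  type: API
resources:
  cpu: "4"
  memory: "16Gi"
  gpu: "1"
replicas:
  min: 1
  max: 5
  scaling_policy:
    rpm: 120
dependencies:
  from_requirements_file: requirements.txt
secrets:
  - hf-token
compute_pools:
  - gpu-pool
environment:
  MODEL_NAME: bert-sentiment
tags:
  - team:nlp
  - environment:production
```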