# Model Catalog

The Model Catalog is the control surface for creating, organizing, importing, and reviewing deployable models.
## Creating a model

### Dashboard

- Navigate to **Models** in the sidebar.
- Click **Create Model**.
- Fill in:
  - **Name** — unique identifier (e.g., `sentiment-v1`)
  - **Framework** — `pytorch`, `tensorflow`, `onnx`, `sklearn`, `jax`, `keras`, or `custom`
  - **Use case** — `image_classification`, `text_classification`, `object_detection`, `regression`, etc.
  - **Description** — optional context for your team
### Python SDK

```python
from octomil import ModelRegistry

registry = ModelRegistry(api_key="edg_...")

model = registry.ensure_model(
    name="sentiment-v1",
    framework="pytorch",
    use_case="text_classification",
    description="Customer sentiment analysis",
)
print(f"Model ID: {model['id']}")
```
`ensure_model` is idempotent — it creates the model if it doesn't exist, or returns the existing one.
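The get-or-create semantics behind `ensure_model` can be sketched with a plain in-memory registry. This is an illustrative model of the behavior, not Octomil's internals — the `InMemoryRegistry` class and its fields are assumptions:

```python
import uuid


class InMemoryRegistry:
    """Toy registry illustrating idempotent get-or-create semantics."""

    def __init__(self):
        self._models = {}  # name -> model record

    def ensure_model(self, name, **fields):
        # Return the existing record if the name is already registered...
        if name in self._models:
            return self._models[name]
        # ...otherwise create it exactly once.
        model = {"id": str(uuid.uuid4()), "name": name, **fields}
        self._models[name] = model
        return model


registry = InMemoryRegistry()
first = registry.ensure_model("sentiment-v1", framework="pytorch")
second = registry.ensure_model("sentiment-v1", framework="pytorch")
assert first["id"] == second["id"]  # same record, no duplicate created
```

Because the second call returns the first call's record, the pattern is safe to run unconditionally in CI or provisioning scripts.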
### CLI

```shell
octomil push ./converted --model-id sentiment-v1 --version 1.0.0
```

This single command creates the model, runs format conversion locally, and uploads the first version.
## Uploading versions
Each model can have multiple versions. Upload a new version with the SDK:
```python
result = registry.upload_version_from_path(
    model_id=model["id"],
    file_path="model.pt",
    version="1.0.0",
    description="Initial release",
    formats="onnx,coreml,tflite",  # Auto-convert for mobile
)
```
Or via the CLI:
```shell
octomil push ./converted --model-id sentiment-v1 --version 2.0.0
```
### Version states

| State | Description |
|---|---|
| `uploading` | File upload in progress |
| `converting` | Local format conversion in progress |
| `draft` | Upload complete, not yet published |
| `published` | Available for deployment |
| `deprecated` | Marked as outdated, still downloadable |
| `archived` | Removed from active catalog |
Publish a draft version to make it available:
```python
registry.publish_version(model_id=model["id"], version="1.0.0")
```
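The state table above implies a simple state machine. The following sketch encodes one plausible transition set inferred from the table — the exact server-side rules are an assumption, not documented Octomil behavior:

```python
# Allowed transitions inferred from the state table (illustrative only).
TRANSITIONS = {
    "uploading": {"converting", "draft"},  # conversion happens only if formats were requested
    "converting": {"draft"},
    "draft": {"published"},
    "published": {"deprecated"},
    "deprecated": {"archived"},
    "archived": set(),
}


def advance(state: str, target: str) -> str:
    """Move a version to `target`, rejecting illegal jumps (e.g. draft -> archived)."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target


state = "uploading"
for step in ("converting", "draft", "published"):
    state = advance(state, step)
# state is now "published"; jumping straight from "draft" to "archived" would raise
```

Modeling the lifecycle this way in your own tooling makes it easy to fail fast before calling the API with a version in the wrong state.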
## Import from Hugging Face

Import public or gated models directly from Hugging Face:

- Click **Import from Hugging Face** in the catalog.
- Search by repo ID (e.g., `google/gemma-2b`).
- Select a revision or tag.
- Octomil downloads the model and registers it in your catalog.
For gated models, provide a Hugging Face token in Settings > Integrations.
## Filtering and search
Filter catalog entries by:
| Filter | Options |
|---|---|
| Framework | PyTorch, TensorFlow, ONNX, scikit-learn, JAX, Keras |
| Use case | Image classification, text classification, object detection, etc. |
| Deployment state | Deployed, not deployed |
| Search | Free-text search across model names and descriptions |
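The same filters can be applied client-side to a list of catalog entries. A minimal sketch — the entry field names (`framework`, `use_case`, `deployed`, `name`, `description`) are assumptions based on the create-model fields above, not a documented response schema:

```python
def filter_catalog(entries, framework=None, use_case=None, deployed=None, search=None):
    """Filter catalog entries the way the dashboard does (field names assumed)."""
    out = []
    for e in entries:
        if framework and e.get("framework") != framework:
            continue
        if use_case and e.get("use_case") != use_case:
            continue
        if deployed is not None and e.get("deployed", False) != deployed:
            continue
        if search:
            # Free-text search across model names and descriptions.
            haystack = f"{e.get('name', '')} {e.get('description', '')}".lower()
            if search.lower() not in haystack:
                continue
        out.append(e)
    return out


catalog = [
    {"name": "sentiment-v1", "framework": "pytorch", "use_case": "text_classification",
     "deployed": True, "description": "Customer sentiment analysis"},
    {"name": "detector-v2", "framework": "onnx", "use_case": "object_detection",
     "deployed": False, "description": "Shelf object detector"},
]
hits = filter_catalog(catalog, framework="pytorch", search="sentiment")
# hits contains only the sentiment-v1 entry
```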
## Model detail view
Click a model card to see:
- Version history — all versions with state, size, and creation date
- Conversion artifacts — available formats (ONNX, CoreML, TFLite) per version
- Active rollouts — which versions are deployed and to what percentage
- Training history — federated training rounds linked to this model
- Metrics — inference latency, accuracy, and usage across devices
## Downloading models

Pull a model version for local use via the CLI:

```shell
octomil pull sentiment-v1 --version 1.0.0 --format coreml
```

Or with the SDK:

```python
model_bytes = client.pull_model(
    model="sentiment-v1",
    version="1.0.0",
    format="pytorch",
)
```
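Since `pull_model` returns raw bytes, a common follow-up is writing them to disk and recording a digest so the artifact can be verified before loading. A sketch — the idea of comparing against a catalog-reported checksum is an assumption, not a documented Octomil feature:

```python
import hashlib
from pathlib import Path


def save_and_digest(model_bytes: bytes, dest: str) -> str:
    """Write pulled model bytes to disk and return their SHA-256 hex digest."""
    Path(dest).write_bytes(model_bytes)
    return hashlib.sha256(model_bytes).hexdigest()


# Stand-in bytes; in practice pass the result of client.pull_model(...).
digest = save_and_digest(b"fake-model-bytes", "sentiment-v1.pt")
# Compare `digest` against whatever checksum your pipeline recorded at upload time.
```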
## Gotchas

- `ensure_model` is idempotent — calling it twice with the same name returns the existing model; it does not create a duplicate. Safe to call in automation scripts.
- Format conversion is local — uploading with `formats="onnx,coreml,tflite"` converts locally via the CLI before uploading. Large models may take minutes to convert.
- Draft versions are not deployable — you must call `publish_version` before a version can be included in a rollout. The dashboard shows draft versions grayed out.
- Hugging Face gated models require a token — set your HF token in Settings > Integrations before importing gated repos like `meta-llama/Llama-3.2-3B`.
- Archived versions are still downloadable — archiving removes a version from the active catalog UI but does not delete the artifact. Use this for cleanup without breaking existing deployments.
## Related
- Model Lifecycle — full version state machine
- Advanced FL Configuration — quantization and pruning before upload
- Rollouts — progressive deployment of versions
- Quickstart — end-to-end walkthrough