cortex engines
This command allows you to manage various engines available within Cortex.
Usage:
- macOS/Linux
- Windows
cortex engines [options] [subcommand]
cortex.exe engines [options] [subcommand]
Options:
Option | Description | Required | Default value | Example |
---|---|---|---|---|
-h , --help | Display help information for the command. | No | - | -h |
Subcommands:
cortex engines list
info
This CLI command calls the following API endpoint:
This command lists all of Cortex's engines.
Usage:
- macOS/Linux
- Windows
cortex engines list
cortex.exe engines list
For example, it returns the following:
+---+--------------+-------------------+---------+----------------------------+---------------+
| # | Name         | Supported Formats | Version | Variant                    | Status        |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 1 | onnxruntime  | ONNX              |         |                            | Incompatible  |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 2 | llama-cpp    | GGUF              | 0.1.34  | linux-amd64-avx2-cuda-12-0 | Ready         |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 3 | tensorrt-llm | TensorRT Engines  |         |                            | Not Installed |
+---+--------------+-------------------+---------+----------------------------+---------------+
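Because the output is a plain-text table, it can be filtered with standard tools. A minimal sketch, assuming cortex is on your PATH, that shows only engines in the Ready state:

```shell
# List all engines, keeping only rows whose status is "Ready"
cortex engines list | grep "Ready"
```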
cortex engines get
info
This CLI command calls the following API endpoint:
This command returns the details of the engine specified by engine_name.
Usage:
- macOS/Linux
- Windows
cortex engines get <engine_name>
cortex.exe engines get <engine_name>
For example, it returns the following:
+-----------+-------------------+---------+-----------+--------+
| Name      | Supported Formats | Version | Variant   | Status |
+-----------+-------------------+---------+-----------+--------+
| llama-cpp | GGUF              | 0.1.37  | mac-arm64 | Ready  |
+-----------+-------------------+---------+-----------+--------+
info
To get an engine name, run the cortex engines list command.
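For instance, to inspect the llama-cpp engine (assuming it has already been installed):

```shell
# Retrieve details for the llama-cpp engine
cortex engines get llama-cpp
```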
Options:
Option | Description | Required | Default value | Example |
---|---|---|---|---|
engine_name | The name of the engine that you want to retrieve. | Yes | - | llama-cpp |
-h , --help | Display help information for the command. | No | - | -h |
cortex engines install
info
This CLI command calls the following API endpoint:
This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:
llama-cpp
onnxruntime
tensorrt-llm
Usage:
- macOS/Linux
- Windows
cortex engines install [options] <engine_name>
cortex.exe engines install [options] <engine_name>
Options:
Option | Description | Required | Default value | Example |
---|---|---|---|---|
engine_name | The name of the engine you want to install: llama-cpp , onnxruntime , or tensorrt-llm . | Yes | - | llama-cpp |
-h , --help | Display help for command. | No | - | -h |
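A typical workflow, assuming cortex is on your PATH, is to install an engine and then confirm its status:

```shell
# Install the llama-cpp engine and its dependencies
cortex engines install llama-cpp

# Verify the installation by checking the engine's details
cortex engines get llama-cpp
```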
cortex engines uninstall
This command uninstalls the engine within Cortex.
Usage:
- macOS/Linux
- Windows
cortex engines uninstall [options] <engine_name>
cortex.exe engines uninstall [options] <engine_name>
For example:
## Llama.cpp engine
cortex engines uninstall llama-cpp
Options:
Option | Description | Required | Default value | Example |
---|---|---|---|---|
engine_name | The name of the engine you want to uninstall. | Yes | - | llama-cpp |
-h , --help | Display help for command. | No | - | -h |