# Command Line Interface (CLI)

QuantaLogic provides a powerful command-line interface for running AI agents and executing tasks directly from your terminal.
## Installation

You can install QuantaLogic using any of these methods:
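A typical setup, assuming the package is published on PyPI under the name `quantalogic` (the original install snippet did not survive conversion, so treat this as a sketch):

```shell
# Assumes the PyPI package name `quantalogic`
pip install quantalogic

# Or install into an isolated virtual environment (recommended with Python 3.12+)
python -m venv .venv && source .venv/bin/activate
pip install quantalogic
```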
## Basic Usage
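A minimal invocation, assuming the CLI entry point is named `quantalogic` (the task text is hypothetical; all flags come from the Global Options table below):

```shell
# Run a task inline with the default mode (code)
quantalogic task "Create a Python script that lists prime numbers under 100"

# Check the installed version and browse available commands
quantalogic --version
quantalogic --help
```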
## Global Options

| Option | Description | Default |
|---|---|---|
| `--version` | Show version information | - |
| `--model-name` | Specify the model (litellm format, e.g., `openrouter/deepseek/deepseek-chat`) | - |
| `--log` | Set logging level (info/debug/warning) | info |
| `--verbose` | Enable verbose output | False |
| `--mode` | Agent mode (code/search/full) | code |
| `--vision-model-name` | Specify the vision model (litellm format, e.g., `openrouter/openai/gpt-4o-mini`) | - |
| `--max-iterations` | Maximum iterations for task solving | 30 |
| `--help` | Show help message and exit | - |
## Available Modes

- `code`: Full coding capabilities with advanced reasoning
- `basic`: Simple task execution without additional features
- `interpreter`: Interactive REPL mode for dynamic interaction
- `full`: All features enabled, including advanced tools
- `code-basic`: Basic coding features without advanced capabilities
- `search`: Web search capabilities for information gathering
- `search-full`: Enhanced search features with comprehensive analysis
## Commands

### task

Execute a task with the QuantaLogic AI Assistant:
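For example (entry-point name `quantalogic` assumed; task text and file name are hypothetical):

```shell
# Inline task
quantalogic task "Summarize the TODO comments in src/"

# Task from a file, with an explicit model
quantalogic task --file my_task.md --model-name "openrouter/deepseek/deepseek-chat"
```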
### Task-Specific Options

| Option | Description | Default |
|---|---|---|
| `--file` | Path to task file | - |
| `--model-name` | Specify the model (litellm format) | - |
| `--verbose` | Enable verbose output | False |
| `--mode` | Agent mode (code/search/full) | - |
| `--log` | Set logging level (info/debug/warning) | - |
| `--vision-model-name` | Specify the vision model (litellm format) | - |
| `--max-iterations` | Maximum iterations for task solving | 30 |
| `--no-stream` | Disable streaming output | False |
| `--help` | Show help message and exit | - |
## Examples

### Basic Task Execution
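A few illustrative invocations, built only from the options documented above (entry-point name `quantalogic` and all task texts are assumptions):

```shell
# Simple task with default settings
quantalogic task "Write unit tests for calculator.py"

# Search mode with verbose logging
quantalogic task --mode search --verbose "Find recent benchmarks for small LLMs"

# Vision model for image analysis, with a higher iteration budget
quantalogic task \
  --vision-model-name "openrouter/openai/gpt-4o-mini" \
  --max-iterations 50 \
  "Describe the architecture shown in diagram.png"
```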
## Best Practices

- **Start Simple**:
    - Begin with `--mode basic` for straightforward tasks
    - Gradually increase complexity as needed
- **Debugging**:
    - Use `--log debug` for troubleshooting
    - Enable `--verbose` for detailed execution information
    - Disable streaming with `--no-stream` when needed for clearer output
- **Model Selection**:
    - Choose models based on task complexity
    - Consider using specialized models for specific tasks (e.g., vision models for image analysis)
- **Task Management**:
    - Use task files for complex or repetitive tasks
    - Adjust `--max-iterations` based on task complexity
    - Break down complex tasks into smaller subtasks
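The task-file workflow above can be sketched as follows (file name and task contents are hypothetical; `quantalogic` entry point assumed):

```shell
# Keep a repeatable task in a file...
cat > refactor_task.md << 'EOF'
Refactor the data-loading code:
1. Extract duplicated CSV parsing into a helper
2. Add type hints
3. Keep the public API unchanged
EOF

# ...and run it with a bounded iteration count and no streaming
quantalogic task --file refactor_task.md --max-iterations 20 --no-stream
```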
## Security Considerations
- API keys should be set via environment variables (litellm will use env vars by default)
- Code execution is sandboxed by default for security
- Always review generated code before execution
- Use appropriate permissions when working with file system operations
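Since litellm reads provider keys from standard environment variables, keys never need to appear on the command line or in scripts. The exact variable name depends on your provider; `OPENROUTER_API_KEY` is shown here as an example, and the key value is a placeholder:

```shell
# Set the key for your provider in the environment, not in shell history or task files
export OPENROUTER_API_KEY="sk-or-..."   # placeholder value

# litellm picks the key up automatically when the model uses that provider
quantalogic task --model-name "openrouter/deepseek/deepseek-chat" "Your task here"
```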
## Error Handling

- The CLI provides detailed error messages for common issues
- Check logs with `--log debug` for troubleshooting
- Use `--verbose` for additional context when errors occur
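A typical diagnostic invocation combining these flags (`quantalogic` entry point and task text assumed):

```shell
# Maximum diagnostic detail: debug logs, verbose output, no streaming
quantalogic task --log debug --verbose --no-stream "Reproduce the failing build"
```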
## Requirements
- Python 3.12+
- Docker (optional, required for code execution tools)
- Internet connection for model API access