Welcome to Forge
Forge is a modern, self-hosted web interface for managing popular DevOps tools with enterprise-grade compliance features.
What is Forge?
Forge is a comprehensive automation platform that provides a unified interface for running Ansible playbooks, Terraform/OpenTofu infrastructure code, Packer builds, PowerShell scripts, and more. Built with enterprise compliance in mind, Forge combines powerful automation capabilities with built-in support for DISA STIG compliance, golden image management, and comprehensive security features.
Key Differentiators
- Enterprise Compliance First - Built-in STIG compliance, OpenSCAP scanning, and multiple compliance framework support
- Golden Image Management - 16 pre-built STIG-hardened Packer templates for multi-cloud deployments
- Self-Hosted & Secure - All data, credentials, and logs remain on your infrastructure with HashiCorp Vault integration
- Easy Installation - Linux Service Installer with automated setup, TLS configuration, and dependency management
- Developer-Friendly - Modern web UI, REST API, and support for all popular DevOps tools
Key Features
Infrastructure as Code
Forge supports the full spectrum of infrastructure automation:
- Ansible - Playbook execution with STIG hardening roles and inventory management
- Terraform/OpenTofu - Infrastructure provisioning with remote state management and workspaces
- Terragrunt - DRY Terraform configurations and multi-environment management
- Terramate - Terraform stack orchestration and drift detection
- Terraformer - Import existing infrastructure from AWS, Azure, GCP, VMware, and Kubernetes
- Pulumi - Modern infrastructure as code with multiple programming languages
- Packer - Build golden images for multiple cloud providers with STIG hardening
- PowerShell & Shell - Execute scripts on Windows and Linux systems
- Python - Run Python scripts and automation workflows
Golden Image Management
Build and manage pre-configured, hardened VM images:
- 16 Pre-Built Templates - Production-ready STIG-hardened templates for RHEL 8/9, Ubuntu 22.04, and Windows Server 2022
- Multi-Cloud Support - AWS (AMIs), Azure (Managed Images), GCP (Compute Images), VMware vSphere
- Visual Builder - Create Packer templates without writing HCL code
- HCL Editor - Advanced template editing with validation and Git integration
- Image Catalog - Centralized registry of built images with search and filtering
- STIG Hardening - Automated DISA STIG compliance built into templates
- Template Library - Import and share templates across projects
Compliance & Security
Enterprise-grade compliance management built into the platform:
- STIG Viewer - Interactive compliance finding management with status tracking
- Policy Packs - Curated Ansible playbooks for automated remediation
- Remediation Coverage - Track automated vs manual findings with coverage metrics
- Manual Task Assignment - Bulk assign templates to manual findings for automation
- Multiple Frameworks - Import multiple compliance standards per project (STIG, CIS, NIST, PCI-DSS)
- CKL Export - Generate STIG checklists for certification and reporting
- OpenSCAP Integration - SCAP content management and automated compliance scanning
- Finding Management - Track status (NotAFinding, Open, NotApplicable), attach screenshots, add comments
- Screenshot Attachments - Document compliance evidence inline
Enterprise Features
Built for teams and organizations:
- RBAC - Fine-grained role-based access control (Owner, Manager, Task Runner, Reporter, Guest)
- Audit Logging - Complete audit trail of all actions and changes
- Multi-Project - Isolated project workspaces for teams and environments
- Secret Management - Encrypted credential storage with HashiCorp Vault integration
- LDAP/OpenID Connect - Enterprise authentication with support for 10+ providers
- API-First - Full REST API for automation and integration
- Session Management - Configurable session timeouts with inactivity-based logout
- TLS 1.3 - Modern encryption with automatic certificate management
Bare Metal Automation
Deploy and manage physical servers:
- PXE Boot Deployment - Network-based installation with kickstart/preseed
- ISO Installation - Custom bootable ISOs with embedded configuration
- Golden Image Deployment - Deploy pre-built STIG-hardened images to bare metal
- BMC Management - Out-of-band management for Dell iDRAC, HP iLO, and Redfish-compatible systems
- GigaIO FabreX Integration - Composable infrastructure management for dynamic resource allocation
Infrastructure Import
Bring existing infrastructure into code:
- Terraformer Integration - Import existing infrastructure from AWS, Azure, GCP, VMware, Kubernetes
- Resource Selection - Choose specific resource types and regions
- Tag Filtering - Import only resources matching specific tags
- Template/Repository Output - Save as executable templates or Git repositories
- State Generation - Automatic Terraform state file generation
Linux Service Installer
Streamlined installation for Linux servers:
- Systemd Service - Automatic service installation on Ubuntu, RHEL, Rocky, Alma, SLES
- Encrypted Configuration - Secrets stored in /etc/forge/config.enc with automatic key management
- Automated TLS - Built-in Let's Encrypt provisioning with certbot (self-signed fallback)
- Vault Integration - HashiCorp Vault installed, initialized, and configured automatically
- Dependency Bootstrap - Required CLI tools (Ansible, OpenSCAP, QEMU) auto-installed per distribution
Architecture
Forge is built with a modern, cloud-native architecture:
- Backend: Go-based API server with RESTful endpoints
- Frontend: Vue.js web application with responsive design
- Database: SQLite (default), PostgreSQL, MySQL, or BoltDB support
- Storage: Local file system or cloud storage for uploads and logs
- Secrets: HashiCorp Vault or local encryption for credential storage
- Deployment: Single binary, Docker container, or Kubernetes deployment
System Requirements
Minimum:
- CPU: 2 cores
- RAM: 2 GB
- Disk: 10 GB free space
- OS: Linux (x64, ARM64), macOS, Windows (via WSL)
Recommended (Production):
- CPU: 4+ cores
- RAM: 4+ GB
- Disk: 50+ GB free space (for logs, task files, images)
- Database: PostgreSQL or MySQL for multi-user environments
- Network: HTTPS with valid certificate
Use Cases
DevOps Teams
Run Ansible playbooks, deploy Terraform infrastructure, and manage infrastructure as code from a single unified interface. Schedule tasks, manage credentials securely, and collaborate across teams.
Security & Compliance Teams
Import STIG checklists, track compliance findings, automate remediation with Policy Packs, and export CKL files for certification. Manage multiple compliance frameworks in one place.
Infrastructure Teams
Build golden images with Packer, deploy to multiple cloud providers, import existing infrastructure with Terraformer, and manage bare metal servers with BMC integration.
Platform Engineering
Provide self-service infrastructure automation to development teams, manage secrets with Vault, and maintain compliance across all infrastructure deployments.
Quick Start
1. Installation
Choose your preferred installation method:
- Linux Service Installer - Recommended for Linux servers (Ubuntu, RHEL, Rocky, Alma, SLES)
- Docker - Fast setup with containers
- Binary File - Manual installation
- Kubernetes - Helm chart deployment
- Cloud Platforms - AWS, Azure, GCP guidance
2. Initial Setup
- Access the web UI at http://localhost:3000 (or your configured address)
- Create an admin user during initial setup
- Configure database connection (SQLite is default)
- Complete basic configuration
3. Create Your First Project
- Create a new project
- Add credentials to Key Store (SSH keys, cloud credentials)
- Add an inventory with target hosts
- Create a task template (Ansible, Terraform, Shell, etc.)
- Run your first task!
For detailed steps, see the Getting Started Guide.
Key Concepts
- Projects - Collections of related resources, configurations, and tasks
- Task Templates - Reusable definitions of tasks that can be executed on demand or scheduled
- Tasks - Specific instances of jobs or operations executed by Forge
- Schedules - Automate task execution at specified times or intervals
- Inventory - Collections of target hosts (servers, VMs, containers) on which tasks will be executed
- Variable Groups - Configuration contexts that hold sensitive information such as environment variables and secrets
- Compliance Frameworks - Imported compliance standards (STIG, CIS, NIST, PCI-DSS) with findings and remediation tracking
- Golden Images - Pre-configured, hardened VM/AMI images built with Packer
Database Support
Forge supports multiple database backends:
- SQLite (Default) - Single-file database, zero configuration, perfect for development and small-medium deployments
- PostgreSQL - Recommended for enterprise deployments and high availability
- MySQL - Supported for existing MySQL infrastructure
- BoltDB - Embedded key/value database for simple deployments
Why SQLite is the Default
- ✅ Zero configuration - just one file
- ✅ Full feature parity with PostgreSQL/MySQL
- ✅ Enterprise features (Vault integration, Secret Storage, etc.)
- ✅ Perfect for teams up to 50 users
- ✅ Easy backups (copy the file)
- ✅ No separate database server needed
Documentation
Getting Started
- Getting Started Guide - Step-by-step introduction
- Installation Guide - Installation options
- Configuration Guide - Configure Forge
User Guides
- User Guide - Day-to-day usage and features
- Compliance Management - STIG and compliance workflows
- Golden Images - Build and manage images
- Bare Metal Automation - Physical server deployment
Administration
- Administration Guide - Installation, configuration, security
- Security Guide - Secure your installation
- Authentication - LDAP and OpenID setup
- API Reference - REST API documentation
Support
- FAQ - Common questions and troubleshooting
- Troubleshooting - Resolve common issues
Links
- Source Code: https://github.com/Digital-Data-Co/forge
- Issue Tracking: https://github.com/Digital-Data-Co/forge/issues
- Docker Images: https://ghcr.io/digital-data-co/forge
- Contact: contact@digitaldata.co
License
Forge is open-source software. See the LICENSE file for details.
Ready to get started? Check out the Installation Guide or Getting Started Guide.
Administration Guide
Welcome to the Forge Administration Guide. This guide provides comprehensive information for installing, configuring, and maintaining your Forge instance.
What is Forge?
Forge is a modern, open-source web interface for running automation tasks with enterprise-grade compliance features. It is designed to be a lightweight, fast, and easy-to-use alternative to more complex automation platforms.
It allows you to securely manage and execute tasks for:
- Ansible playbooks
- Terraform/OpenTofu infrastructure-as-code
- Terragrunt and Terramate for Terraform orchestration
- Packer for golden image building
- Pulumi for modern IaC
- PowerShell and Shell scripts
- Python scripts
Core Features & Philosophy
Understanding Forge's design principles can help you get the most out of it:
- Lightweight and Performant: Forge is written in Go and distributed as a single binary. It has minimal CPU and RAM requirements and does not require external dependencies like Kubernetes, Docker, or a JVM. This makes it fast, efficient, and easy to deploy.
- Simple to Install and Maintain: You can get Forge running in minutes. The Linux Service Installer provides automated setup with systemd service installation, TLS configuration, and Vault integration; installation can be as simple as downloading the binary and running it. The simple architecture makes upgrades and maintenance straightforward.
- Flexible Deployment: Run it as a binary, as a systemd service, or in a Docker container. It is suitable for everything from a personal homelab to enterprise environments.
- Self-Hosted and Secure: Forge is a self-hosted solution. All your data, credentials, and logs remain on your own infrastructure, giving you full control. Credentials are always encrypted in the database, and HashiCorp Vault integration is available for advanced secret management.
- Enterprise Compliance: Built-in support for DISA STIG compliance, OpenSCAP scanning, policy packs, and multiple compliance frameworks (CIS, NIST, PCI-DSS).
- Powerful Integrations: While simple, Forge supports powerful features like LDAP/OpenID authentication, detailed per-project role-based access control (RBAC), remote runners for scaling out task execution, golden image management with Packer, infrastructure import with Terraformer, and a full REST API for programmatic access.
Quick Links
Installation
- Overview
- Linux Service Installer - Recommended for Linux servers
- Docker
- Binary File
- Kubernetes (Helm chart)
- Cloud Platforms
Configuration
Database
Security
Authentication
Secret Management
Operations
- System Binaries - Manage Packer, Terraform, Ansible, etc.
- Runners - Remote execution agents
- CLI Tools - Command-line administration
- Logging - Log management and analysis
- Notifications - Email, Slack, Teams, etc.
- API - REST API reference
Maintenance
Installation
You can install Forge in multiple ways, depending on your operating system, environment, and preferences:
Installation Methods
Linux Service Installer (Recommended for Linux)
The Linux Service Installer provides the most complete setup experience for Linux servers. It automatically:
- Installs Forge as a systemd service
- Configures encrypted configuration storage
- Sets up TLS with Let's Encrypt (or self-signed fallback)
- Installs and configures HashiCorp Vault
- Installs required dependencies (Ansible, OpenSCAP, QEMU, etc.)
Supported Distributions:
- Ubuntu 20.04, 22.04, 24.04
- Red Hat Enterprise Linux 8, 9
- Rocky Linux 8, 9
- AlmaLinux 8, 9
- SUSE Linux Enterprise Server 15+
Docker
Run Forge as a container using Docker or Docker Compose. Ideal for fast setup, sandboxed environments, and CI/CD pipelines. Recommended for users who prefer infrastructure as code.
Features:
- Pre-built images with all dependencies
- STIG-hardened container images
- Read-only filesystem support
- Easy backup and restore
Binary File
Download a precompiled binary from the releases page. Great for manual installation or embedding in custom workflows. Works across Linux, macOS, and Windows (via WSL).
Use Cases:
- Quick testing and evaluation
- Custom deployment scripts
- Embedded in other systems
Kubernetes (Helm Chart)
Deploy Forge into a Kubernetes cluster using Helm. Best suited for production-grade, scalable infrastructure. Supports easy configuration and upgrades via Helm values.
Features:
- High availability support
- Horizontal scaling
- Persistent storage
- Service mesh integration
Cloud Platforms
Guidance for deploying Forge to cloud platforms using VMs, containers, or Kubernetes with managed services.
Supported Platforms:
- AWS (EC2, ECS, EKS)
- Azure (VM, Container Instances, AKS)
- Google Cloud Platform (Compute Engine, GKE)
- Other cloud providers
System Requirements
Minimum Requirements
- CPU: 2 cores
- RAM: 2 GB
- Disk: 10 GB free space
- OS: Linux (x64, ARM64), macOS, Windows (via WSL)
Recommended for Production
- CPU: 4+ cores
- RAM: 4+ GB
- Disk: 50+ GB free space (for logs, task files, images)
- Database: PostgreSQL or MySQL for multi-user environments
- Network: HTTPS with valid certificate
Required Dependencies
Forge will automatically install these during Linux Service Installer setup:
- Ansible - For playbook execution
- OpenSCAP - For compliance scanning
- QEMU - For local Packer builds (optional)
- Git - For repository access
- Terraform/OpenTofu - Managed via System Binaries (optional)
- Packer - Managed via System Binaries (optional)
Post-Installation
After installation, you'll need to:
- Access the Web UI - Default: http://localhost:3000
- Complete Initial Setup - Create an admin user and configure the database
- Configure Authentication - Set up LDAP or OpenID if needed
- Add System Binaries - Install Terraform, Packer, etc. via Admin Settings
- Create Your First Project - Start using Forge!
Next Steps
- Configuration Guide - Configure Forge for your environment
- Security Guide - Secure your installation
- User Guide - Learn how to use Forge
Linux Service Installer
The Linux Service Installer is the recommended installation method for Linux servers. It provides automated setup with systemd service installation, TLS configuration, Vault integration, and dependency management.
Supported Distributions
- Ubuntu: 20.04, 22.04, 24.04
- Red Hat Enterprise Linux: 8, 9
- Rocky Linux: 8, 9
- AlmaLinux: 8, 9
- SUSE Linux Enterprise Server: 15+
Features
The Linux Service Installer automatically:
- ✅ Installs Forge as a systemd service
- ✅ Configures encrypted configuration storage (/etc/forge/config.enc)
- ✅ Sets up TLS with Let's Encrypt (or self-signed fallback)
- ✅ Installs and configures HashiCorp Vault
- ✅ Installs required dependencies (Ansible, OpenSCAP, QEMU, etc.)
- ✅ Configures automatic key management
- ✅ Sets up proper file permissions and security
Installation Steps
1. Download the Installer
Download the latest Forge binary for your platform:
# For x64 systems
wget https://github.com/Digital-Data-Co/forge/releases/download/v0.2.5/forge_Linux_x86_64.tar.gz
tar -xzf forge_Linux_x86_64.tar.gz
# For ARM64 systems
wget https://github.com/Digital-Data-Co/forge/releases/download/v0.2.5/forge_Linux_arm64.tar.gz
tar -xzf forge_Linux_arm64.tar.gz
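Before extracting, it is good practice to verify the archive's integrity. An offline sketch of the verification step follows; with a real release, compare against a checksum list published alongside the archive (if the release provides one), not one you generate yourself:

```shell
# Stand-in file so the verification flow can be demonstrated offline;
# in practice this would be the downloaded release archive.
printf 'demo' > forge_Linux_x86_64.tar.gz

# On a real install, download the release's published checksum list instead
# of generating it locally as done here.
sha256sum forge_Linux_x86_64.tar.gz > checksums.txt

# Verification: prints 'OK' per file and exits non-zero on any mismatch.
sha256sum -c checksums.txt
```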
2. Run the Installer
Execute the installer with appropriate permissions:
sudo ./forge install
The installer will:
- Detect your Linux distribution
- Install required system packages
- Create the forge system user
- Set up systemd service files
- Configure encrypted configuration storage
- Install and initialize HashiCorp Vault
- Set up TLS certificates (Let's Encrypt or self-signed)
- Install dependencies (Ansible, OpenSCAP, QEMU, etc.)
3. Complete Initial Setup
After installation, access the web UI:
# The service starts automatically
sudo systemctl status forge
# Access the web UI
# Default: http://localhost:3000
# Or use your server's IP/hostname
Navigate to the web UI and complete the initial setup wizard:
- Create an admin user
- Configure database connection
- Set up basic configuration
Configuration
Encrypted Configuration
The installer stores sensitive configuration in /etc/forge/config.enc with automatic key management. The encryption key is managed securely by the system.
TLS Configuration
The installer automatically configures TLS:
Let's Encrypt (Recommended):
- Automatically provisions certificates via certbot
- Auto-renewal configured
- Requires valid domain name and port 80/443 access
Self-Signed (Fallback):
- Generated automatically if Let's Encrypt fails
- Suitable for development/testing
- Browser warnings expected
HashiCorp Vault
Vault is automatically:
- Installed and initialized
- Unsealed and ready to use
- Integrated with Forge for secret storage
- Configured with proper permissions
System Dependencies
The installer automatically installs:
- Ansible - Latest version from distribution repositories
- OpenSCAP - For compliance scanning
- QEMU - For local Packer builds (if supported)
- Git - For repository access
- Other tools - As required by your distribution
Service Management
Start/Stop Service
sudo systemctl start forge
sudo systemctl stop forge
sudo systemctl restart forge
Enable/Disable Auto-Start
sudo systemctl enable forge
sudo systemctl disable forge
View Logs
# View service logs
sudo journalctl -u forge -f
# View recent logs
sudo journalctl -u forge -n 100
# View logs since boot
sudo journalctl -u forge -b
Service Status
sudo systemctl status forge
File Locations
After installation, files are organized as follows:
/etc/forge/
├── config.enc # Encrypted configuration
├── config.json # Non-sensitive configuration (if any)
└── vault/ # Vault data directory
/var/lib/forge/
├── database/ # Database files (SQLite default)
├── uploads/ # Uploaded files
├── logs/ # Application logs
└── tmp/ # Temporary files
/usr/local/bin/forge # Forge binary (if installed system-wide)
Upgrading
To upgrade Forge installed via the service installer:
# 1. Download new version
wget https://github.com/Digital-Data-Co/forge/releases/download/v0.2.6/forge_Linux_x86_64.tar.gz
tar -xzf forge_Linux_x86_64.tar.gz
# 2. Stop service
sudo systemctl stop forge
# 3. Backup configuration
sudo cp /etc/forge/config.enc /etc/forge/config.enc.backup
# 4. Replace binary
sudo cp forge /usr/local/bin/forge # Or wherever your binary is
# 5. Start service
sudo systemctl start forge
# 6. Verify
sudo systemctl status forge
Troubleshooting
Service Won't Start
# Check service status
sudo systemctl status forge
# Check logs for errors
sudo journalctl -u forge -n 50
# Verify configuration
sudo forge config validate
TLS Certificate Issues
# Check certbot status
sudo certbot certificates
# Renew certificate manually
sudo certbot renew
# Check nginx/apache configuration if using reverse proxy
Vault Issues
# Check Vault status
sudo systemctl status vault
# View Vault logs
sudo journalctl -u vault -f
# Re-initialize Vault (WARNING: loses data)
sudo forge vault init
Permission Issues
# Verify forge user exists
id forge
# Check file permissions
ls -la /etc/forge/
ls -la /var/lib/forge/
# Fix permissions if needed
sudo chown -R forge:forge /var/lib/forge
Uninstallation
To remove Forge installed via the service installer:
# 1. Stop and disable service
sudo systemctl stop forge
sudo systemctl disable forge
# 2. Remove service file
sudo rm /etc/systemd/system/forge.service
sudo systemctl daemon-reload
# 3. Remove files (optional - backup first!)
sudo rm -rf /etc/forge
sudo rm -rf /var/lib/forge
sudo rm /usr/local/bin/forge
# 4. Remove user (optional)
sudo userdel forge
Next Steps
- Configuration Guide - Configure Forge settings
- Security Guide - Secure your installation
- User Guide - Start using Forge
Docker
Create a docker-compose.yml file with the following content:
services:
  # uncomment this section and comment out the mysql section to use postgres instead of mysql
  #postgres:
  #  restart: unless-stopped
  #  image: postgres:14
  #  hostname: postgres
  #  volumes:
  #    - semaphore-postgres:/var/lib/postgresql/data
  #  environment:
  #    POSTGRES_USER: semaphore
  #    POSTGRES_PASSWORD: semaphore
  #    POSTGRES_DB: semaphore

  # if you wish to use postgres, comment the mysql service section below
  mysql:
    restart: unless-stopped
    image: mysql:8.0
    hostname: mysql
    volumes:
      - semaphore-mysql:/var/lib/mysql
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
      MYSQL_DATABASE: semaphore
      MYSQL_USER: semaphore
      MYSQL_PASSWORD: semaphore

  semaphore:
    restart: unless-stopped
    ports:
      - 3000:3000
    image: semaphoreui/semaphore:latest
    environment:
      SEMAPHORE_DB_USER: semaphore
      SEMAPHORE_DB_PASS: semaphore
      SEMAPHORE_DB_HOST: mysql  # for postgres, change to: postgres
      SEMAPHORE_DB_PORT: 3306  # change to 5432 for postgres
      SEMAPHORE_DB_DIALECT: mysql  # for postgres, change to: postgres
      SEMAPHORE_DB: semaphore
      # To use SQLite instead of MySQL/Postgres (v2.16+)
      # SEMAPHORE_DB_DIALECT: sqlite
      # SEMAPHORE_DB: "/etc/semaphore/semaphore.sqlite"
      SEMAPHORE_PLAYBOOK_PATH: /tmp/semaphore/
      SEMAPHORE_ADMIN_PASSWORD: changeme
      SEMAPHORE_ADMIN_NAME: admin
      SEMAPHORE_ADMIN_EMAIL: admin@localhost
      SEMAPHORE_ADMIN: admin
      SEMAPHORE_ACCESS_KEY_ENCRYPTION: gs72mPntFATGJs9qK0pQ0rKtfidlexiMjYCH9gWKhTU=
      SEMAPHORE_LDAP_ACTIVATED: 'no'  # if you wish to use ldap, set to: 'yes'
      SEMAPHORE_LDAP_HOST: dc01.local.example.com
      SEMAPHORE_LDAP_PORT: '636'
      SEMAPHORE_LDAP_NEEDTLS: 'yes'
      SEMAPHORE_LDAP_DN_BIND: 'uid=bind_user,cn=users,cn=accounts,dc=local,dc=example,dc=com'
      SEMAPHORE_LDAP_PASSWORD: 'ldap_bind_account_password'
      SEMAPHORE_LDAP_DN_SEARCH: 'dc=local,dc=example,dc=com'
      SEMAPHORE_LDAP_SEARCH_FILTER: "(&(uid=%s)(memberOf=cn=ipausers,cn=groups,cn=accounts,dc=local,dc=example,dc=com))"
      TZ: UTC
    depends_on:
      - mysql  # for postgres, change to: postgres

volumes:
  semaphore-mysql:  # to use postgres, switch to: semaphore-postgres
You must specify the following confidential variables:
- MYSQL_PASSWORD and SEMAPHORE_DB_PASS - password for the MySQL user.
- SEMAPHORE_ADMIN_PASSWORD - password for Forge's admin user.
- SEMAPHORE_ACCESS_KEY_ENCRYPTION - key for encrypting access keys in the database. Generate it with: head -c32 /dev/urandom | base64
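The encryption key shown in the compose example above is only a sample and must not be reused. A quick sketch of generating your own and writing it to a .env file (the .env filename is a docker-compose convention for variable substitution, not a Forge requirement):

```shell
# 32 random bytes, base64-encoded: the format SEMAPHORE_ACCESS_KEY_ENCRYPTION expects.
KEY=$(head -c32 /dev/urandom | base64)

# Append to a .env file next to docker-compose.yml for variable substitution.
echo "SEMAPHORE_ACCESS_KEY_ENCRYPTION=$KEY" >> .env
```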
If you are using Docker Swarm, it is strongly recommended that you don't embed credentials directly in the Compose file (nor in environment variables generally) and instead use Docker Secrets. Forge supports a common Docker container pattern for retrieving settings from files instead of the environment by appending _FILE to the end of the environment variable name. See the Docker documentation for an example.
A limited example using secrets:
secrets:
  semaphore_admin_pw:
    file: semaphore_admin_password.txt

services:
  semaphore:
    restart: unless-stopped
    ports:
      - 3000:3000
    image: semaphoreui/semaphore:latest
    environment:
      SEMAPHORE_ADMIN_PASSWORD_FILE: /run/secrets/semaphore_admin_pw
      SEMAPHORE_ADMIN_NAME: admin
      SEMAPHORE_ADMIN_EMAIL: admin@localhost
      SEMAPHORE_ADMIN: admin
    secrets:
      - semaphore_admin_pw
Run the following command to start Forge with the configured database (MySQL or Postgres):
docker-compose up
Forge will be available at http://localhost:3000.
Installing Additional Python Dependencies
When the Forge container starts, it can automatically install additional Python packages that you may need for your Ansible playbooks. To use this feature:
- Create a requirements.txt file with your Python dependencies
- Mount this file into the container at the path specified by SEMAPHORE_CONFIG_PATH (defaults to /etc/semaphore)
Example update to your docker-compose.yml:
services:
  semaphore:
    restart: unless-stopped
    ports:
      - 3000:3000
    image: semaphoreui/semaphore:latest
    volumes:
      - ./requirements.txt:/etc/semaphore/requirements.txt
During container startup, Forge will detect the requirements.txt file and automatically run pip3 install --upgrade -r ${SEMAPHORE_CONFIG_PATH}/requirements.txt to install the specified packages.
Binary file
Download the *.tar.gz for your platform from the Releases page. Unpack it and set up Forge using the following commands:
{{#tabs }} {{#tab name="Linux (x64)" }}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_amd64.tar.gz
tar xf semaphore_2.15.0_linux_amd64.tar.gz
./semaphore setup
{{#endtab }}
{{#tab name="Linux (ARM64)" }}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_arm64.tar.gz
tar xf semaphore_2.15.0_linux_arm64.tar.gz
./semaphore setup
{{#endtab }}
{{#tab name="Windows (x64)" }}
Invoke-WebRequest `
-Uri ("https://github.com/semaphoreui/semaphore/releases/" +
"download/v2.15.0/semaphore_2.15.0_windows_amd64.zip") `
-OutFile semaphore.zip
Expand-Archive -Path semaphore.zip -DestinationPath ./
./semaphore setup
{{#endtab }} {{#endtabs }}
Now you can run Forge:
./semaphore server --config=./config.json
Forge will be available at http://localhost:3000.
Run as a service
For more detailed information, see the extended systemd service documentation.
If you installed Forge via a package manager, or by downloading a binary file, you should create the Forge service manually.
Create the systemd service file, changing /path/to/semaphore and /path/to/config.json to your semaphore binary and config file paths:
sudo tee /etc/systemd/system/semaphore.service > /dev/null <<'EOF'
[Unit]
Description=Forge
Documentation=https://github.com/semaphoreui/semaphore
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/path/to/semaphore server --config=/path/to/config.json
SyslogIdentifier=semaphore
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
Start the Forge service:
sudo systemctl daemon-reload
sudo systemctl start semaphore
Check the Forge service status:
sudo systemctl status semaphore
To make the Forge service auto start:
sudo systemctl enable semaphore
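The unit above runs Forge as root by default. If you have created a dedicated service user (see the Service User section), a systemd drop-in like the following restricts the service; this is a sketch under the assumption that the user is named semaphore, and which hardening directives you enable is up to you:

```ini
# /etc/systemd/system/semaphore.service.d/override.conf
[Service]
User=semaphore
Group=semaphore
NoNewPrivileges=true
PrivateTmp=true
```

Apply it with sudo systemctl daemon-reload followed by sudo systemctl restart semaphore.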
Kubernetes (Helm chart)
Forge provides a Helm chart for installation on Kubernetes.
Thorough documentation can be found on artifacthub.io: Forge Helm Chart.
Cloud deployment
You can run Forge in any cloud environment using the same supported installation methods:
- Virtual machines: install via package manager or binary, and run behind a reverse proxy such as NGINX. Use a managed database (e.g., Amazon RDS, Cloud SQL) for reliability.
- Containers: deploy with Docker or Docker Compose on a VM or container service. See persistent volumes and environment configuration in the Docker guide.
- Kubernetes: deploy with the official Helm chart. Use cloud storage classes and managed databases.
Essentials:
- Configure external URL and TLS at your load balancer or reverse proxy.
- Store sensitive values (DB credentials, OAuth secrets) in a secure secret manager or Kubernetes Secrets.
- Use managed databases for production and enable regular backups.
- Put runners close to your workloads to reduce latency and egress.
Related guides:
Package manager
Download the package file from the Releases page.
*.deb for Debian and Ubuntu, *.rpm for CentOS and Red Hat.
Here are several installation commands, depending on the package manager:
{{#tabs }}
{{#tab name="Debian / Ubuntu (x64)"}}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_amd64.deb
sudo dpkg -i semaphore_2.15.0_linux_amd64.deb
{{#endtab }}
{{#tab name="Debian / Ubuntu (ARM64)" }}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_arm64.deb
sudo dpkg -i semaphore_2.15.0_linux_arm64.deb
{{#endtab }}
{{#tab name="CentOS (x64)" }}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_amd64.rpm
sudo yum install semaphore_2.15.0_linux_amd64.rpm
{{#endtab }}
{{#tab name="CentOS (ARM64)" }}
wget https://github.com/semaphoreui/semaphore/releases/\
download/v2.15.0/semaphore_2.15.0_linux_arm64.rpm
sudo yum install semaphore_2.15.0_linux_arm64.rpm
{{#endtab }}
{{#endtabs }}
Set up Forge using the following command:
semaphore setup
Now you can run Forge:
semaphore server --config=./config.json
Forge will be available at http://localhost:3000.
Snap (deprecated)
To install Forge via snap, run the following command in a terminal:
sudo snap install semaphore
Forge will be available at https://localhost:3000.
To log in, you must first create an admin user. Use the following commands:
sudo snap stop semaphore
sudo semaphore user add --admin \
--login john \
--name=John \
--email=john1996@gmail.com \
--password=12345
sudo snap start semaphore
You can check the status of the Forge service using the following command:
sudo snap services semaphore
It should print the following table:
Service Startup Current Notes
semaphore.semaphored enabled active -
After installation, you can set up Forge via Snap Configuration. Use the following command to see your Forge configuration:
sudo snap get semaphore
You can find the list of available options in the Configuration options reference.
Manually installing Forge
This documentation goes into the details of how to set up Forge when using these installation methods:
The Forge software package is only one part of the system needed to run Ansible successfully.
The Python 3 and Ansible execution environments are just as important!
NOTE: There are existing Ansible Galaxy roles that handle this setup logic for you, or that can serve as a base template for your own Ansible role!
Service User
Forge does not need to run as the root user, so you shouldn't run it that way.
Benefits of using a service user:
- Has its own user configuration
- Has its own environment
- Its processes are easily identifiable
- Improved system security
You can create a system user either manually by using adduser or using the ansible.builtin.user module.
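As a sketch, the ansible.builtin.user variant matching the assumptions below might look like this (the user name, shell, and home directory mirror those assumptions):

```yaml
- name: Create the semaphore service user
  ansible.builtin.user:
    name: semaphore
    shell: /bin/bash
    home: /home/semaphore
    create_home: true
    system: true
```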
In this documentation we will assume:
- the service user created is named semaphore
- its shell is /bin/bash
- its home directory is /home/semaphore
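Under these assumptions, the service user can be created with a short Ansible task; the sketch below uses the ansible.builtin.user module mentioned above (the surrounding play and privilege escalation are up to you):

```yaml
# shell equivalent: sudo useradd --system --create-home --shell /bin/bash semaphore
- name: Create the semaphore service user
  ansible.builtin.user:
    name: 'semaphore'
    shell: '/bin/bash'
    home: '/home/semaphore'
    system: true
```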
Troubleshooting
If Ansible execution in Forge is failing, you will need to troubleshoot it in the context of the service user.
You have multiple options to do so:
- Change your whole shell session to the user's context:
sudo su --login semaphore
- Run a single command in the user's context:
sudo --login -u semaphore <command>
Python3
Ansible is built with the Python 3 programming language, so a clean Python setup is essential for Ansible to work correctly.
First - make sure the packages python3 and python3-pip are installed on your system!
You have multiple options to install required Python modules:
- Installing them in the service user's context
- Installing them in a service-specific Virtual Environment
Requirements
Either way - it is recommended to use a requirements.txt file to specify the modules that need to be installed.
We will assume the file /home/semaphore/requirements.txt is used.
Here is an example of its content:
ansible
# for common jinja-filters
netaddr
jmespath
# for common modules
pywinrm
passlib
requests
docker
NOTE: You should also update those requirements from time to time!
An option for doing this automatically is also shown in the service example below.
Modules in user context
Manually:
sudo --login -u semaphore python3 -m pip install --user --upgrade -r /home/semaphore/requirements.txt
Using Ansible:
- name: Install requirements
  ansible.builtin.pip:
    requirements: '/home/semaphore/requirements.txt'
    extra_args: '--user --upgrade'
  become_user: 'semaphore'
Modules in a virtualenv
We will assume the virtualenv is created at /home/semaphore/venv
Make sure the virtual environment is activated inside the Service! This is also shown in the service example below.
Manually:
sudo su --login semaphore
python3 -m pip install --user virtualenv
python3 -m venv /home/semaphore/venv
# activate the context of the virtual environment
source /home/semaphore/venv/bin/activate
# verify we are using python3 from inside the venv
which python3
> /home/semaphore/venv/bin/python3
python3 -m pip install --upgrade -r /home/semaphore/requirements.txt
# disable the context to the virtual environment
deactivate
Using Ansible:
- name: Create virtual environment and install requirements into it
  ansible.builtin.pip:
    requirements: '/home/semaphore/requirements.txt'
    virtualenv: '/home/semaphore/venv'
    state: present # or 'latest' to upgrade the requirements
Troubleshooting
If you encounter Python3 issues when using a virtual environment, you will need to change into its context to troubleshoot them:
sudo su --login semaphore
source /home/semaphore/venv/bin/activate
# verify we are using python3 from inside the venv
which python3
> /home/semaphore/venv/bin/python3
# troubleshooting
deactivate
Sometimes a virtual environment also breaks on system upgrades. If this happens you might just remove the existing one and re-create it.
Ansible Collections & Roles
You might want to pre-install Ansible collections and roles so they don't need to be installed every time a task runs!
Requirements
It is recommended to use a requirements.yml file to specify the collections and roles that need to be installed.
We will assume the file /home/semaphore/requirements.yml is used.
Here is an example of its content:
---
collections:
  - 'namespace.collection'
  # for common collections:
  - 'community.general'
  - 'ansible.posix'
  - 'community.mysql'
  - 'community.crypto'
roles:
  - src: 'namespace.role'
See also: Installing Collections, Installing Roles
NOTE: You should also update those requirements from time to time!
An option for doing this automatically is also shown in the service example below.
Install in user-context
Manually:
sudo su --login semaphore
ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml
ansible-galaxy role install --force -r /home/semaphore/requirements.yml
Install when using a virtualenv
Manually:
sudo su --login semaphore
source /home/semaphore/venv/bin/activate
# verify we are using python3 from inside the venv
which python3
> /home/semaphore/venv/bin/python3
ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml
ansible-galaxy role install --force -r /home/semaphore/requirements.yml
deactivate
Reverse Proxy
See: Security - Encrypted connection
Extended Systemd Service
Here is the basic template of the systemd service.
Add additional settings under the corresponding [Section] header.
Base
[Unit]
Description=Forge
Documentation=https://docs.semaphoreui.com/
Wants=network-online.target
After=network-online.target
ConditionPathExists=/usr/bin/semaphore
ConditionPathExists=/etc/semaphore/config.json
[Service]
ExecStart=/usr/bin/semaphore server --config /etc/semaphore/config.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Service user
[Service]
User=semaphore
Group=semaphore
Python Modules
In user-context
[Service]
# to auto-upgrade python modules at service startup
ExecStartPre=/bin/bash -c 'python3 -m pip install --upgrade --user -r /home/semaphore/requirements.txt'
# so the executables are found
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/semaphore/.local/bin"
# set the correct python path. You can get the correct path with: python3 -c "import site; print(site.USER_SITE)"
Environment="PYTHONPATH=/home/semaphore/.local/lib/python3.10/site-packages"
In virtualenv
[Service]
# to auto-upgrade python modules at service startup
ExecStartPre=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& python3 -m pip install --upgrade -r /home/semaphore/requirements.txt'
# REPLACE THE EXISTING 'ExecStart'
ExecStart=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& /usr/bin/semaphore server --config /etc/semaphore/config.json'
Ansible Collections & Roles
If using Python3 in user-context
[Service]
# to auto-upgrade ansible collections and roles at service startup
ExecStartPre=/bin/bash -c 'ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml'
ExecStartPre=/bin/bash -c 'ansible-galaxy role install --force -r /home/semaphore/requirements.yml'
If using Python3 in virtualenv
[Service]
# to auto-upgrade ansible collections and roles at service startup
ExecStartPre=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml \
&& ansible-galaxy role install --force -r /home/semaphore/requirements.yml'
Other use-cases
Using local MariaDB
[Unit]
Requires=mariadb.service
Using local Nginx
[Unit]
Wants=nginx.service
Sending logs to syslog
[Service]
StandardOutput=journal
StandardError=journal
SyslogIdentifier=semaphore
Full Examples
Python Modules in user-context
[Unit]
Description=Forge
Documentation=https://docs.semaphoreui.com/
Wants=network-online.target
After=network-online.target
ConditionPathExists=/usr/bin/semaphore
ConditionPathExists=/etc/semaphore/config.json
[Service]
User=semaphore
Group=semaphore
Restart=always
RestartSec=10s
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/semaphore/.local/bin"
ExecStartPre=/bin/bash -c 'ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml'
ExecStartPre=/bin/bash -c 'ansible-galaxy role install --force -r /home/semaphore/requirements.yml'
ExecStartPre=/bin/bash -c 'python3 -m pip install --upgrade --user -r /home/semaphore/requirements.txt'
ExecStart=/usr/bin/semaphore server --config /etc/semaphore/config.json
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
Python Modules in virtualenv
[Unit]
Description=Forge
Documentation=https://docs.semaphoreui.com/
Wants=network-online.target
After=network-online.target
ConditionPathExists=/usr/bin/semaphore
ConditionPathExists=/etc/semaphore/config.json
[Service]
User=semaphore
Group=semaphore
Restart=always
RestartSec=10s
ExecStartPre=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& python3 -m pip install --upgrade -r /home/semaphore/requirements.txt'
ExecStartPre=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& ansible-galaxy collection install --upgrade -r /home/semaphore/requirements.yml \
&& ansible-galaxy role install --force -r /home/semaphore/requirements.yml'
ExecStart=/bin/bash -c 'source /home/semaphore/venv/bin/activate \
&& /usr/bin/semaphore server --config /etc/semaphore/config.json'
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
Fixes
If you have a custom system language set, you might run into problems that can be resolved by updating the associated environment variables:
[Service]
Environment=LANG="en_US.UTF-8"
Environment=LC_ALL="en_US.UTF-8"
Troubleshooting
If there is a problem while executing a task, it might be an issue with your environment rather than with Forge itself!
Please go through these steps to verify whether the issue occurs outside Forge:
- Change into the context of the user:
sudo su --login semaphore
- Change into the context of the virtualenv if you use one:
source /home/semaphore/venv/bin/activate
# verify we are using python3 from inside the venv
which python3
> /home/semaphore/venv/bin/python3
# troubleshooting
deactivate
- Run the Ansible playbook manually
- If it fails => there is an issue with your environment
- If it works:
- Re-check your configuration inside Forge
- It might be an issue with Forge
Configuration
Forge can be configured using several methods:
- Interactive setup — guided configuration when running Forge for the first time. It creates config.json.
- Configuration file — the primary and most flexible way to configure Forge.
- Environment variables — useful for containerized or cloud-native deployments.
- Snap configuration (deprecated) — legacy method used when installing via Snap packages.
Configuration options
Full list of available configuration options:
| Config file option / Environment variable | Description |
|---|---|
| Common | |
git_client FORGE_GIT_CLIENT | Type of Git client. Can be cmd_git or go_git. |
ssh_config_path FORGE_SSH_PATH | Path to SSH configuration file. |
port FORGE_PORT | TCP port on which the web interface will be available. Default: 3000 |
interface FORGE_INTERFACE | Network interface to bind to. Useful if your server has multiple network interfaces. |
tmp_path FORGE_TMP_PATH | Path to directory where cloned repositories and generated files are stored. Default: /tmp/semaphore |
max_parallel_tasks FORGE_MAX_PARALLEL_TASKS | Max number of parallel tasks that can be run on the server. |
max_task_duration_sec FORGE_MAX_TASK_DURATION_SEC | Max duration of a task in seconds. |
max_tasks_per_template FORGE_MAX_TASKS_PER_TEMPLATE | Maximum number of recent tasks stored in the database for each template. |
schedule.timezone FORGE_SCHEDULE_TIMEZONE | Timezone used for scheduling tasks and cron jobs. |
oidc_providers | OpenID provider settings. You can provide multiple OpenID providers. Read more about OpenID configuration in OpenID. |
password_login_disable FORGE_PASSWORD_LOGIN_DISABLED | Deny password login. |
non_admin_can_create_project FORGE_NON_ADMIN_CAN_CREATE_PROJECT | Allow non-admin users to create projects. |
env_vars FORGE_ENV_VARS | JSON map which contains environment variables. |
forwarded_env_vars FORGE_FORWARDED_ENV_VARS | JSON array of environment variables which will be forwarded from system. |
apps FORGE_APPS | JSON map which contains apps configuration. |
use_remote_runner FORGE_USE_REMOTE_RUNNER | Enable remote task runners. |
runner_registration_token FORGE_RUNNER_REGISTRATION_TOKEN | Token that remote runners use to register with the server. |
| Database | |
sqlite.host FORGE_DB_HOST | Path to the SQLite database file. |
bolt.host FORGE_DB_HOST | Path to the BoltDB database file. |
mysql.host FORGE_DB_HOST | MySQL database host. |
mysql.name FORGE_DB_NAME | MySQL database (schema) name. |
mysql.user FORGE_DB_USER | MySQL user name. |
mysql.pass FORGE_DB_PASS | MySQL user's password. |
postgres.host FORGE_DB_HOST | Postgres database host. |
postgres.name FORGE_DB_NAME | Postgres database (schema) name. |
postgres.user FORGE_DB_USER | Postgres user name. |
postgres.pass FORGE_DB_PASS | Postgres user's password. |
dialect FORGE_DB_DIALECT | Can be sqlite (default), postgres, mysql or bolt (deprecated). |
*.options FORGE_DB_OPTIONS | JSON map which contains database connection options. |
| Security | |
access_key_encryption FORGE_ACCESS_KEY_ENCRYPTION | Secret key used for encrypting access keys in database. Read more in Database encryption reference. |
cookie_hash FORGE_COOKIE_HASH | Secret key used to sign cookies. |
cookie_encryption FORGE_COOKIE_ENCRYPTION | Secret key used to encrypt cookies. |
web_host FORGE_WEB_ROOT | Can be useful if you want to serve Forge from a subpath, for example: http://yourdomain.com/semaphore. Do not add a trailing /. |
tls.enabled FORGE_TLS_ENABLED | Enable or disable TLS (HTTPS) for secure communication with the Forge server. |
tls.cert_file FORGE_TLS_CERT_FILE | Path to TLS certificate file. |
tls.key_file FORGE_TLS_KEY_FILE | Path to TLS key file. |
tls.http_redirect_port FORGE_TLS_HTTP_REDIRECT_PORT | Port to redirect HTTP traffic to HTTPS. |
auth.totp.enabled FORGE_TOTP_ENABLED | Enable two-factor authentication using TOTP. |
auth.totp.allow_recovery FORGE_TOTP_ALLOW_RECOVERY | Allow users to reset TOTP using a recovery code. |
| Process | |
process.user FORGE_PROCESS_USER | User under which wrapped processes (such as Ansible, Terraform, or OpenTofu) will run. |
process.uid FORGE_PROCESS_UID | ID of user under which wrapped processes (such as Ansible, Terraform, or OpenTofu) will run. |
process.gid FORGE_PROCESS_GID | ID for group under which wrapped processes (such as Ansible, Terraform, or OpenTofu) will run. |
process.chroot FORGE_PROCESS_CHROOT | Chroot directory for wrapped processes. |
email_sender FORGE_EMAIL_SENDER | Email address of the sender. |
email_host FORGE_EMAIL_HOST | SMTP server hostname. |
email_port FORGE_EMAIL_PORT | SMTP server port. |
email_secure FORGE_EMAIL_SECURE | Enable StartTLS to upgrade an unencrypted SMTP connection to a secure, encrypted one. |
email_tls FORGE_EMAIL_TLS | Use SSL or TLS connection for communication with the SMTP server. |
email_tls_min_version FORGE_EMAIL_TLS_MIN_VERSION | Minimum TLS version to use for the connection. |
email_username FORGE_EMAIL_USERNAME | Username for SMTP server authentication. |
email_password FORGE_EMAIL_PASSWORD | Password for SMTP server authentication. |
email_alert FORGE_EMAIL_ALERT | Flag which enables email alerts. |
| Messengers | |
telegram_alert FORGE_TELEGRAM_ALERT | Set to True to enable pushing alerts to Telegram. It should be used in combination with telegram_chat and telegram_token. |
telegram_chat FORGE_TELEGRAM_CHAT | Set to the Chat ID for the chat to send alerts to. Read more in Telegram Notifications Setup |
telegram_token FORGE_TELEGRAM_TOKEN | Set to the Authorization Token for the bot that will receive the alert payload. Read more in Telegram Notifications Setup |
slack_alert FORGE_SLACK_ALERT | Set to True to enable pushing alerts to Slack. It should be used in combination with slack_url. |
slack_url FORGE_SLACK_URL | The Slack webhook URL. Forge will use it to POST Slack-formatted JSON alerts to the provided URL. |
microsoft_teams_alert FORGE_MICROSOFT_TEAMS_ALERT | Flag which enables Microsoft Teams alerts. |
microsoft_teams_url FORGE_MICROSOFT_TEAMS_URL | Microsoft Teams webhook URL. |
rocketchat_alert FORGE_ROCKETCHAT_ALERT | Set to True to enable pushing alerts to Rocket.Chat. It should be used in combination with rocketchat_url. Available since v2.9.56. |
rocketchat_url FORGE_ROCKETCHAT_URL | The Rocket.Chat webhook URL. Forge will use it to POST Rocket.Chat-formatted JSON alerts to the provided URL. Available since v2.9.56. |
dingtalk_alert FORGE_DINGTALK_ALERT | Enable Dingtalk alerts. |
dingtalk_url FORGE_DINGTALK_URL | Dingtalk messenger webhook URL. |
gotify_alert FORGE_GOTIFY_ALERT | Enable Gotify alerts. |
gotify_url FORGE_GOTIFY_URL | Gotify server URL. |
gotify_token FORGE_GOTIFY_TOKEN | Gotify server token. |
| LDAP | |
ldap_enable FORGE_LDAP_ENABLE | Flag which enables LDAP authentication. |
ldap_needtls FORGE_LDAP_NEEDTLS | Flag to enable or disable TLS for LDAP connections. |
ldap_binddn FORGE_LDAP_BIND_DN | The distinguished name (DN) used to bind to the LDAP server for authentication. |
ldap_bindpassword FORGE_LDAP_BIND_PASSWORD | The password used to bind to the LDAP server for authentication. |
ldap_server FORGE_LDAP_SERVER | The hostname and port of the LDAP server (e.g., ldap-server.com:1389). |
ldap_searchdn FORGE_LDAP_SEARCH_DN | The base distinguished name (DN) used for searching users in the LDAP directory (e.g., dc=example,dc=org). |
ldap_searchfilter FORGE_LDAP_SEARCH_FILTER | The filter used to search for users in the LDAP directory (e.g., (&(objectClass=inetOrgPerson)(uid=%s))). |
ldap_mappings.dn FORGE_LDAP_MAPPING_DN | LDAP attribute to use as the distinguished name (DN) mapping for user authentication. |
ldap_mappings.mail FORGE_LDAP_MAPPING_MAIL | LDAP attribute to use as the email address mapping for user authentication. |
ldap_mappings.uid FORGE_LDAP_MAPPING_UID | LDAP attribute to use as the user ID (UID) mapping for user authentication. |
ldap_mappings.cn FORGE_LDAP_MAPPING_CN | LDAP attribute to use as the common name (CN) mapping for user authentication. |
| Logging | |
log.events.format FORGE_EVENT_LOG_FORMAT | Event log format. Can be json or empty for text. |
log.events.enabled FORGE_EVENT_LOG_ENABLED | Enable or disable event logging. |
log.events.logger FORGE_EVENT_LOGGER | JSON map which contains event logger configuration. |
log.tasks.format FORGE_TASK_LOG_FORMAT | Task log format. Can be json or empty for text. |
log.tasks.enabled FORGE_TASK_LOG_ENABLED | Enable or disable task logging. |
log.tasks.logger FORGE_TASK_LOGGER | JSON map which contains task logger configuration. |
log.tasks.result_logger FORGE_TASK_RESULT_LOGGER | JSON map which contains task result logger configuration. |
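Every option in the table maps to an environment variable, so any of them can be overridden without touching config.json. A minimal sketch (the values are illustrative, not recommendations):

```shell
# override options via environment variables before starting the server
export FORGE_PORT=4000
export FORGE_TMP_PATH=/var/lib/semaphore/tmp
# the server would then pick these up at startup, e.g.:
# semaphore server
echo "$FORGE_PORT" # prints 4000
```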
Frequently asked questions
1. How to configure a public URL for Forge
If you use Nginx or another web server in front of Forge, you should set the configuration option web_host.
For example, suppose you configured Nginx to proxy requests to Forge: the server address is https://example.com, and all requests to https://example.com/semaphore are proxied to Forge.
Your web_host will then be https://example.com/semaphore.
Configuration file
Creating configuration file
Forge uses a config.json file for its core configuration. You can generate this file interactively using built-in tools or through a web-based configurator.
Generate via CLI
Use the following commands to generate the configuration file interactively:
- For the Forge server:
semaphore setup
- For the Forge runner:
semaphore runner setup
For more details about runner configuration, see the Runners section.
Generate via Web
Alternatively, you can use the web-based interactive configurator:
Configuration file example
Forge uses a config.json configuration file with the following content:
{
  "mysql_test": {
    "host": "127.0.0.1:3306",
    "user": "root",
    "pass": "***",
    "name": "semaphore"
  },
  "dialect": "mysql",
  "git_client": "go_git",
  "auth": {
    "totp": {
      "enabled": false,
      "allow_recovery": true
    }
  },
  "use_remote_runner": true,
  "runner_registration_token": "73fs***",
  "tmp_path": "/tmp/semaphore",
  "cookie_hash": "96Nt***",
  "cookie_encryption": "x0bs***",
  "access_key_encryption": "j1ia***",
  "max_tasks_per_template": 3,
  "schedule": {
    "timezone": "UTC"
  },
  "log": {
    "events": {
      "enabled": true,
      "path": "./events.log"
    }
  },
  "process": {
    "chroot": "/opt/semaphore/sandbox"
  }
}
Configuration file usage
- For Forge server:
semaphore server --config ./config.json
- For Forge runner:
semaphore runner start --config ./config.json
Environment variables
Using environment variables, you can override any available configuration option.
You can use the interactive environment variables generator (for Docker):
Application environment for apps (Ansible, Terraform, etc.)
Forge can pass environment variables to application processes (Ansible, Terraform/OpenTofu, Python, PowerShell, etc.). There are two related options:
- env_vars / SEMAPHORE_ENV_VARS: static key-value pairs that will be set for app processes.
- forwarded_env_vars / SEMAPHORE_FORWARDED_ENV_VARS: a list of variable names the server will forward from its own process environment.
Example configuration file:
{
  "env_vars": {
    "HTTP_PROXY": "http://proxy.internal:3128",
    "ANSIBLE_STDOUT_CALLBACK": "yaml"
  },
  "forwarded_env_vars": [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "GOOGLE_APPLICATION_CREDENTIALS"
  ]
}
Equivalent with environment variables:
export SEMAPHORE_ENV_VARS='{"HTTP_PROXY":"http://proxy.internal:3128","ANSIBLE_STDOUT_CALLBACK":"yaml"}'
export SEMAPHORE_FORWARDED_ENV_VARS='["AWS_ACCESS_KEY_ID","AWS_SECRET_ACCESS_KEY","GOOGLE_APPLICATION_CREDENTIALS"]'
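These JSON strings are an easy place for a quoting mistake. A quick sanity check before starting the server, assuming python3 is available on the host:

```shell
export SEMAPHORE_ENV_VARS='{"HTTP_PROXY":"http://proxy.internal:3128","ANSIBLE_STDOUT_CALLBACK":"yaml"}'
export SEMAPHORE_FORWARDED_ENV_VARS='["AWS_ACCESS_KEY_ID","AWS_SECRET_ACCESS_KEY"]'
# both values must parse as valid JSON, or the options will not take effect as intended
echo "$SEMAPHORE_ENV_VARS" | python3 -m json.tool > /dev/null && echo "env_vars: valid JSON"
echo "$SEMAPHORE_FORWARDED_ENV_VARS" | python3 -m json.tool > /dev/null && echo "forwarded_env_vars: valid JSON"
```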
Notes:
- Forwarding is explicit: only variables listed in forwarded_env_vars are inherited by app processes.
- Secrets should be provided securely (for example via Docker/Kubernetes secrets) and then forwarded using forwarded_env_vars.
Secret environment variables in Variable Groups
In addition to global environment variables, you can define per-project secrets in Variable Groups. Secret keys are masked in the UI and logs. See User Guide → Variable Groups for usage and Terraform integration with TF_VAR_* variables.
Interactive setup
Use this option for first-time configuration (it does not work for Forge installed via Snap).
semaphore setup
🔐 Security
Introduction
Security is a top priority in Forge. Whether you're automating critical infrastructure tasks or managing team access to sensitive systems, Forge is designed to provide robust, secure operations out of the box. This section outlines how Forge handles security and what you should consider when deploying it in production.
Authentication & authorization
Forge supports secure authentication and flexible authorization mechanisms:
- Login methods:
- Username/password
Default method using credentials stored in the Forge database. Passwords are hashed using a strong algorithm (bcrypt).
- LDAP
Allows integration with enterprise directory services. Supports user/group filtering and secure connections via LDAPS.
- OpenID Connect (OIDC)
Enables single sign-on with identity providers like Google, Azure AD, or Keycloak. Supports custom claims and group mappings.
- Two-factor authentication (2FA)
TOTP-based 2FA is available and recommended for all users. It can be enabled per user and supports optional recovery codes. See configuration options auth.totp.enabled and auth.totp.allow_recovery.
- Role-based access control
You can assign different roles to users such as Admin, Maintainer, or Viewer, limiting access based on responsibility.
- Session management
Sessions are protected with secure HTTP cookies. Session expiration and logout mechanisms ensure minimal exposure.
Secrets & credentials
Managing secrets securely is a core feature:
- Encrypted key store
Credentials and secret variables are encrypted at rest using AES encryption.
- Environment isolation
Secrets are only passed to jobs at runtime and are not exposed to the container environment directly.
- SSH keys and tokens
Users are responsible for uploading valid SSH keys and tokens. These are encrypted and only used when running tasks.
- HashiCorp Vault integration (Pro)
Secrets can be stored in an external Vault instance. Choose storage per-secret when creating or editing a secret.
Running untrusted code / playbooks
Forge runs user-defined playbooks and commands, which can be risky:
- Container isolation
Tasks are executed in isolated Docker containers. These containers have no access to the host system.
- Least privilege
Containers run with minimal permissions and can be restricted further using Docker flags.
- Chroot execution
Forge can execute tasks inside a chroot jail to further isolate the execution environment from the host system.
- Task process user
Tasks can be executed under a dedicated non-root system user (e.g., forge) to reduce the impact of potential exploits. This is optional and can be configured based on system policies.
Secure Deployment
To ensure Forge is securely deployed:
- Use HTTPS
Forge supports HTTPS both via its built-in TLS support and through a reverse proxy like Nginx. It is strongly recommended to enable HTTPS in production. To enable built-in HTTPS support, add the following block to config.json:
{
  ...
  "tls": {
    "enabled": true,
    "cert_file": "/path/to/cert/example.com.cert",
    "key_file": "/path/to/key/example.com.key"
  }
  ...
}
- Run behind a firewall
Limit access to Forge and its database to trusted IPs only.
- Database security
Use strong passwords and restrict database access to Forge only.
Updates & patch management
Security updates are published regularly:
- Stay updated
Always use the latest stable release.
- Changelog
Review changes on GitHub before updating.
- Automatic updates
If using Docker, consider automation pipelines for regular updates.
Reporting Vulnerabilities
Found a vulnerability? Help us keep Forge secure:
- Responsible disclosure
Please email us at security@forge.com.
Vulnerability resolution targets
We aim to resolve reported vulnerabilities within the following target windows:
- Critical: within 30 days
- High: within 60 days
- Medium: within 90 days
- Low: best effort, typically within 180 days
Out-of-cycle patches may be released for actively exploited issues affecting the latest stable releases.
Code security tooling
We use CodeQL, Codacy, Snyk and Renovate to analyze the codebase and dependencies, and to automate dependency updates.
- No public exploits
Do not share vulnerabilities publicly until patched.
- Acknowledgments
Security researchers may be acknowledged in release notes if desired.
Database security
Data encryption
Sensitive data is stored in the database in encrypted form. You should set the configuration option access_key_encryption in the configuration file to enable Access Key encryption. The key can be generated with the following command:
head -c32 /dev/urandom | base64
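The same command can also be used for the cookie_hash and cookie_encryption options from the configuration reference; each secret should be generated independently (the variable names below are just for illustration):

```shell
# generate three independent 32-byte secrets, base64-encoded
access_key_encryption=$(head -c32 /dev/urandom | base64)
cookie_hash=$(head -c32 /dev/urandom | base64)
cookie_encryption=$(head -c32 /dev/urandom | base64)
# a 32-byte key encodes to 44 base64 characters
echo "${#access_key_encryption}" # prints 44
```

Paste each value into the matching option in config.json, and keep access_key_encryption safe: data encrypted with it cannot be decrypted without it.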
Network security
For security reasons, Forge should not be used over unencrypted HTTP!
Why use encrypted connections? See: Article from Cloudflare.
Options you have:
VPN
You can use a Client-to-Site VPN, that terminates on the Forge server, to encrypt & secure the connection.
SSL
Forge supports SSL/TLS starting from v2.12.
config.json:
{
  ...
  "tls": {
    "enabled": true,
    "cert_file": "/path/to/cert/example.com.cert",
    "key_file": "/path/to/key/example.com.key"
  }
  ...
}
Or environment variables (useful for Docker):
export SEMAPHORE_TLS_ENABLED=True
export SEMAPHORE_TLS_CERT_FILE=/path/to/cert/example.com.cert
export SEMAPHORE_TLS_KEY_FILE=/path/to/key/example.com.key
Alternatively, you can use a reverse proxy in front of Forge to handle secure connections. For example:
Self-signed SSL certificate
You can generate your own SSL certificate using the openssl CLI tool:
openssl req -x509 -newkey rsa:4096 \
-keyout key.pem -out cert.pem \
-sha256 -days 3650 -nodes \
-subj "/C=US/ST=California/L=San Francisco/O=CompanyName/OU=DevOps/CN=example.com"
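Before pointing tls.cert_file and tls.key_file at the result, it's worth sanity-checking the generated pair. A sketch using the same openssl tool (a smaller throwaway key for speed; file names match the command above):

```shell
# generate a short-lived throwaway pair with the same flags
openssl req -x509 -newkey rsa:2048 \
  -keyout key.pem -out cert.pem \
  -sha256 -days 365 -nodes \
  -subj "/C=US/ST=California/L=San Francisco/O=CompanyName/OU=DevOps/CN=example.com"
# print the subject and expiry date of the certificate
openssl x509 -in cert.pem -noout -subject -enddate
# confirm the private key and certificate belong together: the moduli must match
[ "$(openssl rsa -in key.pem -noout -modulus)" = "$(openssl x509 -in cert.pem -noout -modulus)" ] \
  && echo "key matches cert"
```

Remember that clients will warn about a self-signed certificate; for public deployments, prefer the Let's Encrypt approach below.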
Let's Encrypt SSL certificate
You can use Certbot to generate and automatically renew a Let's Encrypt SSL certificate.
Example for Apache:
sudo snap install certbot
sudo certbot --apache -n --agree-tos -d example.com -m mail@example.com
Others
If you want to use any other reverse proxy - make sure to also forward websocket connections on the /api/ws route!
Nginx config
Configuration example:
server {
    listen 443 ssl;
    server_name example.com;

    # add Strict-Transport-Security to prevent man-in-the-middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;

    # SSL
    ssl_certificate /etc/nginx/cert/cert.pem;
    ssl_certificate_key /etc/nginx/cert/privkey.pem;

    # Recommendations from
    # https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    # required to avoid HTTP 411: see Issue #1486
    # (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_request_buffering off;
    }

    location /api/ws {
        proxy_pass http://127.0.0.1:3000/api/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Origin "";
    }
}
Apache config
Make sure you have enabled the following Apache modules (rewrite and ssl are needed for the RewriteCond and SSLEngine directives below):
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_wstunnel
sudo a2enmod rewrite
sudo a2enmod ssl
Add the following virtual host to your Apache configuration:
<VirtualHost *:443>
    ServerName example.com
    ServerAdmin webmaster@localhost

    SSLEngine on
    SSLCertificateFile /path/to/example.com.crt
    SSLCertificateKeyFile /path/to/example.com.key

    ProxyPreserveHost On

    <Location />
        ProxyPass http://127.0.0.1:3000/
        ProxyPassReverse http://127.0.0.1:3000/
    </Location>

    <Location /api/ws>
        RewriteCond %{HTTP:Connection} Upgrade [NC]
        RewriteCond %{HTTP:Upgrade} websocket [NC]
        ProxyPass ws://127.0.0.1:3000/api/ws/
        ProxyPassReverse ws://127.0.0.1:3000/api/ws/
    </Location>
</VirtualHost>
LDAP configuration
The configuration file contains the following LDAP parameters:
{
  "ldap_binddn": "cn=admin,dc=example,dc=org",
  "ldap_bindpassword": "admin_password",
  "ldap_server": "localhost:389",
  "ldap_searchdn": "ou=users,dc=example,dc=org",
  "ldap_searchfilter": "(&(objectClass=inetOrgPerson)(uid=%s))",
  "ldap_mappings": {
    "dn": "",
    "mail": "uid",
    "uid": "uid",
    "cn": "cn"
  },
  "ldap_enable": true,
  "ldap_needtls": false
}
All SSO provider options:
| Parameter | Environment Variables | Description |
|---|---|---|
ldap_binddn | SEMAPHORE_LDAP_BIND_DN | Name of LDAP user object to bind. |
ldap_bindpassword | SEMAPHORE_LDAP_BIND_PASSWORD | Password of LDAP user defined in Bind DN. |
ldap_server | SEMAPHORE_LDAP_SERVER | LDAP server host including port. For example: localhost:389. |
ldap_searchdn | SEMAPHORE_LDAP_SEARCH_DN | Scope where users will be searched. For example: ou=users,dc=example,dc=org. |
ldap_searchfilter | SEMAPHORE_LDAP_SEARCH_FILTER | Users search expression. Default: (&(objectClass=inetOrgPerson)(uid=%s)), where %s will be replaced with the entered login. |
ldap_mappings.dn | SEMAPHORE_LDAP_MAPPING_DN | User DN claim expression*. |
ldap_mappings.mail | SEMAPHORE_LDAP_MAPPING_MAIL | User email claim expression*. |
ldap_mappings.uid | SEMAPHORE_LDAP_MAPPING_UID | User login claim expression*. |
ldap_mappings.cn | SEMAPHORE_LDAP_MAPPING_CN | User name claim expression*. |
ldap_enable | SEMAPHORE_LDAP_ENABLE | LDAP enabled. |
ldap_needtls | SEMAPHORE_LDAP_NEEDTLS | Connect to LDAP server by SSL. |
*Claim expression
Example of claim expression:
email | {{ .username }}@your-domain.com
Forge attempts to use the email claim first. If it is empty, the expression following the | is evaluated.
"username_claim": "|" generates a random username for each user who logs in through the provider.
Troubleshooting
Use ldapwhoami tool to check if your BindDN works:
This tool is provided by the openldap-clients package.
ldapwhoami \
  -H ldap://ldap.com:389 \
  -D "CN=your_ldap_binddn_value_in_config" \
  -x \
  -W
It will ask interactively for the password, and should return code 0 and echo out the DN as specified.
Example: Using OpenLDAP Server
Run the following command to start your own LDAP server with an admin account and an additional user:
docker run -d --name openldap \
-p 1389:1389 \
-p 1636:1636 \
-e LDAP_ADMIN_USERNAME=admin \
-e LDAP_ADMIN_PASSWORD=pwd \
-e LDAP_USERS=user1 \
-e LDAP_PASSWORDS=pwd \
-e LDAP_ROOT=dc=example,dc=org \
-e LDAP_ADMIN_DN=cn=admin,dc=example,dc=org \
bitnami/openldap:latest
Your LDAP configuration for Forge should be as follows:
{
  "ldap_binddn": "cn=admin,dc=example,dc=org",
  "ldap_bindpassword": "pwd",
  "ldap_server": "ldap-server.com:1389",
  "ldap_searchdn": "dc=example,dc=org",
  "ldap_searchfilter": "(&(objectClass=inetOrgPerson)(uid=%s))",
  "ldap_mappings": {
    "mail": "{{ .cn }}@ldap.your-domain.com",
    "uid": "|",
    "cn": "cn"
  },
  "ldap_enable": true,
  "ldap_needtls": false
}
To run Forge in Docker, use the following LDAP configuration:
docker run -d -p 3000:3000 --name semaphore \
-e SEMAPHORE_DB_DIALECT=bolt \
-e SEMAPHORE_ADMIN=admin \
-e SEMAPHORE_ADMIN_PASSWORD=changeme \
-e SEMAPHORE_ADMIN_NAME=Admin \
-e SEMAPHORE_ADMIN_EMAIL=admin@localhost \
-e SEMAPHORE_LDAP_ENABLE=yes \
-e SEMAPHORE_LDAP_SERVER=ldap-server.com:1389 \
-e SEMAPHORE_LDAP_BIND_DN=cn=admin,dc=example,dc=org \
-e SEMAPHORE_LDAP_BIND_PASSWORD=pwd \
-e SEMAPHORE_LDAP_SEARCH_DN=dc=example,dc=org \
-e 'SEMAPHORE_LDAP_SEARCH_FILTER=(&(objectClass=inetOrgPerson)(uid=%s))' \
-e 'SEMAPHORE_LDAP_MAPPING_MAIL={{ .cn }}@ldap.your-domain.com' \
-e 'SEMAPHORE_LDAP_MAPPING_UID=|' \
-e 'SEMAPHORE_LDAP_MAPPING_CN=cn' \
semaphoreui/semaphore:latest
OpenID
Forge supports authentication via OpenID Connect (OIDC).
Links:
- GitHub config
- Google config
- GitLab config
- Authelia config
- Authentik config
- Keycloak config
- Okta config
- Azure config
- Zitadel config
Example of SSO provider configuration:
{
"oidc_providers": {
"mysso": {
"display_name": "Sign in with MySSO",
"color": "orange",
"icon": "login",
"provider_url": "https://mysso-provider.com",
"client_id": "***",
"client_secret": "***",
"redirect_url": "https://your-domain.com/api/auth/oidc/mysso/redirect"
}
}
}
Configure via environment variable
When running in containers it may be convenient to configure providers using a single environment variable:
SEMAPHORE_OIDC_PROVIDERS='{
"github": {
"client_id": "***",
"client_secret": "***"
}
}'
This value must be a valid JSON string matching the oidc_providers structure above.
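Because the entire provider map travels through a single environment variable, a stray quote or trailing comma will silently break SSO at startup. A quick way to sanity-check the value before deploying, sketched with Python's standard library:

```python
import json

# The JSON you intend to pass as SEMAPHORE_OIDC_PROVIDERS
value = '''{
  "github": {
    "client_id": "***",
    "client_secret": "***"
  }
}'''

try:
    providers = json.loads(value)  # raises json.JSONDecodeError on malformed input
    print("Valid JSON; providers:", sorted(providers))
except json.JSONDecodeError as err:
    print("Invalid JSON:", err)
```

Any JSON validator works equally well; the point is to catch syntax errors before the container starts.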
All SSO provider options:
| Parameter | Description |
|---|---|
display_name | Provider name displayed on the Login screen. |
icon | MDI icon displayed before the provider name on the Login screen. |
color | Color of the provider button on the Login screen. |
client_id | Provider client ID. |
client_id_file | The path to the file where the provider's client ID is stored. Has lower priority than client_id. |
client_secret | Provider client Secret. |
client_secret_file | The path to the file where the provider's client secret is stored. Has lower priority than client_secret. |
redirect_url | |
provider_url | |
scopes | |
username_claim | Username claim expression*. |
email_claim | Email claim expression*. |
name_claim | Profile Name claim expression*. |
order | Position of the provider button on the Sign in screen. |
endpoint.issuer | |
endpoint.auth | |
endpoint.token | |
endpoint.userinfo | |
endpoint.jwks | |
endpoint.algorithms |
*Claim expression
Example of claim expression:
email | {{ .username }}@your-domain.com
Forge attempts to resolve the email claim first. If it is empty, the expression following the | is evaluated instead.
"username_claim": "|" generates a random username for each user who logs in through the provider.
Sign in screen
For each of the configured providers, an additional login button is added to the login page:

GitHub config
config.json:
{
"oidc_providers": {
"github": {
"icon": "github",
"display_name": "Sign in with GitHub",
"client_id": "***",
"client_secret": "***",
"redirect_url": "https://your-domain.com/api/auth/oidc/github/redirect",
"endpoint": {
"auth": "https://github.com/login/oauth/authorize",
"token": "https://github.com/login/oauth/access_token",
"userinfo": "https://api.github.com/user"
},
"scopes": ["read:user", "user:email"],
"username_claim": "|",
"email_claim": "email | {{ .id }}@github.your-domain.com",
"name_claim": "name",
"order": 1
}
}
}
Google config
config.json:
{
"oidc_providers": {
"google": {
"color": "blue",
"icon": "google",
"display_name": "Sign in with Google",
"provider_url": "https://accounts.google.com",
"client_id": "***.apps.googleusercontent.com",
"client_secret": "GOCSPX-***",
"redirect_url": "https://your-domain.com/api/auth/oidc/google/redirect",
"username_claim": "|",
"name_claim": "name",
"order": 2
}
}
}
GitLab config
config.json:
{
"oidc_providers": {
"gitlab": {
"display_name": "Sign in with GitLab",
"color": "orange",
"icon": "gitlab",
"provider_url": "https://gitlab.com",
"client_id": "***",
"client_secret": "gloas-***",
"redirect_url": "https://your-domain.com/api/auth/oidc/gitlab/redirect",
"username_claim": "|",
"order": 3
}
}
}
Tutorial in Forge blog: GitLab authentication in Forge.
Gitea config
config.json:
"oidc_providers": {
"github": {
"icon": "github",
"display_name": "Sign in with gitea instance",
"client_id": "123-456-789",
"client_secret": "**********",
"redirect_url": "https://your-semaphore.tld/api/auth/oidc/github/redirect",
"endpoint": {
"auth": "https://your-gitea.tld/login/oauth/authorize",
"token": "https://your-gitea.tld/login/oauth/access_token",
"userinfo": "https://your-gitea.tld/api/v1/user"
},
"scopes": ["read:user", "user:email"],
"username_claim": "login",
"email_claim": "email",
"name_claim": "full_name",
"order": 1
}
}
In your Gitea instance, go to https://your-gitea.tld/user/settings/applications and create a new OAuth2 application.
As redirect URI use https://your-semaphore.tld/api/auth/oidc/github/redirect.
Authentication works, but the "Name" and "Username" fields are not received correctly: the username will be a unique ID in Forge, and the name will be set to "Anonymous" (the user can change it later). The email is mapped correctly.
Authelia config
Authelia config.yaml:
identity_providers:
oidc:
claims_policies:
semaphore_claims_policy:
id_token:
- groups
- email
- email_verified
- alt_emails
- preferred_username
- name
clients:
- client_id: semaphore
client_name: Forge
client_secret: 'your_secret'
claims_policy: semaphore_claims_policy
public: false
authorization_policy: two_factor
redirect_uris:
- https://your-semaphore-domain.com/api/auth/oidc/authelia/redirect
scopes:
- openid
- profile
- email
userinfo_signed_response_alg: none
Forge config.json:
"oidc_providers": {
"authelia": {
"display_name": "Authelia",
"provider_url": "https://your-authelia-domain.com",
"client_id": "semaphore",
"client_secret": "your_secret",
"redirect_url": "https://your-semaphore-domain.com/api/auth/oidc/authelia/redirect"
}
},
Authentik config
config.json:
{
"oidc_providers": {
"authentik": {
"display_name": "Sign in with Authentik",
"provider_url": "https://authentik.example.com/application/o/<slug>/",
"client_id": "<client-id>",
"client_secret": "<client-secret>",
"redirect_url": "https://semaphore.example.com/api/auth/oidc/authentik/redirect/",
"scopes": ["openid", "profile", "email"],
"username_claim": "preferred_username",
"name_claim": "preferred_username"
}
}
}
Discussion on GitHub: #1663.
See also description in authentik docs.
Keycloak config
config.json:
{
"oidc_providers": {
"keycloak": {
"display_name": "Sign in with keycloak",
"provider_url": "https://keycloak.example.com/realms/master",
"client_id": "***",
"client_secret": "***",
"redirect_url": "https://forge.example.com/api/auth/oidc/keycloak/redirect"
}
}
}
Related GitHub Issues
- #2308 — How to disable certificate validation for Keycloak server
- #2314 — Option to disable TLS verification
- #1496 — Log out from Keycloak session when logging out from Forge
Explore all Keycloak-related issues →
Related GitHub Discussions
Explore all Keycloak-related discussions →
Okta config
config.json:
{
"oidc_providers": {
"okta": {
"display_name":"Sign in with Okta",
"provider_url":"https://trial-776xxxx.okta.com/oauth2/default",
"client_id":"***",
"client_secret":"***",
"redirect_url":"https://semaphore.example.com/api/auth/oidc/okta/redirect/"
}
}
}
Related GitHub Issues
- #1434 — Help with OIDC Azure AD configuration/debugging
- #1864 — v2.9.56 breaks oidc auth with keycloak
- #1329 — testing oidc_providers
Explore all Okta-related issues →
Related GitHub Discussions
- #2822 — When setting up GitHub OpenID, parsing is not possible except for Email
- #1030 — SAML support?
Explore all Okta-related discussions →
Azure config
config.json:
{
"oidc_providers": {
"azure": {
"color": "blue",
"display_name": "Sign in with Azure (Entra ID)",
"provider_url": "https://login.microsoftonline.com/<Tenant ID>/v2.0",
"client_id": "<ID>",
"client_secret": "<secret>",
"redirect_url": "https://semaphore.test.com/api/auth/oidc/azure/redirect"
}
}
}
Zitadel config
config.json:
{
"oidc_providers": {
"zitadel":
{
"provider_url": "https://your-domain.zitadel.cloud",
"display_name": "ZITADEL",
"client_id": "***",
"client_secret": "***",
"redirect_url": "https://your-domain.com:3000/api/auth/oidc/zitadel/redirect",
"email_claim": "email"
}
}
}
Tutorial on Zitadel: OpenID Connect Endpoints in ZITADEL.
Known issues:
- To avoid the error claim 'email' missing or has bad format, enable the "User Info inside ID Token" option for the application in the Zitadel console.
CLI
Common config options
| Option | Description |
|---|---|
--config config.json | Path to the configuration file. |
--no-config | Do not use any configuration file. Only environment variables will be used. |
--log-level ERROR | Log level. One of: DEBUG, INFO, WARN, ERROR, FATAL, PANIC. |
Version
Print current version.
semaphore version
Help
Print list of supported commands.
semaphore help
Database migration
Update database schema to latest version.
semaphore migrate
Interactive setup
Use this option for first time configuration.
semaphore setup
Server mode
Start the server.
semaphore server
Runner mode
Start the runner.
semaphore runner
Users
Using the CLI you can add, remove, or modify users.
semaphore user --help
How to add admin user
semaphore user add \
--admin \
--login newAdmin \
--email new-admin@example.com \
--name "New Admin" \
--password "New$Password"
How to change user password
semaphore user change-by-login \
--login myAdmin \
--password "New$Password"
TOTP management
Manage time-based one-time passwords (2FA) via the CLI:
semaphore user totp --help
Examples:
# Enable TOTP for a user
semaphore user totp enable --login john
# Generate recovery codes (if allowed by config)
semaphore user totp recovery --login john
Vaults
You can re-encrypt the secrets stored in your database using the following command:
forge vault rekey --old-key <encryption-key-which-used-before>
Your data will be decrypted using <encryption-key-which-used-before> and re-encrypted using the access_key_encryption option from your configuration file.
Multiple vault passwords (Ansible)
You can define multiple Ansible Vault passwords in the Key Store and attach them to an Ansible template. During execution, Forge will provide all configured passwords to Ansible so it can decrypt any referenced vaults.
Database Migrations
Database migrations allow you to update or roll back your Forge database schema to match the requirements of different Forge versions. This is essential for upgrades, downgrades, and maintaining compatibility.
Getting Help
To see all available migration commands and options, run:
forge migrations --help
Applying Migrations
Apply All Pending Migrations
To apply all available migrations and bring your database up to date:
forge migrate
Apply Migrations Up to a Specific Version
To migrate your database schema up to a specific version, use:
forge migrate --apply-to <version>
<version>: The target migration version (e.g., 2.15 or 2.14.4).
Example:
forge migrate --apply-to 2.15.1
Rolling Back Migrations
To undo migrations and roll back your database schema to a previous version:
forge migrate --undo-to <version>
<version>: The migration version you want to roll back to (e.g., 2.13 or 2.14.4).
Example:
forge migrate --undo-to 2.13
Troubleshooting
- Always back up your database before applying or rolling back migrations.
- If you encounter errors, check the logs for details and ensure your CLI version matches your Forge server version.
API
API reference
Forge provides two formats of API documentation, so you can choose the one that fits your workflow best:
- Swagger/OpenAPI — ideal if you prefer an interactive, browser-based experience.
- Official Postman Collection — explore and test all endpoints in Postman.
- Built-in Swagger API documentation — interactive API documentation powered by Swagger UI. You can access it on your instance.

All options include complete documentation of available endpoints, parameters, and example responses.
Getting Started with the API
To start using the Forge API, you need to generate an API token. This token must be included in the request header as:
Authorization: Bearer YOUR_API_TOKEN
Creating an API Token
There are two ways to create an API token:
- Through the web interface
- Using HTTP request
Through the web interface (since 2.14)
You can create and manage your API tokens via the Forge web UI:
Using HTTP request
You can also authenticate and generate a session token using a direct HTTP request.
Log in to Forge (backslashes in the password must be escaped, e.g. slashy\\pass instead of slashy\pass):
curl -v -c /tmp/forge-cookie -XPOST \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"auth": "YOUR_LOGIN", "password": "YOUR_PASSWORD"}' \
http://localhost:3000/api/auth/login
Generate a new token and retrieve it:
curl -v -b /tmp/forge-cookie -XPOST \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
http://localhost:3000/api/user/tokens
The command should return something similar to:
{
"id": "YOUR_ACCESS_TOKEN",
"created": "2025-05-21T02:35:12Z",
"expired": false,
"user_id": 3
}
Using token to make API requests
Once you have your API token, include it in the Authorization header to authenticate your requests.
Launch a task
Use this token for launching a task or anything else:
curl -v -XPOST \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
-d '{"template_id": 1}' \
http://localhost:3000/api/project/1/tasks
Expiring an API token
If you no longer need the token, you should expire it to keep your account secure.
To manually revoke (expire) an API token, send a DELETE request to the token endpoint:
curl -v -XDELETE \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
http://localhost:3000/api/user/tokens/YOUR_ACCESS_TOKEN
Pipelines
Forge supports simple pipelines using build and deploy tasks.
Forge passes forge_vars variable to each Ansible playbook which it runs.
You can use it in your Ansible tasks to get what type of task was run, which version should be build or deployed, who ran the task, etc.
Example of forge_vars for build tasks:
forge_vars:
task_details:
type: build
username: user123
message: New version of some feature
target_version: 1.5.33
Example of forge_vars for deploy tasks:
forge_vars:
task_details:
type: deploy
username: user123
message: Deploy new feature to servers
incoming_version: 1.5.33
Build
This type of task is used to create artifacts. Each build task has an auto-generated version. Use the variable forge_vars.task_details.target_version in your Ansible playbook to determine which version of the artifact should be created. After the artifact is created, it can be used for deployment.
Example of build Ansible role:
- Get app source code from GitHub
- Compile source code
- Pack the created binary into a tarball named app-{{ forge_vars.task_details.target_version }}.tar.gz
- Send app-{{ forge_vars.task_details.target_version }}.tar.gz to an S3 bucket
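The steps above could be sketched as an Ansible playbook like the following. The repository URL, paths, bucket name, and module choices (community.general.archive, amazon.aws.s3_object) are illustrative assumptions, not part of Forge itself:

```yaml
- hosts: build_server
  vars:
    version: "{{ forge_vars.task_details.target_version }}"
  tasks:
    - name: Get app source code from GitHub
      ansible.builtin.git:
        repo: https://github.com/example/app.git   # hypothetical repository
        dest: /tmp/app

    - name: Compile source code
      ansible.builtin.command: make build
      args:
        chdir: /tmp/app

    - name: Pack created binary into a versioned tarball
      community.general.archive:
        path: /tmp/app/bin
        dest: "/tmp/app-{{ version }}.tar.gz"
        format: gz

    - name: Send tarball to an S3 bucket
      amazon.aws.s3_object:
        bucket: my-artifacts   # hypothetical bucket name
        object: "app-{{ version }}.tar.gz"
        src: "/tmp/app-{{ version }}.tar.gz"
        mode: put
```

A deploy playbook would mirror this structure, downloading and unpacking the tarball selected by forge_vars.task_details.incoming_version.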
Deploy
This type of task is used to deploy artifacts to destination servers. Each deploy task is associated with a build task. Use the variable forge_vars.task_details.incoming_version in your Ansible playbook to determine which version of the artifact should be deployed.
Example of deploy Ansible role:
- Download app-{{ forge_vars.task_details.incoming_version }}.tar.gz from an S3 bucket to the destination servers
- Unpack app-{{ forge_vars.task_details.incoming_version }}.tar.gz to the destination directory
- Create or update configuration files
- Restart the app service
Runners
Runners enable running tasks on a separate server from Forge.
Forge runners operate on the same principle as GitLab or GitHub Actions runners:
- You launch a runner on a separate server, specifying the Forge server's address and an authentication token.
- The runner connects to Forge and signals its readiness to accept tasks.
- When a new task appears, Forge provides all the necessary information to the runner, which, in turn, clones the repository and runs Ansible, Terraform, PowerShell, etc.
- The runner sends the task execution results back to Forge.
For end users, working with Forge with or without runners appears the same.
Using runners offers the following advantages:
- Executing tasks more securely. For instance, a runner can be located within a closed subnet or an isolated Docker container.
- Distributing the workload across multiple servers. You can start multiple runners, and tasks will be randomly distributed among them.
Set up
Set up a server
To set up the server to work with runners, add the following options to your Forge server configuration:
{
"use_remote_runner": true,
"runner_registration_token": "long string of random characters"
}
or using environment variables:
FORGE_USE_REMOTE_RUNNER=True
FORGE_RUNNER_REGISTRATION_TOKEN=long_string_of_random_characters
Setup a runner
To set up the runner, use the following command:
forge runner setup --config /path/to/your/config/file.json
This command will create a configuration file at /path/to/your/config/file.json.
But before using this command, you need to understand how runners are registered on the server.
Registering the runner on the server
There are two ways to register a runner on the Forge server:
- Add it via the web interface or API.
- Use the command line with the forge runner register command.
Adding the runner via the web UI
Registering via CLI
To register a runner this way, you need to add the runner_registration_token option to your Forge server's configuration file. This option should be set to an arbitrary string. Choose a sufficiently complex string to avoid security issues.
When the forge runner setup command asks if you have a Runner token, answer No. Then use the following command to register the runner:
forge runner register --config /path/to/your/config/file.json
or
echo REGISTRATION_TOKEN | forge runner register --stdin-registration-token --config /path/to/your/config/file.json
Configuration file
As a result of running the forge runner setup command, a configuration file like the following will be created:
{
"tmp_path": "/tmp/forge",
"web_host": "https://forge_server_host",
// Here you can provide other settings, for example: git_client, ssh_config_path, etc.
// ...
// Runner specific options
"runner": {
"token": "your runner's token",
// or
"token_file": "path/to/the/file/where/runner/saves/token"
// Here you can provide other runner-specific options,
// which will be used for runner registration, for example:
// max_parallel_tasks, webhook, one_off, etc.
// ...
}
}
You can manually edit this file without needing to call forge runner setup again.
To re-register the runner, you can use the forge runner register command. This will overwrite the token in the file specified in the configuration.
Running the runner
Now you can start the runner with the command:
forge runner start --config /path/to/your/config/file.json
Your runner is now ready to execute tasks.
Runner tags (Pro)
You can assign one or more tags to a project runner. Templates can then require a tag so tasks run only on matching runners. Configure tags when adding a runner in the project UI, and set the required tag in the template settings.
Runner unregistration
You can remove a runner using the web interface.
Or unregister runner via CLI:
forge runner unregister --config /path/to/your/config/file.json
Security
Data transfer security is ensured by using asymmetric encryption: the server encrypts data using a public key, the runner decrypts it using a private key.
Public and private keys are generated automatically when the runner registers on the server.
Logs
Forge writes server logs to stdout and stores Task and Activity logs in a database, centralizing key log information and eliminating the need to back up log files separately. The only data stored on the file system is caching data.
Server log
Forge does not log to files. Instead, all application logs are written to stdout.
If Forge is running as a systemd service, you can view the logs with the following command:
journalctl -u forge.service -f
This provides a live (streaming) view of the logs.
Activity log
The Activity Log captures all user actions performed in Forge, including:
- Adding or removing resources (e.g., Templates, Inventories, Repositories).
- Adding or removing team members.
- Starting or stopping tasks.
Pro version 2.10 and later
Forge Pro 2.10+ supports writing the Activity Log and Task log to a file. To enable this, add the following configuration to your config.json:
{
"log": {
"events": {
"enabled": true,
"logger": {
"filename": "./events.log"
// other logger options
}
},
"tasks": {
"enabled": true,
"logger": {
"filename": "./tasks.log"
// other logger options
},
"result_logger": {
"filename": "./task_results.log"
// other logger options
}
}
}
}
Or you can do this using the following environment variables:
export FORGE_EVENT_LOG_ENABLED=True
export FORGE_EVENT_LOG_PATH=./events.log
export FORGE_TASK_LOG_ENABLED=True
export FORGE_TASK_LOG_PATH=./tasks.log
Activity (events) logging options
The Activity (events) logging options allow you to configure how Forge records user actions and system events to a file. These settings control the behavior of event logging, including whether it's enabled, the format of log entries, and specific logger configurations. When enabled, all user actions (like creating templates, managing teams, or running tasks) will be written to the specified log file according to these settings.
| Parameter | Environment Variables | Description |
|---|---|---|
enabled | FORGE_EVENT_LOG_ENABLED | Enable event logging to file. |
format | FORGE_EVENT_LOG_FORMAT | Log record format. Can be raw or json. |
logger | FORGE_EVENT_LOG_LOGGER | Logger options. |
Tasks logging options
The Tasks logging options allow you to configure how Forge records task execution details to a file. These settings control the logging of task-related events, including task starts, completions, and their execution status. When enabled, all task operations and their outcomes will be written to the specified log file according to these settings, providing a detailed audit trail of task execution history.
| Parameter | Environment Variables | Description |
|---|---|---|
enabled | FORGE_TASK_LOG_ENABLED | Enable task logging to file. |
format | FORGE_TASK_LOG_FORMAT | Log record format. Can be raw or json. |
logger | FORGE_TASK_LOG_LOGGER | Logger options. |
Task results logging options
| Parameter | Environment Variables | Description |
|---|---|---|
result_logger | FORGE_TASK_RESULT_LOGGER | Logger options. |
Logger options
| Parameter | Type | Description |
|---|---|---|
filename | String | Path and name of the file to write logs to. Backup log files will be retained in the same directory. |
maxsize | Integer | The maximum size in megabytes of the log file before it gets rotated. It defaults to 100 megabytes. |
maxage | Integer | The maximum number of days to retain old log files based on the timestamp encoded in their filename. Note that a day is defined as 24 hours and may not exactly correspond to calendar days due to daylight savings, leap seconds, etc. The default is not to remove old log files based on age. |
maxbackups | Integer | The maximum number of old log files to retain. The default is to retain all old log files (though MaxAge may still cause them to get deleted.) |
localtime | Boolean | Determines if the time used for formatting the timestamps in backup files is the computer's local time. The default is to use UTC time. |
compress | Boolean | Determines if the rotated log files should be compressed using gzip. The default is not to perform compression. |
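Putting the options together, an events logger with an explicit rotation policy might look like this (the values are illustrative):

```json
{
  "log": {
    "events": {
      "enabled": true,
      "logger": {
        "filename": "./events.log",
        "maxsize": 50,
        "maxbackups": 7,
        "maxage": 30,
        "localtime": false,
        "compress": true
      }
    }
  }
}
```

With this policy, events.log rotates once it reaches 50 MB, at most 7 compressed backups are kept, and backups older than 30 days are deleted.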
Each line in the file follows this format:
2024-01-03 12:00:34 user=234234 object=template action=delete
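A line in this format is straightforward to post-process. For example, a minimal parser (a sketch; the exact field set may vary by event type):

```python
import re

def parse_event_line(line: str):
    """Split a Forge activity-log line into its timestamp and key=value fields."""
    timestamp = line[:19]  # "YYYY-MM-DD HH:MM:SS" is always 19 characters
    fields = {m.group(1): m.group(2) for m in re.finditer(r"(\w+)=(\S+)", line[20:])}
    return timestamp, fields

ts, fields = parse_event_line(
    "2024-01-03 12:00:34 user=234234 object=template action=delete"
)
print(ts)      # 2024-01-03 12:00:34
print(fields)  # {'user': '234234', 'object': 'template', 'action': 'delete'}
```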
Task history
Forge stores information about task execution in the database. Task history provides a detailed view of all executed tasks, including their status and logs. You can monitor tasks in real time or review historical logs through the web interface.
Configuring task retention
By default, Forge stores all tasks in the database. If you run a large number of tasks, they can occupy a significant amount of disk space.
You can configure how many tasks are retained per template using one of the following approaches:
- Environment variable: FORGE_MAX_TASKS_PER_TEMPLATE=30
- config.json option: { "max_tasks_per_template": 30 }
When the number of tasks exceeds this limit, the oldest Task Logs are automatically deleted.
Summary
- Server log: Written to stdout; viewable via journalctl if running under systemd.
- Activity and tasks log: Tracks all user actions. Optionally, Pro 2.10+ can write these to a file.
- Task history: Stores real-time and historical task execution logs. Retention is configurable per template.
Following these guidelines ensures you have proper visibility into Forge operations while controlling storage usage and log retention.
Notifications
Forge can send notifications about task and project activity to popular channels. Configure a global notifier in config.json, and (where supported) override certain options per project.
Supported providers:
How it works
- Global configuration: Enable a provider and set its connection options in config.json on the Forge server. See each provider page for the exact keys.
- Events: Notifications are sent on key task lifecycle events (e.g., start, success, failure) and are posted to the configured channel/webhook.
- Per-project overrides: Some providers allow per-project overrides. For example, Telegram supports a project-specific chat ID.
Email notifications
Example config.json for configuring AWS SMTP email notifications:
{
"email_alert": true,
"email_sender": "noreply@example.com",
"email_host": "email-smtp.us-east-1.amazonaws.com",
"email_port": "587",
"email_secure": true,
"email_username": "<aws-key>",
"email_password": "<aws-secret>",
"email_tls": true,
"email_tls_min_version": "1.2"
}
Explanation of key settings:
- email_secure — enables StartTLS to upgrade the connection to a secure, encrypted channel.
- email_tls — forces TLS usage for SMTP connections.
- email_tls_min_version — minimum allowed TLS version (e.g. 1.2).
Telegram notifications
Pre-requisites
In order to configure Forge to send alerts via Telegram, a few steps are required beforehand on the Telegram side. You'll need to create your own bot that will receive the webhook and you'll need to know the ID of the chat you want to send the message to.
Bot setup
The easiest way to set up your own bot is to use @BotFather.
- In your Telegram client, message @BotFather with /start.
- Follow the prompts to create a new bot and take note of the Authorization Token given in the last step. Note: this token is secret and should be treated as such.
- Message your new bot with /start so the bot can receive messages.
Chat ID
- In your Telegram client, message @RawDataBot with any message.
- Copy the value of the id key in the chat map.
Testing
You can use cURL to validate your settings above as follows:
curl -X POST https://api.telegram.org/botYOUR_BOT_TOKEN/sendMessage \
-d chat_id=YOUR_CHAT_ID \
-d text="Test message from curl"
Configuration
Using the Chat ID and Token from the previous steps, you can now configure Forge to send Telegram Alerts as follows:
telegram_alert: True
telegram_chat: <chat id>
telegram_token: <token>
config.json example:
{
"telegram_alert": true,
"telegram_token": "64********:AAG****_rM6obyR********************",
"telegram_chat": ""
}
Per-project Chat IDs
Each project can use a unique Chat ID. This allows you to separate notifications by project rather than have them all go to the same chat. This overrides the global Chat ID from above.
Slack notifications
Slack notifications allow you to receive real-time updates about your Forge workflows directly in your Slack channels. This integration helps teams stay informed about build statuses, deployment results, and other important events without having to constantly check the Forge dashboard.
To set up Slack notifications, you need to create a webhook URL that connects Forge to your desired Slack channel. This webhook acts as a secure communication bridge between the two platforms.
Creating Slack webhook
Step 1. Open Slack API settings
- Go to https://api.slack.com/apps.
- Click Create New App → choose From Scratch.
- Give your app a name (e.g., Forge Bot) and select your Slack workspace.
Step 2. Enable incoming webhooks
- Inside the app settings, go to Features → Incoming Webhooks.
- Switch Activate Incoming Webhooks → On.
Step 3. Create a webhook URL
- Click Add New Webhook to Workspace.
- Select the channel where messages should be sent.
- Click Allow.
- You’ll see a Webhook URL like:
https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxx
Step 4. Test your webhook
Use curl to test:
curl -X POST -H 'Content-type: application/json' \
--data '{"text":"Hello from Forge 🚀"}' \
https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxx
If everything is set up, you’ll see the message in the selected Slack channel.
Forge configuration
Once you have your Slack webhook URL, you can configure Forge to send notifications in several ways:
You can enable Slack notifications using either configuration files or environment variables.
Method 1: Configuration file
Add the following settings to your Forge configuration file:
- slack_alert: Set to true to enable Slack notifications.
- slack_url: Your webhook URL from the previous step.
config.json example:
{
"slack_alert": true,
"slack_url": "https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxx"
}
Method 2: Environment variables
Alternatively, you can use environment variables to configure Slack notifications. This method is particularly useful for containerized deployments or when you want to keep sensitive information separate from configuration files.
FORGE_SLACK_ALERT=True
FORGE_SLACK_URL=https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxx
Microsoft Teams notifications
config.json example:
{
"microsoft_teams_alert": true,
"microsoft_teams_url": "..."
}
RocketChat notifications
config.json example:
{
"rocketchat_alert": true,
"rocketchat_url": "..."
}
DingTalk notifications
config.json example:
{
"dingtalk_alert": true,
"dingtalk_url": "..."
}
Gotify notifications
config.json example:
{
"gotify_alert": true,
"gotify_url": "...",
"gotify_token": "***"
}
Upgrading
There are four ways to upgrade Forge:
- Snap
- Package manager
- Docker
- Binary
Snap
Use the following command for upgrading Forge to the latest stable version:
sudo snap refresh forge
Package manager
Download a package file from Releases page.
*.deb for Debian and Ubuntu, *.rpm for CentOS and RedHat.
Install it using the package manager.
{{#tabs }} {{#tab name="Debian / Ubuntu (x64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_amd64.deb
sudo dpkg -i forge_2.15.0_linux_amd64.deb
{{#endtab }}
{{#tab name="Debian / Ubuntu (ARM64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_arm64.deb
sudo dpkg -i forge_2.15.0_linux_arm64.deb
{{#endtab }}
{{#tab name="CentOS (x64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_amd64.rpm
sudo yum install forge_2.15.0_linux_amd64.rpm
{{#endtab }}
{{#tab name="CentOS (ARM64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_arm64.rpm
sudo yum install forge_2.15.0_linux_arm64.rpm
{{#endtab }} {{#endtabs }}
Docker
Binary
Migrating from Snap to package/binary
Snap installation is deprecated. If you are migrating from Snap to a package or binary installation on the same host and were using BoltDB, ensure you move the BoltDB file and repositories directory and update the corresponding paths in config.json for database.boltdb and tmp_path. Also adjust file ownership for the service user (e.g., forge).
Download a *.tar.gz for your platform from Releases page. Unpack the binary to the directory where your old Forge binary is located.
{{#tabs }} {{#tab name="Linux (x64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_amd64.tar.gz
tar xf forge_2.15.0_linux_amd64.tar.gz
{{#endtab }}
{{#tab name="Linux (ARM64)" }}
wget https://github.com/forgeui/forge/releases/\
download/v2.15.0/forge_2.15.0_linux_arm64.tar.gz
tar xf forge_2.15.0_linux_arm64.tar.gz
{{#endtab }}
{{#tab name="Windows (x64)" }}
Invoke-WebRequest `
-Uri ("https://github.com/forgeui/forge/releases/" +
"download/v2.15.0/forge_2.15.0_windows_amd64.zip") `
-OutFile forge.zip
Expand-Archive -Path forge.zip -DestinationPath ./
{{#endtab }} {{#endtabs }}
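After unpacking the new binary over the old one, restart the service so the new version is picked up. Assuming a systemd unit named forge (adjust the unit name to your setup):

```shell
sudo systemctl restart forge
sudo systemctl status forge --no-pager   # confirm the service came back up
```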
Troubleshooting
Runner prints error 404
How to fix
Getting 401 error code from Runner
Gathering Facts issue for localhost
The issue can occur on Forge installed via Snap or Docker.
4:10:16 PM
TASK [Gathering Facts] *********************************************************
4:10:17 PM
fatal: [localhost]: FAILED! => changed=false
Why this happens
Ansible tries to gather facts locally, but it runs inside a restricted, isolated container that doesn't allow this.
For more information about localhost use in Ansible, read the article Implicit 'localhost'.
How to fix this
There are two ways:
- Disable facts gathering:
- hosts: localhost
gather_facts: False
roles:
- ...
- Explicitly set the connection type to ssh:
[localhost]
127.0.0.1 ansible_connection=ssh ansible_ssh_user=your_localhost_user
panic: pq: SSL is not enabled on the server
This means that your PostgreSQL server does not have SSL enabled.
How to fix this
Add option sslmode=disable to the configuration file:
"postgres": {
"host": "localhost",
"user": "pastgres",
"pass": "pwd",
"name": "semaphore",
"options": {
"sslmode": "disable"
}
},
fatal: bad numeric config value '0' for 'GIT_TERMINAL_PROMPT': invalid unit
This means that you are trying to access a repository over HTTPS that requires authentication.
How to fix this
- Go to the Key Store screen.
- Create a new key of the Login with password type.
- Specify your login for GitHub/Bitbucket/etc.
- Specify the password. You can't use your account password for GitHub/Bitbucket; use a Personal Access Token (PAT) instead. Read more here.
- After creating the key, go to the Repositories screen, find your repository, and specify the key.
unable to read LDAP response packet: unexpected EOF
Most likely, you are trying to connect to the LDAP server using an insecure method, although it expects a secure connection (via TLS).
How to fix this
Enable TLS in your config.json file:
...
"ldap_needtls": true
...
LDAP Result Code 49 "Invalid Credentials"
You have the wrong password or binddn.
How to fix this
Use the ldapwhoami tool to check whether your binddn works:
ldapwhoami \
  -H ldap://ldap.com:389 \
  -D "CN=/your/ldap_binddn/value/in/config/file" \
  -x \
  -W
It will ask interactively for the password and should return code 0 and echo out the DN as specified.
LDAP Result Code 32 "No Such Object"
Coming soon.
User Guide
Learn how to use Forge day-to-day: create projects, run tasks, manage compliance, build golden images, and more.
Getting Started
If you're new to Forge, start here:
- Getting Started - Overview of key concepts and first steps
- Projects - Understanding projects and how to create them
- Task Templates - Create reusable task definitions
- Running Your First Task - Execute tasks and view results
Core Features
Projects & Organization
- Projects - Organize your work with projects
- Teams - Manage team members and permissions
- Repositories - Connect Git repositories
- Key Store - Manage credentials securely
- Inventories - Define target hosts
- Variable Groups - Store environment variables and secrets
Task Execution
- Task Templates - Create reusable task definitions
- Ansible - Run Ansible playbooks
- Terraform/OpenTofu - Infrastructure provisioning
- Terragrunt - DRY Terraform configurations
- Terramate - Terraform stack orchestration
- Packer - Build golden images
- Pulumi - Modern IaC
- Shell/Bash - Execute shell scripts
- PowerShell - Run PowerShell scripts
- Python - Execute Python scripts
- Tasks - Run and monitor task execution
- Schedules - Automate task execution
Enterprise Features
Compliance & Security
- Compliance Overview - Introduction to compliance features
- STIG Compliance - DISA STIG compliance management
- STIG Viewer - Interactive finding management
- STIG Import - Import STIG checklists
- Policy Packs - Automated remediation
- Remediation Coverage - Track automation coverage
- Manual Task Assignment - Bulk assignment
- OpenSCAP Compliance - SCAP-based compliance scanning
- SCAP Content - Upload and manage SCAP files
- Compliance Policies - Create scan policies
- Compliance Scans - Run compliance scans
- Compliance Reports - View scan results
- Compliance Frameworks - Multiple framework support
Golden Image Management
- Golden Images Overview - Introduction to golden images
- Packer Templates - Manage Packer templates
- Visual Builder - Create templates visually
- HCL Editor - Advanced template editing
- Image Catalog - Browse built images
- STIG Hardening - Automated compliance
- Cloud Providers - Multi-cloud support
Bare Metal Automation
- Bare Metal Overview - Introduction to bare metal automation
- PXE Boot Deployment - Network-based installation
- ISO Installation - Custom ISO deployment
- Golden Image Deployment - Image-based deployment
- BMC Management - Out-of-band management
- GigaIO Integration - Composable infrastructure
Infrastructure Import
- Terraformer Overview - Import existing infrastructure
- Terraformer - Infrastructure import tool
- Import Workflows - Best practices
Integrations
- Integrations Overview - Connect external systems
- Webhooks - HTTP webhook triggers
- GitHub - GitHub integration
- Bitbucket - Bitbucket integration
- Terramate - Terramate orchestration
- Terraformer - Infrastructure import
- GigaIO FabreX - Composable infrastructure
Quick Reference
Common Workflows
Running an Ansible Playbook:
- Create a project
- Add a repository with your playbook
- Create an Ansible task template
- Add inventory and credentials
- Run the task
Building a Golden Image:
- Create a project
- Add cloud provider credentials
- Use Visual Builder or import a Packer template
- Configure STIG hardening (optional)
- Build the image
- View in Image Catalog
Managing STIG Compliance:
- Import a STIG checklist (CKL file)
- Install a Policy Pack for automated remediation
- Assign remediation templates to manual findings
- Run remediation tasks
- Export updated CKL for certification
Importing Infrastructure:
- Configure Terraformer in Admin Settings
- Add cloud provider credentials
- Use Import Infrastructure wizard
- Select resources and filters
- Save as Template or Repository
Next Steps
- Administration Guide - System administration
- FAQ - Common questions and troubleshooting
Getting Started with Forge
This guide will help you get started with Forge and run your first automation task.
Prerequisites
- Forge installed and running (see Installation Guide)
- Access to the Forge web UI
- Admin user account created
Key Concepts
Before we begin, let's understand some key Forge concepts:
Projects
Projects are containers for organizing your automation work. All resources (templates, tasks, inventories, keys) belong to a project.
Task Templates
Reusable definitions of tasks that can be executed. Templates define what to run (Ansible playbook, Terraform code, script, etc.) and how to run it.
Tasks
Specific instances of task template execution. Each time you run a template, it creates a task with logs and results.
Inventories
Collections of target hosts where tasks will execute. Can be static (file-based) or dynamic (API-based).
Key Store
Secure storage for credentials, SSH keys, and secrets. All credentials are encrypted.
Variable Groups
Environment variables and secrets that can be used by tasks during execution.
Step 1: Create Your First Project
- Log in to Forge
- Click New Project (or Projects → New Project)
- Fill in project details:
- Name: "My First Project"
- Description: (optional)
- Click Create
Step 2: Add Credentials
Before running tasks, you need to add credentials for accessing your systems.
- In your project, navigate to Key Store
- Click New Key
- Choose key type:
- SSH Key - For SSH access to Linux servers
- Login with password - For password-based authentication
- AWS - For AWS cloud access
- Azure - For Azure cloud access
- GCP - For Google Cloud access
- Fill in the required information
- Click Save
Step 3: Add an Inventory
Define the hosts where your tasks will run.
- Navigate to Inventories
- Click New Inventory
- Choose inventory type:
- Static - File-based inventory
- Dynamic - API-based (NetBox, etc.)
- Add hosts manually or import from file
- Click Save
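A static inventory typically uses the standard Ansible INI format. A minimal example with illustrative host names:

```
[web]
web1.example.com
web2.example.com

[db]
db1.example.com ansible_user=deploy
```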
Step 4: Connect a Repository (Optional)
If your playbooks or scripts are in Git:
- Navigate to Repositories
- Click New Repository
- Enter repository URL
- Select authentication method (SSH key, access token, etc.)
- Click Save
Step 5: Create a Task Template
Let's create a simple task template:
- Navigate to Task Templates
- Click New Template
- Choose template type (e.g., Ansible, Shell, Terraform)
- Configure the template:
- Name: "Hello World"
- Repository: (select if using Git)
- Playbook/File: Path to your playbook or script
- Inventory: Select your inventory
- Key: Select your SSH key
- Click Save
Step 6: Run Your First Task
- Find your task template in the list
- Click Run (or the Build/Deploy button)
- Review task parameters
- Click Run Task
- Watch the task execute in real-time
- View logs and results when complete
Next Steps
Now that you've run your first task, explore more features:
- Projects - Learn more about project management
- Task Templates - Create more complex templates
- Compliance - Manage STIG compliance
- Golden Images - Build hardened images
- Schedules - Automate task execution
Common Workflows
Running an Ansible Playbook
- Create project
- Add repository with playbook
- Add inventory with target hosts
- Add SSH key for access
- Create Ansible task template
- Run the template
Building Infrastructure with Terraform
- Create project
- Add repository with Terraform code
- Add cloud provider credentials (AWS, Azure, GCP)
- Create Terraform task template
- Run terraform plan to preview
- Run terraform apply to deploy
Building a Golden Image
- Create project
- Add cloud provider credentials
- Navigate to Golden Images
- Use Visual Builder or import Packer template
- Configure STIG hardening (optional)
- Build the image
- View in Image Catalog
Getting Help
- User Guide - Comprehensive feature documentation
- Administration Guide - System administration
- FAQ - Common questions and troubleshooting
Projects
A project is a way to separate and organize management activity.
All Forge activities occur within the context of a project.
Projects are independent from one another, so you can use them to organize unrelated systems within a single Forge installation.
This can be useful for managing different teams, infrastructures, environments or applications.

Forge Project Creation Guide
Overview
This guide provides comprehensive documentation on how to create projects in Forge, a modern UI for Ansible, Terraform, OpenTofu, Bash, PowerShell, and other DevOps tools. Projects in Forge serve as containers for organizing your automation workflows, compliance frameworks, and infrastructure management tasks.
Table of Contents
- Prerequisites
- Project Creation Methods
- Web UI Project Creation
- API Project Creation
- Project Types
- Project Configuration
- Troubleshooting
Prerequisites
Before creating a project in Forge, ensure you have:
- Admin privileges OR non-admin project creation enabled (NonAdminCanCreateProject configuration)
- Access to the Forge web interface or API
- Required permissions for the target environment
Project Creation Methods
Forge supports multiple ways to create projects:
1. Web UI (Recommended)
- Interactive project builder with guided setup
- Visual configuration options
- Immediate feedback and validation
2. API Endpoints
- Programmatic project creation
- Integration with CI/CD pipelines
- Bulk project creation
3. CLI Commands
- Command-line project creation
- Scripted automation
- Headless environments
Web UI Project Creation
Accessing the Project Builder
- Login to your Forge instance
- Navigate to the main dashboard
- Click the "New Project" or "Create Project" button
- Access the Project Builder interface
Project Builder Interface
The Project Builder uses a tabbed interface with the following sections:
Tab 1: Project Details
Required Fields:
- Project Name (projectName): Unique identifier for your project
- Environment (environment): Target deployment environment (Development, Staging, Production, etc.)
Optional Fields:
- Project Description (projectDescription): Detailed description (max 500 characters)
- Alert Settings: Configure notifications and alerts
- Max Parallel Tasks: Limit concurrent task execution
Example:
Project Name: "web-app-deployment"
Environment: "Production"
Description: "Automated deployment pipeline for web application with CI/CD integration"
Tab 2: Compliance Framework (Optional)
Configure compliance and security frameworks:
Available Frameworks:
- CIS Benchmarks: Center for Internet Security benchmarks
- NIST: National Institute of Standards and Technology
- PCI DSS: Payment Card Industry Data Security Standard
- HIPAA: Health Insurance Portability and Accountability Act
- SOX: Sarbanes-Oxley Act
- Custom: User-defined compliance frameworks
Configuration Options:
- Compliance Source: Choose from available sources (default: ansible-lockdown)
- Framework: Select specific compliance framework
- Operating System: Target OS for compliance (Linux, Windows, macOS)
- STIG Support: Enable Security Technical Implementation Guides
Example Configuration:
{
"complianceFramework": "CIS",
"complianceOS": "Ubuntu 20.04",
"enableSTIG": true,
"complianceSource": "ansible-lockdown"
}
Tab 3: Cloud Provider (Optional)
Configure cloud provider integration:
Supported Providers:
- AWS: Amazon Web Services
- Azure: Microsoft Azure
- GCP: Google Cloud Platform
- DigitalOcean: DigitalOcean Cloud
- Linode: Linode Cloud
Provider-Specific Configuration:
AWS Configuration:
{
"cloudProvider": "AWS",
"aws": {
"region": "us-east-1",
"vpcId": "vpc-12345678",
"subnetId": "subnet-12345678",
"securityGroups": ["sg-12345678"],
"keyPairName": "my-key-pair"
}
}
Azure Configuration:
{
"cloudProvider": "Azure",
"azure": {
"subscriptionId": "12345678-1234-1234-1234-123456789012",
"resourceGroup": "my-resource-group",
"location": "East US",
"vnetName": "my-vnet",
"subnetName": "my-subnet"
}
}
Tab 4: Kubernetes (Optional)
Configure Kubernetes cluster integration:
Cluster Types:
- Managed Kubernetes: EKS, AKS, GKE
- Self-Managed: On-premises or custom clusters
- Development: Local development clusters
Configuration Options:
- Cluster Type: Select deployment model
- Node Count: Number of worker nodes
- Additional Software:
- Observability (monitoring, logging)
- Service Mesh (Istio, Linkerd)
- Certificate Manager
- Gateway API
- Nginx Ingress Proxy
Example Configuration:
{
"kubernetesType": "EKS",
"nodeCount": 3,
"additionalSoftware": {
"observability": true,
"serviceMesh": false,
"certificateManager": true,
"gatewayApi": false,
"nginxIngressProxy": true
}
}
Creating the Project
- Fill Required Fields: Complete at least the Project Details tab
- Navigate Tabs: Use "Next" and "Back" buttons to configure optional sections
- Validate Input: Ensure all required fields are completed
- Create Project: Click the "Create" button
- Confirmation: Review the created project and access project dashboard
API Project Creation
Endpoint
POST /api/projects
Content-Type: application/json
Authorization: Bearer <your-token>
Request Body Structure
Basic Project Creation
{
"name": "string (required)",
"description": "string (optional)",
"environment": "string (optional)",
"alert": "boolean (optional, default: false)",
"alert_chat": "string (optional)",
"max_parallel_tasks": "integer (optional, default: 0)",
"demo": "boolean (optional, default: false)"
}
Compliance Project Creation
{
"name": "string (required)",
"description": "string (optional)",
"environment": "string (optional)",
"complianceFramework": "string (required for compliance)",
"complianceOS": "string (required for compliance)",
"complianceSource": "string (optional, default: ansible-lockdown)",
"enableSTIG": "boolean (optional, default: false)"
}
Advanced Project Creation
{
"name": "string (required)",
"description": "string (optional)",
"environment": "string (optional)",
"alert": "boolean (optional)",
"alert_chat": "string (optional)",
"max_parallel_tasks": "integer (optional)",
"demo": "boolean (optional)",
"complianceFramework": "string (optional)",
"complianceOS": "string (optional)",
"complianceSource": "string (optional)",
"enableSTIG": "boolean (optional)",
"cloudProvider": "string (optional)",
"kubernetesType": "string (optional)",
"kubernetesConfig": {
"nodeCount": "integer (optional)",
"additionalSoftware": {
"observability": "boolean (optional)",
"serviceMesh": "boolean (optional)",
"certificateManager": "boolean (optional)",
"gatewayApi": "boolean (optional)",
"nginxIngressProxy": "boolean (optional)"
}
}
}
Example API Calls
Create Basic Project
curl -X POST "https://your-forge-instance.com/api/projects" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-token" \
-d '{
"name": "my-automation-project",
"description": "Automated infrastructure management",
"environment": "Production",
"alert": true,
"max_parallel_tasks": 5
}'
Create Compliance Project
curl -X POST "https://your-forge-instance.com/api/projects" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-token" \
-d '{
"name": "compliance-audit-project",
"description": "CIS compliance auditing for Linux servers",
"environment": "Production",
"complianceFramework": "CIS",
"complianceOS": "Ubuntu 20.04",
"enableSTIG": true
}'
Create Demo Project
curl -X POST "https://your-forge-instance.com/api/projects" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-token" \
-d '{
"name": "demo-project",
"description": "Demo project with sample templates",
"demo": true
}'
Response Format
Success Response (201 Created):
{
"id": 123,
"name": "my-automation-project",
"description": "Automated infrastructure management",
"environment": "Production",
"alert": true,
"alert_chat": null,
"max_parallel_tasks": 5,
"created": "2024-01-15T10:30:00Z",
"type": null,
"compliance_framework": null,
"compliance_os": null,
"compliance_source": null,
"enable_stig": false
}
Error Response (400 Bad Request):
{
"error": "Project name is required"
}
Error Response (401 Unauthorized):
{
"error": "Not authorized to create projects"
}
Project Types
1. Standard Projects
- Purpose: General automation and infrastructure management
- Use Cases: Application deployment, configuration management, monitoring setup
- Features: Full template support, inventory management, task scheduling
2. Compliance Projects
- Purpose: Security and compliance auditing
- Use Cases: CIS benchmarks, NIST compliance, regulatory requirements
- Features: Pre-configured compliance frameworks, STIG support, automated scanning
3. Demo Projects
- Purpose: Learning and demonstration
- Use Cases: Training, proof-of-concept, feature exploration
- Features: Sample templates, pre-configured examples, educational content
4. Cloud Provider Projects
- Purpose: Cloud infrastructure management
- Use Cases: Multi-cloud deployments, cloud-native applications
- Features: Provider-specific configurations, cloud resource management
5. Kubernetes Projects
- Purpose: Container orchestration management
- Use Cases: Microservices deployment, cluster management, DevOps workflows
- Features: K8s-specific templates, cluster configuration, service mesh support
Project Configuration
Automatic Setup
When a project is created, Forge automatically sets up:
- Project Owner Relationship: Creator becomes project owner
- Default Access Key: "None" key for basic authentication
- Empty Environment: Default environment configuration
- Default Views: Basic project organization views
Demo Project Setup
When creating a demo project (demo: true), Forge automatically creates:
- Sample Repository: Demo Git repository with example playbooks
- Multiple Views: Build, Deploy, and Tools views
- Sample Templates: 8 pre-configured templates including:
- Build Job (Ansible)
- Deploy demo app to Production (Ansible)
- Apply infrastructure (OpenTofu)
- Apply infrastructure (Terragrunt)
- Print system info (Bash)
- Print system info (PowerShell)
- Sample Inventories: Build, Dev, and Prod inventories
- Vault Key: Sample vault password for secrets management
Compliance Project Setup
Compliance projects automatically include:
- Compliance Framework Integration: Pre-configured compliance rules
- OS-Specific Templates: Operating system specific compliance checks
- STIG Integration: Security Technical Implementation Guides (if enabled)
- Compliance Scanning: Automated compliance assessment tools
Project Permissions
User Roles
Projects support the following user roles:
- Owner: Full project access and management
- Manager: Project management and task execution
- Task Runner: Task execution only
- Guest: Read-only access
Permission Matrix
| Action | Owner | Manager | Task Runner | Guest |
|---|---|---|---|---|
| Create Project | ✓ | ✓* | ✗ | ✗ |
| Edit Project | ✓ | ✓ | ✗ | ✗ |
| Delete Project | ✓ | ✗ | ✗ | ✗ |
| Manage Users | ✓ | ✓ | ✗ | ✗ |
| Execute Tasks | ✓ | ✓ | ✓ | ✗ |
| View Tasks | ✓ | ✓ | ✓ | ✓ |
| Manage Templates | ✓ | ✓ | ✗ | ✗ |
| View Templates | ✓ | ✓ | ✓ | ✓ |
*Only if NonAdminCanCreateProject is enabled
Troubleshooting
Common Issues
1. "Not authorized to create projects" Error
Cause: Insufficient permissions Solution:
- Ensure you have admin privileges, OR
- Enable NonAdminCanCreateProject in the configuration
2. "Project name is required" Error
Cause: Missing required field Solution: Provide a valid project name
3. "Project already exists" Error
Cause: Duplicate project name Solution: Use a unique project name
4. Compliance Framework Errors
Cause: Invalid compliance configuration Solution:
- Verify compliance framework is supported
- Ensure OS selection matches available frameworks
- Check compliance source configuration
5. Cloud Provider Configuration Errors
Cause: Invalid cloud provider settings Solution:
- Verify cloud provider credentials
- Check region and resource availability
- Validate network configuration
Debug Mode
Enable debug logging for detailed project creation information:
# Set environment variable
export FORGE_DEBUG=true
# Or configure in config.json
{
"debug": true,
"log_level": "debug"
}
API Debugging
Use verbose curl for API debugging:
curl -v -X POST "https://your-forge-instance.com/api/projects" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-token" \
-d @project-config.json
Best Practices
1. Project Naming
- Use descriptive, consistent naming conventions
- Include environment or purpose in the name
- Avoid special characters and spaces
2. Environment Configuration
- Set appropriate environment labels
- Use consistent environment naming across projects
- Document environment-specific configurations
3. Compliance Projects
- Choose appropriate compliance frameworks
- Enable STIG for enhanced security
- Regularly update compliance templates
4. Cloud Integration
- Use least-privilege access principles
- Configure appropriate regions and availability zones
- Document cloud provider specific settings
5. Documentation
- Always provide project descriptions
- Document custom configurations
- Maintain project-specific documentation
Related Documentation
- User Management Guide
- Template Creation Guide
- Inventory Management Guide
- Task Execution Guide
- API Reference
Support
For additional support:
- Check the Forge GitHub Issues
- Review the Forge Documentation
- Contact the Forge community for assistance
History
The History screen in Forge provides a comprehensive view of all task executions within your project. This feature allows you to track and analyze the execution history of your tasks, providing valuable insight into your automation workflows.

Overview
The History page displays a chronological list of all task executions, including:
- Task templates used
- Execution status (success, failure, in progress)
- Start and end times
- Duration
- User who initiated the task
- Task output and logs
Viewing Task History
Accessing History
- Navigate to your project in Forge
- Click on "History" tab
- View the list of all task executions
Task Details
Clicking on any task in the history list opens a detailed view showing:
- Task Information
- Task ID
- Template used
- Start and end times
- Duration
- Status
- User who ran the task
- Execution Details
- Complete task output
- Error messages (if any)
- Environment variables used
- Inventory information
- Repository details
- Task Logs
- Real-time log viewing
- Log download option
- Log search functionality
- Error highlighting
Statistics
The project provides a statistics page summarizing task outcomes over a selected time range, with filtering by user.
Task Management
Actions Available
From the history view, you can:
- Access complete task logs
- Download task output
- Search within logs
Task Retention
Forge allows you to configure how long task history is retained:
- Default Behavior
- All tasks are stored in the database
- No automatic deletion by default
- Configuring Retention
  - Set maximum tasks per template
  - Configure via environment variable: FORGE_MAX_TASKS_PER_TEMPLATE=30
  - Or via config.json: { "max_tasks_per_template": 30 }
- Retention Rules
- When the limit is reached, oldest tasks are automatically deleted
- Deletion is per template
- Task logs are removed along with task records
Activity
The Activity page provides a comprehensive audit trail of all actions and events that occur within your project. It tracks user activities and system events, giving you complete visibility into what's happening in your project.

Overview
The Activity page displays a chronological feed of all project activities, including:
- User actions (creating, editing, deleting resources)
- System events and notifications
- Access and permission changes
Settings
The Settings page allows you to configure various aspects of your project, including notifications and other project-specific settings. This page is accessible to project administrators and provides centralized management of project configuration.

Runners (Pro)
Project runners can be attached to a project. You can also require a specific runner by tag in a template:
- In Project → Runners, add a runner and specify a tag.
- In Project → Templates → <your template>, set the required runner tag.
This ensures the task is executed on a runner matching the tag.
Project runners (Pro)
Project Runners are a powerful feature in Forge Pro that enables distributed task execution across multiple servers. This feature allows you to run tasks on separate servers from your Forge instance, providing enhanced security, scalability, and resource management.

Overview
Project runners operate on a similar principle to GitLab or GitHub Actions runners:
- A runner is deployed on a server separate from your Forge instance
- The runner connects to your Forge instance using a secure token
- When tasks are created, Forge delegates them to available runners
- Runners execute the tasks and report results back to Forge
Benefits
Using runners provides several key advantages:
- Enhanced Security
- Runners can be deployed in isolated environments or restricted networks
- Sensitive operations can be executed in controlled environments
- Better separation of concerns between UI and execution environments
- Improved Scalability
- Distribute workload across multiple servers
- Add or remove runners based on demand
- Better resource utilization across your infrastructure
- Flexible Deployment
- Deploy runners close to your target infrastructure
- Run tasks in different network zones
- Support for various deployment models (on-premises, cloud, hybrid)
Using Project Runners
Prerequisites
To use runners, you need:
- A Forge Pro license
- A separate server for running the runner
- Network connectivity between the runner and Forge
- Proper configuration on both the Forge and runner servers
Managing Runners
You can manage runners through the Forge UI:
- Navigate to the Runners section in your project
- View all registered runners and their status
- Add or remove runners as needed
- Monitor runner health and performance
Security Considerations
- Always use HTTPS for communication between runners and Forge
- Implement proper network security between runners and Forge
- Consider using isolated environments for sensitive operations
Best Practices
- Resource Planning
- Size your runners appropriately for your workload
- Monitor runner resource usage
- Scale runners based on demand
- Network Configuration
- Ensure proper network connectivity
- Configure firewalls appropriately
- Use secure communication channels
- Maintenance
- Regularly update runner software
- Monitor runner health
- Implement proper logging and monitoring
- Have a backup strategy for runner failures
- Security
- Follow the principle of least privilege
- Implement proper access controls
- Regular security audits
- Keep software up to date
Task Templates
Templates define how to run Forge tasks. Supported task types include Ansible, Terraform/OpenTofu, Terragrunt, Terramate, Packer, Pulumi, PowerShell, Shell, and Python.
Parallel tasks
By default, tasks from the same template execute sequentially. To allow concurrent runs of the same template, enable the "Allow parallel tasks" option in the template settings.
Ansible
Using Forge you can run Ansible playbooks. To do this, you need to create an Ansible Playbook Template.
- Go to the Task Templates section, click New Template, and then Ansible Playbook.

- Set up the template.
The template allows you to specify the following parameters:
- Repository
- Path to playbook file
- Inventory
- Variable Groups
- Vaults
- and much more

An ansible-playbook template can be one of the following types:
Task
Just runs specified playbooks with specified parameters.
If you intend to launch the template via an API call using the limit feature, make sure to enable the Ansible prompts: Limit option; otherwise the limit set in the API call will be ignored. An API-triggered task will not show any interactive prompt; the task runs unattended.
Build
This type of template should be used to create artifacts. The start version of the artifact can be specified in a template parameter. Each run increments the artifact version.
Forge doesn't support artifacts out of the box; it only provides task versioning. You should implement artifact creation yourself. Read the CI/CD article to learn how.
Deploy
This type of template should be used to deploy artifacts to the destination servers. Each deploy template is associated with a build template.

This allows you to deploy a specific version of the artifact to the servers.
Schedule
You can set up task scheduling by specifying a cron expression in the template settings. The cron expression format is described in the cron documentation.
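Cron expressions use the standard five-field format (minute, hour, day of month, month, day of week). A few common examples:

```
*/5 * * * *   # every 5 minutes
0 2 * * *     # daily at 02:00
0 9 * * 1-5   # weekdays at 09:00
```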

Run a task when a new commit is added to the repository
You can use cron to periodically check for new commits in the repository and trigger a task upon their arrival.
For example, if your application's source code lives in a Git repository, you can add it to Repositories and trigger a Build task whenever new commits arrive.

Tags, skip-tags and limit
Templates support Ansible CLI options:
- --tags
- --skip-tags
- --limit
These can be set in the template and overridden when creating a task. Ensure corresponding prompts are enabled if you plan to pass these values via API.
Multiple vault passwords
You can attach multiple Vault passwords from the Key Store to a template. During execution, Ansible will attempt to decrypt using the provided passwords.
Verbosity level
You can adjust Ansible verbosity for a task (for example -v, -vvv) from the template/task form to aid troubleshooting.
Terraform/OpenTofu
Using Forge you can run Terraform code. To do this, you need to create a Terraform Code Template.
- Go to the Task Templates section and click the New Template button.

- Set up the template and click the Create button.

- You can now run your Terraform code.
Workspaces
Forge supports Terraform/OpenTofu workspaces natively. See Workspaces for creating and switching workspaces and integrating SSH keys for private modules.
Backend override and HTTP backend (Pro)
You can enable the option to override backend settings in a template to use the built-in HTTP backend without modifying your Terraform code. For using the HTTP backend outside of Forge, create a backend alias and add the generated address, username and password to your Terraform configuration. See HTTP Backend (Pro) for details.
Destroy flag and state migration
The Terraform task form supports -destroy and -migrate-state flags. Use them when planning or destroying infrastructure, or when migrating state.
Workspaces
Forge provides built-in support for Terraform workspaces, allowing you to manage multiple environments and configurations within a single project. This feature helps you maintain separate state files for different environments like development, staging, and production.
Features
- Workspace Management: Create, switch, and delete workspaces directly from the Forge UI.
- State Isolation: Each workspace maintains its own state file, preventing conflicts between environments.
- Environment Variables: Configure workspace-specific environment variables.
- Workspace Selection: Choose the target workspace when running Terraform commands.
Using Workspaces in Forge
Creating a Workspace
In the Workspaces section of the Terraform/OpenTofu template where you want to add a workspace, follow these steps:
- Click the ➕ button.
- In the menu that appears, select New Workspace.
- In the modal dialog, enter the workspace name and select the SSH key to be used for cloning modules.
- Click the Create button to add the new workspace to the template.
- You can now use this workspace to run tasks.

Switching workspaces
You can set the default workspace for a Terraform/OpenTofu template by clicking the MAKE DEFAULT button.

Workspace-specific variables
Forge currently does not support workspace-specific variables.
HTTP Backend (Pro)
The Forge HTTP backend for Terraform securely stores and manages Terraform state files directly within Forge. Available in the Pro plan, it offers several key advantages.
Features
- Secure State Storage: State files are stored securely within Forge.
- State Locking: Prevents concurrent modifications to the same state file.
- Version History: Track changes to your infrastructure state over time.
- UI Integration: Manage state files directly through the Forge interface.
Configuration
To start using the built-in HTTP backend, you first need to create a workspace for your Terraform task template.
To add a workspace, go to the Workspaces tab of your Terraform/OpenTofu template.
When creating a workspace, you will be prompted to select an SSH key for cloning private modules used in your Terraform code. If you do not use any private modules, simply select the None option.
Using the HTTP backend in tasks
To use the built-in HTTP backend for storing the state of your Terraform tasks, you do not need to manually configure the backend in your Terraform code. Forge can automatically create the configuration file during execution. To enable this, simply check the Override backend settings option in your task template settings, as shown in the screenshot below.

Optionally, you can specify the name of the configuration file that will be dynamically created during execution. This is useful if your code already contains a backend configuration file and you need to override it dynamically to work with Forge's built-in backend.
Using the HTTP backend outside Forge
You can use the built-in HTTP backend not only when running tasks inside Forge, but also when executing Terraform code outside of Forge, such as from your local terminal.
To enable this, Forge allows you to create aliases (unique HTTP endpoints) for your state storage. These aliases make it easy to reference your state files from external environments.
To set this up, go to the Workspaces tab, select the desired workspace, and add an alias. You will also need to choose a key with a username and password, which will be used to authenticate access to the backend.
After this, you need to add the backend settings to your Terraform code:
terraform {
  backend "http" {
    address  = "http://localhost:3000/api/terraform/***"
    username = "***"
    password = "***"
  }
}
Now Terraform will use Forge's built-in HTTP backend even when running from your terminal:
terraform apply
Shell/Bash scripts
Using Forge you can run Bash scripts. To do this, you need to create a Bash Script Template.
- Go to the Task Templates section and click the New Template button.

- Set up the template and click the Create button.

- You can now run your Bash script.
PowerShell
Python

Tasks
A task is an instance of a template run (for example, launching an Ansible playbook). You can create a task from a Task Template by clicking the Run/Build/Deploy button for the required template.
The Deploy task type allows you to specify a version of the build associated with the task. By default, it is the latest build version.
While a task is running, and after it has finished, you can see the task status and the running log.
Raw log view
You can open the unprocessed raw task log from the task log window via the RAW LOG action.
Tasks log retention
Logs of previous task runs are available in the task's template view and on the dashboard.
By default, log retention is unlimited.
You can limit it using the max_tasks_per_template parameter in config.json or the FORGE_MAX_TASKS_PER_TEMPLATE environment variable.
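Both configuration routes can be sketched as follows; the value 30 is illustrative, and the config.json path is written to a temporary file here only for demonstration:

```shell
# Option 1: environment variable for the Forge service.
export FORGE_MAX_TASKS_PER_TEMPLATE=30

# Option 2: key in config.json (restart the Forge service after editing).
cat > /tmp/forge-retention-snippet.json <<'EOF'
{
  "max_tasks_per_template": 30
}
EOF
```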
Schedules
The schedule function in Forge allows you to automate the execution of templates (e.g. playbook runs) at predefined intervals. This makes it possible to implement routine automation tasks such as regular backups, compliance checks, system updates, and more.
Note: after changing schedule-related configuration, restart the Forge service for the changes to take effect.
Setup and configuration
Timezone configuration
By default, the schedule feature operates in the UTC timezone. However, this can be customized to match your local timezone or specific requirements.
You can change the timezone by updating the configuration file or setting an environment variable:
- Using the configuration file:
  Add or update the timezone field in your Forge configuration file:

  { "schedule": { "timezone": "America/New_York" } }

- Using an environment variable:
  Set the FORGE_SCHEDULE_TIMEZONE environment variable:

  export FORGE_SCHEDULE_TIMEZONE="America/New_York"
For a list of valid timezone values, refer to the IANA Time Zone Database.
Accessing the schedule feature
- Log in to your Forge web interface
- Navigate to the "Schedule" tab in the main navigation menu
- Click the "New Schedule" button in the top right corner to create a new schedule

Creating a new schedule
When creating a new schedule, you'll need to configure the following options:
| Field | Description |
|---|---|
| Name | A descriptive name for the scheduled task |
| Template | The specific Task Template to execute |
| Timing | Either in cron format for more flexibility, or using the built-in options for common intervals |

Cron format syntax
The schedule uses standard cron syntax with five fields:
┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-6) (Sunday=0)
│ │ │ │ │
* * * * *
Examples:
- */15 * * * * - Run every 15 minutes
- 0 2 * * * - Run at 2:00 AM every day
- 0 0 * * 0 - Run at midnight on Sundays
- 0 9 1 * * - Run at 9:00 AM on the first day of every month
A very helpful cron expression generator: https://crontab.guru/
Use cases
System maintenance
# Example playbook for system updates
---
- hosts: all
  become: yes
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
    - name: Upgrade all packages
      apt:
        upgrade: yes
    - name: Remove dependencies that are no longer required
      apt:
        autoremove: yes
Schedule this playbook to run weekly during off-hours to ensure systems stay up-to-date.
Backup operations
Create schedules for database backups with different frequencies:
- Daily backups that retain for one week
- Weekly backups that retain for one month
- Monthly backups that retain for one year
Compliance checks
Schedule regular compliance scans to ensure systems meet security requirements:
# Example compliance check playbook
---
- hosts: all
  tasks:
    - name: Run compliance checks
      script: /path/to/compliance_script.sh
    - name: Collect compliance reports
      fetch:
        src: /var/log/compliance-report.log
        dest: reports/{{ inventory_hostname }}/
        flat: yes
Environment provisioning and cleanup
For development or testing environments, schedule cloud environment creation in the morning and teardown in the evening to optimize costs.
Best practices
- Use descriptive names for schedules that indicate both function and timing (e.g. "Weekly-Backup-Sunday-2AM")
- Avoid scheduling too many resource-intensive tasks concurrently
- Consider the effect of long-running scheduled tasks on other schedules
- Test schedules with short intervals before setting up production schedules with longer intervals
- Document the purpose and expected outcomes of scheduled tasks
Task parameters
Schedules can pass parameters to tasks. Enable prompts for the required fields in the template, then define parameter values in the schedule configuration so each run supplies the desired overrides (for example branch, variables, flags).
Key Store
The Key Store in Forge is used to store credentials for accessing remote Repositories, accessing remote hosts, sudo credentials, and Ansible vault passwords.
It is helpful to configure all required access keys before setting up other resources like Inventories, Repositories, and task templates, so you do not have to edit them later.
Types
1. SSH
SSH Keys are used to access remote servers as well as remote Repositories.
If you need assistance quickly generating a key and placing it on your host, here is a quick guide.
For Git Repositories that use SSH authentication, the Git host you are cloning from needs to have the public key that corresponds to your private key.
Below are links to the docs for some common Git Repositories:
2. Login With Password
Login With Password is a username and password/access token combination that can be used to do the following:
- Authenticate to remote hosts (although this is less secure than using SSH keys)
- Sudo credentials on remote hosts
- Authenticate to remote Git Repositories over HTTPS (although SSH is more secure)
- Unlock Ansible vaults
3. None
This is used as a filler for Repos that do not require authentication, like an Open-Source Repository on GitLab.
GitLab
GitLab repository access token
Inventory
An Inventory is a file that contains a list of hosts Ansible will run plays against. An Inventory also stores variables that can be used by playbooks. An Inventory can be stored in YAML, JSON, or TOML. More information about Inventories can be found in the Ansible Documentation.
Forge can either read an Inventory from a file on the server (the file must be readable by the Forge user), or store a static Inventory that is edited via the web GUI. Each Inventory has at least one credential tied to it. The user credential is required; it is what Ansible uses to log into hosts in that Inventory. Sudo credentials are used for escalating privileges on those hosts. To create an Inventory, you must have a user credential in the Key Store, either a login with password or an SSH key. Information about credentials can be found in the Key Store section of this site.
Creating an Inventory
- Click on the Key Store tab and confirm you have a key that is a login_password or ssh type
- Click on the Inventory tab and click New Inventory
- Name the Inventory and select the correct user credential from the dropdown. Select the correct sudo credential, if needed
- Select the Inventory type
- If you select file, use the absolute path to the file. If the file is located in your Git repository, use a relative path, e.g. inventory/linux-hosts.yaml
- If you select static, paste in or type your Inventory into the form
- Click Create.
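For the static option, a minimal Inventory body pasted into the form might look like the following sketch; host names, group names, and addresses are illustrative:

```shell
# Print a minimal static YAML inventory (illustrative values only).
INV=$(cat <<'EOF'
all:
  children:
    web:
      hosts:
        web01:
          ansible_host: 192.0.2.10
        web02:
          ansible_host: 192.0.2.11
EOF
)
echo "$INV"
```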
Updating an Inventory
- Click on the Inventory tab
- Click the Pencil Icon next to the Inventory you want to edit
- Make your changes
- Click Save
Deleting an Inventory
Before you remove an Inventory, you must remove all resources tied to it. If you are not sure which resources are tied to an Inventory, follow steps 1 and 2 below; Forge will show you which resources are in use, with links to those resources.
- Click on the Inventory tab
- Click the trash can icon next to the Inventory
- Click Yes if you are sure you want to remove the Inventory
Kerberos authentication
Forge supports Kerberos authentication when running playbooks against Windows hosts via WinRM.
Inventory configuration
[windows]
hostname
[windows:vars]
ansible_port=5985
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
ansible_winrm_transport=ntlm
ansible_winrm_kinit_mode=managed
ansible_winrm_scheme=http
Also make sure:
- A username and password are provided (Forge credentials)
- The user format is domain\\username (e.g., CORP\\admin) if needed
The key setting is:
ansible_winrm_kinit_mode=managed
This tells Ansible to automatically acquire a Kerberos ticket using the provided username/password without requiring you to manually run kinit.
Example Playbook
- hosts: all
gather_facts: false
tasks:
- win_ping:
This verifies basic connectivity using WinRM + Kerberos.
Forge host requirements
On the Forge host, install the following packages:
sudo apt install libkrb5-dev krb5-user
Then edit /etc/krb5.conf and set your default realm (domain name):
[libdefaults]
default_realm = YOUR.DOMAIN.NAME
This must match your Active Directory domain.
Notes
- You do not need to run kinit manually; Ansible handles ticket acquisition when ansible_winrm_kinit_mode=managed is set.
- Works with the default NTLM transport (no SSL needed if using HTTP and cert_validation=ignore).
Netbox Dynamic Inventory Integration with Forge
🛠 Key Features
This guide demonstrates the use of the netbox.netbox.nb_inventory plugin to create a dynamic inventory in Forge. It enables automatic synchronization of data from Netbox, simplifying the management of your infrastructure and the execution of Ansible playbooks.
🔧 Setup
Requirements
- Access to Forge
- Access to Netbox with configured API
🔑 Netbox Setup
Ensure your Netbox is configured and accessible for API interaction. Obtain an API token which will be used to authenticate requests.
📡 Configuration in Forge
- In Forge, go to the Inventory section.

- Create a new inventory.

- Enter the following settings for the plugin configuration:

plugin: netbox.netbox.nb_inventory
api_endpoint: http://your_netbox_url_here
token: YOUR_NETBOX_API_TOKEN
validate_certs: False
config_context: False

Replace http://your_netbox_url_here and YOUR_NETBOX_API_TOKEN with the actual data from your Netbox.
🚀 Usage
Once configured, you can run Ansible playbooks in Forge using the dynamic inventory which automatically updates host data from your Netbox.
📚 Further Documentation
Learn more about the netbox.netbox.nb_inventory plugin and its capabilities in the official Ansible documentation.
Variable Groups
The Variable Groups section of Forge is a place to store additional variables for an Inventory; variables must be stored in JSON format.
All task templates require a variable group to be defined even if it is empty.
Create a variable group
- Click on the Variable Group tab.
- Click on the New Variable Group button.
- Name the Variable Group and type or paste in valid JSON variables. If you just need an empty Variable Group, type in {}.
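A small non-empty Variable Group body is just plain JSON; the keys below are illustrative:

```shell
# Print an example variable-group body (illustrative keys and values).
VARS='{
  "app_port": 8080,
  "deploy_user": "deploy"
}'
echo "$VARS"
```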
Updating a variable group
- Click on the Variable Groups tab.
- Click the pencil icon.
- Make changes and click save.
Deleting a variable group
Before you remove a variable group, you must remove all resources tied to it. If you are not sure which resources are tied to a variable group, follow steps 1 and 2 below; Forge will show you which resources are in use, with links to those resources.
- Click on the Variable Group.
- Click the trash can icon next to the Variable Group.
- Click Yes if you are sure you want to remove the variable group.
Using Variable Groups - Terraform/OpenTofu
When you want to utilize a stored Variable Group variable or secret in your Terraform template, you must prefix its name with TF_VAR_ for Terraform to pick it up.
Example: passing a Hetzner Cloud API key to an OpenTofu/Terraform template.
- Click on Variable Group
- Click New Group
- Click on the Secrets tab
- Add TF_VAR_hcloud_token and put your secret in the hidden field
- Click Save
The secret TF_VAR_hcloud_token is then referenced as var.hcloud_token in hetzner.tf:
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.45"
    }
  }
}

# Declare the variable
variable "hcloud_token" {
  type        = string
  description = "Hetzner Cloud API token"
  sensitive   = true # This prevents the token from being displayed in logs
}

provider "hcloud" {
  token = var.hcloud_token
}

# Create a new server running Ubuntu
resource "hcloud_server" "webserver" {
  name        = "webserver"
  image       = "ubuntu-24.04"
  server_type = "cpx11"
  location    = "ash"
  ssh_keys    = ["mysshkey"]
  public_net {
    ipv4_enabled = true
    ipv6_enabled = true
  }
}
Repositories
A Repository is a place to store and manage Ansible content like playbooks and roles.

Forge understands Repositories that are:
- a local file system (/path/to/the/repo)
- a local Git repository (file://)
- a remote Git Repository accessed over HTTPS (https://) or SSH (ssh://). The git:// protocol is supported, but not recommended for security reasons.
All Task Templates require a Repository in order to run.
Authentication
If you are using a remote Repository that requires authentication, you will need to configure a key in the Key Store section of Forge.
For remote Repositories that use SSH, you will need to use your SSH key in the Key Store.
For Remote Repositories that do not have authentication, you can create a Key with the type of None.
Creating a New Repository
- Make sure you have configured the key for the Repository you are about to add in the Key Store section.

- Go to the Repositories section of Forge and click the New Repository button in the upper right hand corner.

- Configure the Repository:
  - Name the Repository
  - Add the URL. The URL must start with one of the following:
    - /path/to/the/repo for a local folder on the file system
    - https:// for a remote Git Repository accessed over HTTPS
    - ssh:// for a remote Git Repository accessed over SSH
    - file:// for a local Git Repository
    - git:// for a remote Git Repository accessed over the Git protocol
  - Set the branch of the Repository; if you are not sure what it should be, it is probably master or main
  - Select the Access Key you configured prior to setting up this Repository.

- Click Save once everything is configured.
Editing an Existing Repository
- Go to the Repositories section of Forge.

- Click on the pencil icon next to the Repository you wish to change; you will be presented with the Repository configuration.
Deleting a Repository
Make sure the Repository that is about to be deleted is not in use by any Task Templates. A Repository cannot be deleted while it is used by a Task Template:
- Go to the Repositories section of Forge.

- Click on the trash can icon next to the Repository you wish to delete.

- Click Yes on the confirmation pop-up if you are sure you want this Repository to be deleted.
Requirements
Upon project initialization, Forge searches for and installs Ansible roles and collections from requirements.yml files in the following locations, in the order listed.
Roles
- <playbook_dir>/roles/requirements.yml
- <playbook_dir>/requirements.yml
- <repo_path>/roles/requirements.yml
- <repo_path>/requirements.yml
Collections
- <playbook_dir>/collections/requirements.yml
- <playbook_dir>/requirements.yml
- <repo_path>/collections/requirements.yml
- <repo_path>/requirements.yml
Processing Logic
- Each file is processed independently
- If a file exists, it will be processed according to its type (role or collection)
- If any file processing results in an error, the installation process stops and returns the error
- The same requirements.yml file in the root directories (<playbook_dir>/requirements.yml and <repo_path>/requirements.yml) is processed twice - once for roles and once for collections
Forge will attempt to process all these locations regardless of whether previous locations were found or successfully processed, except in the case of errors.
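A single requirements.yml at the repository root can carry both roles and collections; per the processing rules above, that file is read twice, once for each type. The role and collection names below are illustrative:

```shell
# Print an example requirements.yml covering both roles and collections.
REQ=$(cat <<'EOF'
roles:
  - name: geerlingguy.docker
collections:
  - name: community.general
EOF
)
echo "$REQ"
```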
Bitbucket Access Token
You can use a Bitbucket Access Token in Forge to access repositories from Bitbucket.
First, you need to create an Access Token for your Bitbucket repository with read access permissions.

After creation, you will see the access token. Copy it to your clipboard as it will be required for creating an Access Key in Forge.

- Go to the Key Store section in Forge and click the New Key button.

- Choose Login with password as the type of key.

- Enter x-token-auth as the Login and paste the previously copied token into the Password field. Save the key.

- Go to the Repositories section and click the New Repository button.

- Enter the HTTPS URL of the repository (https://bitbucket.org/path/to/repo), enter the correct branch, and select the previously created Access Key.
Integrations
Integrations allow establishing interaction between Forge and external services, such as GitHub and GitLab.

Using an integration, you can trigger a specific template by calling a special endpoint (alias), for which you can configure one of the following authentication methods:
- GitHub Webhooks
- Token
- HMAC
- No authentication
The alias is a URL of the form /api/integrations/<random_string>. It supports GET and POST requests.
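For HMAC authentication, the sender signs the request body with the shared secret configured for the integration. The sketch below shows how such a signature is typically computed; the header that carries it depends on the sender (GitHub uses X-Hub-Signature-256, for example), so treat the details as an assumption to verify against your integration settings.

```shell
# Compute an HMAC-SHA256 signature over a webhook body (illustrative values).
SECRET="my-shared-secret"
BODY='{"ref":"refs/heads/main"}'

# openssl prints "...(stdin)= <hex>"; take the last field as the signature.
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "$SIG"
```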
Matchers
With matchers, you can define parameters of the incoming request. When these parameters match, the template will be invoked.
Value Extractors
With an extractor, you can extract the necessary data from the incoming request and pass it to the task as environment variables. For the extracted variables to be passed to the task, you must create an environment with the corresponding keys. Ensure that the environment keys match the variables defined in the extractor, as this allows the task to receive and use the correct environment variables.
Task parameters
Integrations can trigger tasks with parameters. Use value extractors to build a JSON payload for task parameters and configure the template to accept prompted values.
Notes on aliases and matchers
For integrations configured with an alias endpoint, matchers are not used. Prefer token/HMAC authentication as needed and pass parameters via extractors.
Teams
In Forge, every project is associated with a Team. Only team members and admins can access the project. Each member of the team is assigned one of four predefined roles, which govern their level of access and the actions they can perform.
Team roles
Every team member has exactly one of these four roles:
- Owner
- Manager
- Task Runner
- Guest
Below are detailed descriptions of each role and its permissions.
Owner
- Full permissions: Owners can do anything within the project, including managing roles, adding/removing members, and configuring any project settings.
- Multiple owners: A project can have multiple Owners, ensuring there is more than one person with full privileges.
- Restrictions on self-removal: An Owner cannot remove themselves if they are the only Owner of the project. This prevents the project from being left without an Owner.
- Managing other owners: Owners can manage (including remove or change roles of) all team members, including other Owners.
Manager
- Broad project control: Managers have almost the same permissions as Owners, allowing them to handle most day-to-day tasks and manage the project environment.
- Managers cannot:
  - Remove the project.
  - Remove or change the roles of Owners.
- Typical use case: Assign the Manager role to senior team members who need extensive access but don’t require the authority to delete the project or manage Owners.
Task Runner
- Run tasks: Task Runners can execute any task template that exists within the project.
- Read-only for other resources: While they can run tasks, they only have read-only access to other resources such as inventory, variables, repositories, etc.
- Typical use case: Developers or QA engineers who need to trigger and monitor tasks but do not need the ability to modify project settings or manage team membership.
Guest
- Read-only access: Guests have read-only access to all project resources (e.g., viewing logs, inventories, dashboards).
- No write permissions: They cannot modify settings, run tasks, or change roles.
- Typical use case: Stakeholders or other collaborators who only need to view project status and details without making changes.
Managing team members
- Inviting new members: Owners and Managers can invite new users to join the team and assign them an initial role.
- Changing roles: Owners can always change the roles of any team member. Managers can change the roles of Task Runners and Guests, but not other Managers or Owners.
- Removing members: Owners and Managers can remove team members with lower roles.
  - An Owner can remove anyone (including other Owners), but cannot remove themselves if they are the sole Owner.
  - A Manager can remove Task Runners and Guests, but not other Managers or Owners.
Best practices
- Maintain redundancy: Assign the Owner role to at least two people to ensure continuous access and prevent a single point of failure.
- Follow principle of least privilege:
- Give team members the minimum role necessary for their tasks.
- Use Task Runner or Guest roles for those who only need limited permissions.
- Review membership regularly:
- As team structures change, re‐evaluate roles.
- Revoke access or downgrade roles for users who no longer need high‐level privileges.
- Use managers for day-to-day administration:
- Reserve the Owner role for a smaller group with ultimate authority.
- Delegate routine project management tasks to Managers to reduce the risk of accidental major changes or project deletions.
Frequently asked questions
1. Can an Owner remove another Owner?
Yes, an Owner can remove or change the role of any other Owner, unless they are the only remaining Owner in the project.
2. Who can delete the project?
Only Owners can delete a project.
3. Can Managers add or remove other Managers?
No. Managers can only add or remove users with Task Runner or Guest roles. To manage Owners or other Managers, you must be an Owner.
4. What happens if I remove all Owners by accident?
Forge prevents the removal of an Owner if it would leave the project with no Owners at all. There must be at least one Owner at all times.
5. Can Guests run tasks?
No. Guests only have read‐only access and cannot trigger or manage tasks.
Compliance Management
Forge provides comprehensive compliance management capabilities, supporting multiple compliance frameworks and automated remediation.
Overview
Forge's compliance features help you:
- Import Compliance Frameworks - STIG, CIS, NIST, PCI-DSS
- Track Findings - Manage compliance findings and their status
- Automate Remediation - Use Policy Packs and task templates
- Monitor Coverage - Track automated vs manual remediation
- Export Reports - Generate CKL files and compliance reports
- Scan Systems - Use OpenSCAP for automated compliance scanning
Key Features
STIG Compliance
- Import STIG checklists (CKL files)
- Interactive STIG Viewer for finding management
- Policy Packs for automated remediation
- Manual task assignment for bulk operations
- CKL export for certification
OpenSCAP Compliance
- Upload SCAP DataStream files
- Create compliance policies
- Schedule automated scans
- View detailed compliance reports
- Download ARF files for analysis
Compliance Frameworks
- Support for multiple frameworks per project
- Framework-specific workflows
- Compliance dashboard
- Historical tracking
Quick Start
1. Import a STIG Checklist
- Navigate to Compliance > Frameworks
- Click Import Framework
- Upload your .ckl file
- Select a Policy Pack (optional)
- Review imported findings
2. Install a Policy Pack
- Navigate to Compliance > Policy Pack Library
- Browse available packs
- Click Install Pack
- Remediation tasks are automatically linked
3. Assign Remediation Templates
- Navigate to Compliance > Remediation Coverage
- Filter to show "Manual" findings
- Click Assign Template
- Select a remediation template
- Review and execute assignment
4. Run Remediation Tasks
- Navigate to Task Templates
- Find remediation templates
- Click Run to execute
- Monitor task progress
- Update finding status in STIG Viewer
Workflow Examples
STIG Hardening Workflow
- Import STIG - Upload CKL file for your system
- Install Policy Pack - Get automated remediation tasks
- Review Findings - Use STIG Viewer to assess status
- Assign Templates - Link manual findings to tasks
- Run Remediation - Execute automated fixes
- Verify Compliance - Re-scan or manually verify
- Export CKL - Generate updated checklist for certification
OpenSCAP Scanning Workflow
- Upload SCAP Content - Add DataStream files
- Create Policy - Define scan policy and profile
- Assign Targets - Select inventories or hosts
- Schedule Scans - Set up periodic scanning
- Review Reports - Analyze compliance results
- Remediate Issues - Create tasks for findings
- Track Progress - Monitor compliance over time
Best Practices
Organization
- Use separate projects for different compliance frameworks
- Tag findings with environment (dev, staging, prod)
- Document exceptions and waivers
- Maintain audit trails
Automation
- Install Policy Packs for common remediations
- Use bulk assignment for manual findings
- Schedule regular compliance scans
- Automate remediation where possible
Reporting
- Export CKL files regularly for certification
- Maintain compliance dashboards
- Track remediation coverage percentage
- Document manual review processes
Related Documentation
- STIG Compliance - Detailed STIG workflow
- OpenSCAP Compliance - SCAP-based scanning
- Policy Packs - Automated remediation
- Remediation Coverage - Tracking automation
- Golden Images - STIG-hardened images
STIG Compliance
Forge provides comprehensive DISA STIG (Security Technical Implementation Guide) compliance management with automated remediation capabilities.
Overview
STIG compliance in Forge includes:
- STIG Import - Import CKL (Checklist) files
- STIG Viewer - Interactive finding management
- Policy Packs - Automated remediation playbooks
- Remediation Coverage - Track automation percentage
- Manual Task Assignment - Bulk assign templates to findings
- CKL Export - Generate updated checklists for certification
Importing STIG Checklists
Supported Formats
- CKL Files - DISA STIG Checklist format
- XCCDF Files - SCAP XCCDF format (converted automatically)
Import Process
- Navigate to Compliance > Frameworks
- Click Import Framework
- Select STIG as framework type
- Upload your .ckl file
- (Optional) Select a Policy Pack to install automatically
- Review imported findings
- Click Import
Import Options
Policy Pack Selection:
- Choose a Policy Pack during import to automatically link remediation tasks
- Policy Packs contain Ansible playbooks for automated fixes
- Available packs: RHEL 8/9, Ubuntu 22.04, Windows Server 2022
Multiple Imports:
- Import the same STIG multiple times to track versions
- Each import gets a unique version identifier
- Compare findings across versions
STIG Viewer
The STIG Viewer provides an interactive interface for managing compliance findings.
Finding Status
Each finding can have one of these statuses:
- NotAFinding - System is compliant
- Open - Finding requires remediation
- NotApplicable - Finding doesn't apply to this system
- NotReviewed - Finding hasn't been reviewed yet
Finding Details
View detailed information for each finding:
- STIG ID - Unique identifier (e.g., V-222401)
- Severity - CAT I, CAT II, or CAT III
- Title - Finding description
- Discussion - Detailed explanation
- Check - Verification procedure
- Fix - Remediation steps
- Status - Current compliance status
- Comments - Your notes
- Screenshots - Attach evidence
Filtering and Search
- Filter by status (Open, NotAFinding, etc.)
- Filter by severity (CAT I, II, III)
- Search by STIG ID or title
- Filter by remediation coverage (Automated, Manual)
- Filter by assigned template
Bulk Operations
- Bulk update finding status
- Bulk assign remediation templates
- Bulk export findings
- Bulk add comments
Policy Packs
Policy Packs are curated collections of Ansible playbooks that automate STIG remediation.
Installing Policy Packs
- Navigate to Compliance > Policy Pack Library
- Browse available packs by:
  - Operating System (RHEL 8/9, Ubuntu 22.04, Windows)
  - Framework (STIG, CIS, NIST)
  - Use Case (Web Server, Database, Container)
- Click Install Pack
- Remediation tasks are automatically created and linked to STIG IDs
Available Policy Packs
Operating System Packs:
- RHEL 8 STIG Baseline
- RHEL 9 STIG Baseline
- Ubuntu 22.04 STIG Baseline
- Windows Server 2022 STIG Baseline
Application Packs:
- Apache STIG
- Nginx STIG
- PostgreSQL STIG
- MySQL STIG
Use Case Packs:
- Web Server Baseline
- Database Server
- Container Platform
Policy Pack Contents
Each pack includes:
- Remediation Tasks - Ansible playbooks for automated fixes
- Manual Review Tasks - Items requiring human verification
- Documentation - STIG mappings and instructions
- Prerequisites - Required packages and configurations
Remediation Coverage
Track how many findings have automated remediation available.
Coverage Metrics
- Total Findings - All findings in the framework
- Automated Tasks - Findings with linked remediation templates
- Manual Review - Findings requiring manual intervention
- Coverage Percentage - % of findings with automation
Improving Coverage
- Install Policy Packs - Get pre-built remediation tasks
- Create Custom Templates - Build your own remediation playbooks
- Manual Assignment - Bulk assign templates to manual findings
- Link Existing Tasks - Connect existing templates to STIG IDs
Manual Task Assignment
Bulk assign remediation templates to manual findings for automation.
Assignment Process
- Navigate to Compliance > Remediation Coverage
- Filter to show only "Manual" findings
- Click Assign Template
- Select a remediation template
- Preview which findings will be assigned
- Click Assign to execute
Benefits
- Automation - Convert manual tasks to automated ones instantly
- Consistency - Apply same remediation approach across findings
- Efficiency - Bulk operations instead of individual assignments
- Coverage - Improve overall compliance automation percentage
Running Remediation
Automated Remediation
- Navigate to Task Templates
- Find remediation templates (filter by "Compliance")
- Review template details and STIG mappings
- Click Run to execute
- Monitor task progress and logs
- Update finding status in STIG Viewer
Manual Remediation
- Review finding details in STIG Viewer
- Follow "Fix" instructions manually
- Verify compliance using "Check" procedure
- Update finding status to "NotAFinding"
- Add comments documenting the fix
- Attach screenshots as evidence
CKL Export
Generate updated CKL files for certification and reporting.
Export Process
- Navigate to STIG Viewer
- Review and update finding statuses
- Click Export CKL
- Fill in system details:
  - System Name
  - IP Address
  - MAC Address (optional)
  - Host Name
  - Comments
- Click Export
- Download the updated .ckl file
Export Formats
- CKL - Standard DISA STIG Checklist format
- CSV - For spreadsheet analysis
- JSON - For programmatic processing
Best Practices
Organization
- Use separate projects for different STIG versions
- Tag findings with environment (dev, staging, prod)
- Document exceptions and waivers in comments
- Maintain audit trails with status changes
Automation
- Install Policy Packs early in the process
- Use bulk assignment for manual findings
- Test remediation templates in non-production first
- Document custom remediation procedures
Reporting
- Export CKL files regularly for certification
- Maintain compliance dashboards
- Track remediation coverage percentage
- Document manual review processes
Related Documentation
- STIG Viewer - Detailed viewer guide
- STIG Import - Import procedures
- Policy Packs - Automated remediation
- Remediation Coverage - Tracking automation
- Manual Task Assignment - Bulk operations
Golden Images
Forge integrates HashiCorp Packer to build "Golden Images" - pre-configured, hardened VM/AMI images that can be deployed across any cloud provider.
Overview
Golden Images are pre-built virtual machine images that include:
- Operating system installation
- Security hardening (STIG compliance)
- Application software
- Configuration management
- Testing and validation
Key Features
- Visual Builder - Step-by-step wizard for image creation
- HCL Editor - Advanced Packer template editing
- Git Integration - Import templates from repositories
- Image Catalog - Centralized registry of built images
- STIG Hardening - Automated DISA STIG compliance
- Multi-Cloud - AWS, Azure, GCP, VMware, QEMU support
- Binary Management - Auto-download Packer and QEMU
Quick Start
1. Setup System Binaries
Navigate to Admin Settings > System Binaries
Install Packer:
- Click "Install" on the Packer card
- Select version (e.g., 1.11.2)
- Choose installation path
- Click "Install"
QEMU (for local builds):
- macOS: brew install qemu
- Linux: apt-get install qemu-system-x86 or yum install qemu-kvm
- Update the QEMU path in admin settings
2. Add Cloud Provider Credentials
Navigate to Project > Key Store
AWS Example:
- Click "New Key"
- Name: "AWS Packer Credentials"
- Type: "aws"
- Enter: Access Key ID, Secret Access Key, Region
Azure Example:
- Type: "azure"
- Enter: Subscription ID, Client ID, Client Secret, Tenant ID
GCP Example:
- Type: "gcp"
- Enter: Project ID, Service Account JSON
3. Build Your First Golden Image
Navigate to Project > Golden Images > Build New Image
Using Visual Builder:
- Choose Cloud Provider: Select AWS
- Select Builder: amazon-ebs
- Configure:
  - Template Name: "ubuntu-22-04-stig"
  - Region: us-east-1
  - Instance Type: t2.micro
  - Source AMI: ami-0c55b159cbfafe1f0
- Provisioning:
  - ☑ Apply DISA STIG Hardening
  - ☑ Run Ansible Playbook (optional)
- Generate & Save
The template is created! Click Build to create the image.
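The Visual Builder generates a Packer HCL template from your selections. As an illustrative sketch only (the exact output may differ, and the `ssh_username`, plugin version, and AMI naming pattern shown here are assumptions), the template for the configuration above would look roughly like this:

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

source "amazon-ebs" "ubuntu-22-04-stig" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  source_ami    = "ami-0c55b159cbfafe1f0"
  ssh_username  = "ubuntu"                       # assumed default user for Ubuntu AMIs
  ami_name      = "ubuntu-22-04-stig-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.ubuntu-22-04-stig"]

  # Added when "Apply DISA STIG Hardening" is checked; the script path is hypothetical.
  provisioner "shell" {
    script = "scripts/stig-harden.sh"
  }
}
```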
Documentation
- Overview - Detailed introduction
- Packer Templates - Template management
- Visual Builder - Guided image creation
- HCL Editor - Advanced editing
- Image Catalog - Browse built images
- STIG Hardening - Automated compliance
- Cloud Providers - Multi-cloud support
Related Documentation
- Packer Task Templates - Packer execution
- STIG Compliance - Compliance management
- Bare Metal Deployment - Deploy to bare metal
Golden Images Overview
Forge integrates HashiCorp Packer and QEMU to build "Golden Images" - pre-configured, hardened VM/AMI images that can be deployed across any cloud provider.
What are Golden Images?
Golden Images are pre-built virtual machine images that include:
- Operating system installation
- Security hardening (STIG compliance)
- Application software
- Configuration management
- Testing and validation
These images can be deployed instantly across AWS, Azure, GCP, VMware, or used locally with QEMU.
Key Benefits
- Consistency - Same image across all environments
- Speed - Deploy in minutes instead of hours
- Security - Pre-hardened with STIG compliance
- Compliance - Built-in compliance tracking
- Versioning - Track image versions and changes
- Multi-Cloud - Same image works across providers
Features
Visual Builder
Step-by-step wizard for creating Packer templates without writing HCL code.
HCL Editor
Advanced editor with full Packer HCL syntax support, validation, and Git integration.
Image Catalog
Centralized registry of all built images with filtering, search, and metadata.
STIG Hardening
Automated DISA STIG compliance built into templates with 16 pre-built templates available.
Multi-Cloud Support
Build images for:
- AWS (AMIs)
- Azure (Managed Images)
- GCP (Compute Images)
- VMware vSphere (Templates)
- QEMU (Local testing)
Workflow
- Create Template - Use Visual Builder or HCL Editor
- Configure Build - Set cloud provider, region, instance type
- Add Provisioning - STIG hardening, Ansible playbooks, scripts
- Build Image - Execute Packer build
- View in Catalog - Browse and manage built images
- Deploy - Use image IDs in Terraform or cloud console
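The final deploy step above can be sketched in Terraform. This is a minimal example assuming an AWS build and the hypothetical AMI naming pattern ubuntu-22-04-stig-*; adapt the filter to however your templates name their images:

```hcl
# Look up the most recent golden image owned by this account,
# then launch an instance from it.
data "aws_ami" "golden" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["ubuntu-22-04-stig-*"] # hypothetical naming pattern
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.golden.id
  instance_type = "t2.micro"
}
```

Because the data source always resolves to the newest matching image, rebuilding the golden image and re-applying Terraform rolls new instances forward without editing the configuration.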
Pre-Built Templates
Forge includes 16 production-ready, STIG-hardened Packer templates:
AWS Templates
- RHEL 9 STIG-hardened AMI
- RHEL 8 STIG-hardened AMI
- Ubuntu 22.04 STIG-hardened AMI
- Windows Server 2022 STIG-hardened AMI
Azure Templates
- RHEL 9 STIG-hardened managed image
- RHEL 8 STIG-hardened managed image
- Ubuntu 22.04 STIG-hardened managed image
- Windows Server 2022 STIG-hardened managed image
GCP Templates
- RHEL 9 STIG-hardened GCP image
- RHEL 8 STIG-hardened GCP image
- Ubuntu 22.04 STIG-hardened GCP image
- Windows Server 2022 STIG-hardened GCP image
VMware Templates
- RHEL 9 STIG-hardened vSphere template
- RHEL 8 STIG-hardened vSphere template
- Ubuntu 22.04 STIG-hardened vSphere template
- Windows Server 2022 STIG-hardened vSphere template
All templates are 100% self-contained with inline STIG hardening provisioners.
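"Inline" here means the hardening commands are embedded directly in the template rather than referenced as external script files. A hedged sketch of the pattern (the source name and the specific commands are illustrative, not taken from the shipped templates):

```hcl
build {
  sources = ["source.amazon-ebs.rhel9-stig"] # hypothetical source name

  # Inline provisioner: the commands live inside the template itself,
  # so the template has no external file dependencies.
  provisioner "shell" {
    inline = [
      "sudo dnf install -y aide",                          # example control: file integrity tooling
      "sudo sed -i 's|^#Banner.*|Banner /etc/issue|' /etc/ssh/sshd_config",
    ]
  }
}
```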
Next Steps
- Packer Templates - Manage templates
- Visual Builder - Create templates visually
- HCL Editor - Advanced editing
- Image Catalog - Browse images
- STIG Hardening - Compliance features
Troubleshooting
1. Runner prints error 404
How to fix
Getting 401 error code from Runner
2. Gathering Facts issue for localhost
This issue can occur when Forge is installed via Snap or Docker.
```
4:10:16 PM
TASK [Gathering Facts] *********************************************************
4:10:17 PM
fatal: [localhost]: FAILED! => changed=false
```
Why this happens
Ansible tries to gather facts about localhost, but in a Snap or Docker installation Ansible runs inside a restricted, isolated container that doesn't allow this.
For more information about how Ansible treats localhost, read the article Implicit 'localhost'.
How to fix this
There are two ways:
- Disable facts gathering:

```yaml
- hosts: localhost
  gather_facts: false
  roles:
    - ...
```

- Explicitly set the connection type to ssh in your inventory:

```ini
[localhost]
127.0.0.1 ansible_connection=ssh ansible_ssh_user=your_localhost_user
```
4. panic: pq: SSL is not enabled on the server
This means that SSL is not enabled on your PostgreSQL server.
How to fix this
Add option sslmode=disable to the configuration file:
"postgres": {
"host": "localhost",
"user": "pastgres",
"pass": "pwd",
"name": "semaphore",
"options": {
"sslmode": "disable"
}
},
5. fatal: bad numeric config value '0' for 'GIT_TERMINAL_PROMPT': invalid unit
This means that you are trying to access a repository over HTTPS that requires authentication.
How to fix this
- Go to Key Store screen.
- Create a new key of the Login with password type.
- Specify your login for GitHub/Bitbucket/etc.
- Specify the password. You can't use your GitHub/Bitbucket account password here; use a Personal Access Token (PAT) instead. Read more here.
- After creating the key, go to the Repositories screen, find your repository and specify the key.
6. unable to read LDAP response packet: unexpected EOF
Most likely, you are connecting to the LDAP server over an insecure connection while the server expects a secure (TLS) connection.
How to fix this
Enable TLS in your config.json file:
```json
...
"ldap_needtls": true
...
```
7. LDAP Result Code 49 "Invalid Credentials"
You have the wrong password or binddn.
How to fix this
Use the ldapwhoami tool to check whether your binddn works:

```shell
ldapwhoami \
  -H ldap://ldap.com:389 \
  -D "CN=/your/ldap_binddn/value/in/config/file" \
  -x \
  -W
```
It will ask interactively for the password and should return code 0 and echo out the DN as specified.
8. LDAP Result Code 32 "No Such Object"
Coming soon.