# ansible-roles-containers
Base role shared by all the Podman-based container deployments.
## Deployment
Pods and containers have their respective service files prefixed accordingly: `pod-` and `container-`.

Sometimes you need to manually stop the running containers to get a clean run when re-deploying. Services must be stopped as the respective user, or by some other means of acquiring the correct user scope for systemd.
All containers in a pod are controlled through the pod service:

```shell
systemctl --user stop pod-<service-name>.service
```

Standalone containers can be controlled directly through the container service:

```shell
systemctl --user stop container-<service-name>.service
```
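One way to acquire the correct user scope from a root shell is sketched below; the user name `appuser` and the service name `pod-uptime.service` are hypothetical placeholders, not names from this repository:

```shell
# Stop the pod service as the deployment user ('appuser' is a
# placeholder). machinectl opens a clean login session for that user,
# so systemctl --user talks to their systemd instance:
sudo machinectl shell appuser@ /usr/bin/systemctl --user stop pod-uptime.service

# Alternatively, point systemctl at the user's runtime directory:
sudo -u appuser XDG_RUNTIME_DIR=/run/user/"$(id -u appuser)" \
  systemctl --user stop pod-uptime.service
```

The `XDG_RUNTIME_DIR` variant works on hosts without `machinectl`, as long as the target user's systemd instance is running (e.g. lingering is enabled).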
Deployments can comprise combinations of roles:

```shell
ansible-playbook -i hosts site.yml --tags=firewalld,traefik,portainer,uptime --limit=somehost
```
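The `site.yml` wiring is assumed to look roughly like the following config sketch; it is hypothetical, not the actual playbook, and exists only to show how per-role tags make the `--tags` composition above possible:

```yaml
# site.yml (sketch): each role carries a tag so deployments can be
# composed with --tags on the command line.
- hosts: all
  roles:
    - { role: firewalld, tags: ['firewalld'] }
    - { role: traefik,   tags: ['traefik'] }
    - { role: portainer, tags: ['portainer'] }
    - { role: uptime,    tags: ['uptime'] }
```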
## Removal of a deployment
Podman-based deployments will leave behind their service files in the respective user's home directory under `~/.config/systemd/user`.

```shell
ansible-playbook -i hosts site.yml --tags=uptime --extra-vars "container_state=absent" --limit=somehost
```
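If you want to inspect or clean up those leftover unit files by hand, something like the following sketch works; the user name `appuser` is a hypothetical placeholder:

```shell
# List any service files the Podman deployments left behind for the
# (hypothetical) deployment user 'appuser':
ls /home/appuser/.config/systemd/user/pod-*.service \
   /home/appuser/.config/systemd/user/container-*.service 2>/dev/null

# After deleting stale unit files, reload that user's systemd so it
# forgets the removed units:
sudo -u appuser XDG_RUNTIME_DIR=/run/user/"$(id -u appuser)" \
  systemctl --user daemon-reload
```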
Reversing firewalld rules without statically defining a removal dict:

```shell
ansible-playbook -i hosts site.yml --tags=firewalld,traefik --extra-vars "firewall_action=remove" --limit=somehost
```
Combined removal:

```shell
ansible-playbook -i hosts site.yml --tags=firewalld,traefik,portainer,uptime --extra-vars "container_state=absent firewall_action=remove" --limit=somehost
```