Qrackmin container system, deployed in Rancher through the ThereminQ Helm definitions
Qrackmin is a minimalistic container system for Qrack, designed for both OpenCL and CUDA environments. It provides a set of Docker images and scripts for running quantum computing simulations and benchmarks. ThereminQ orchestrates a suite of best-of-class tools designed to control, extend, and visualize data emanating from quantum circuits, using Qrack, ELK, Tipsy, Jupyter, CUDA, and OpenCL accelerators.
Here's an overview of the main directories in this repository:
- `benchmarks/`: Contains benchmark files for testing quantum simulations.
- `deploy-scripts/`: Includes scripts for deploying and managing the containerized environment, especially for VCL clusters.
- `dockerfiles/`: A collection of Dockerfiles to build various container images tailored for different environments and purposes (e.g., AWS, CUDA, OpenCL, Python).
- `run-scripts/`: A variety of scripts to execute different quantum simulation tasks and benchmarks within the containers.
The dockerfiles/ directory contains a variety of Dockerfiles to build container images for different environments and purposes. Here's a breakdown of the most important ones:
| Dockerfile | Purpose |
|---|---|
| `Dockerfile` | The main Dockerfile for the `:latest` image, including CUDA and OpenCL support. |
| `Dockerfile-1804` | Ubuntu 18.04. |
| `Dockerfile-2004` | Ubuntu 20.04. |
| `Dockerfile-2204` | Ubuntu 22.04. |
| `Dockerfile-2204amd` | Ubuntu 22.04 with AMD support. |
| `Dockerfile-arm` | ARM architecture. |
| `Dockerfile-aws` | For `Qrackmin:AWS`, providing a binary runtime for Qrack as a Service on AWS. |
| `Dockerfile-braket` | For `Qrackmin:BRAKET`, providing a Python runtime for PyQrack as a \|BraKET> container service. |
| `Dockerfile-cluster` | Cluster support. |
| `Dockerfile-cluster-pocl` | Cluster support with POCL. |
| `Dockerfile-cuda` | CUDA support. |
| `Dockerfile-mitiq` | Mitiq support. |
| `Dockerfile-pocl` | For `Qrackmin:POCL`, adding the generic OpenCL ICD for CPU-only support. |
| `Dockerfile-pyqrack` | A Python runtime environment for running PyQrack tests. |
| `Dockerfile-qbdd` | A Python runtime environment for running QBDD benchmarks. |
| `Dockerfile-qbdd-sycamore-sleep` | QBDD Sycamore sleep benchmarks. |
| `Dockerfile-qimcifa` | Qimcifa. |
| `Dockerfile-sycamore-elidded` | For `Qrackmin:elidded`, for elided and patched quadrant time tests. |
| `Dockerfile-vcl` | For `Qrackmin:VCL`, containing VCL binaries for VCL cluster support. |
| `Dockerfile-vcl-controller` | VCL controller. |
| `Dockerfile-vcl-pocl` | VCL with POCL support. |
| `Dockerfile-vcl-pocl-vpn` | VCL with POCL and VPN support. |
The `:latest` container image is meant to be used on a single node with nvidia-docker2 and Linux support.

`docker run --gpus all --device=/dev/kfd --device=/dev/dri:/dev/dri --privileged -d twobombs/qrackmin[:tag] [--memory 24G --memory-swap 250G]`

- To save measured results outside the container, use the volume flag: `-v /var/log/qrack:/var/log/qrack`
- To get a shell inside the container: `docker exec -ti [containerID] bash`
- The ThereminQ repo with runfiles is checked out at `/root`
- Windows users should install WSL2, Docker Desktop, `docker.io`, and `nvidia-docker2` to run this (CUDA only).
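Putting the optional flags above together, a concrete single-node invocation could look like the sketch below. This is a dry run that only prints the commands; the container name and the memory limits are illustrative, while the image tag, device flags, and log path are taken from this README.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run helper: print each command instead of executing it.
# Remove the 'echo' to actually run against a Docker host.
run() { echo "+ $*"; }

# Single-node :latest run with results persisted to the host log path.
run docker run --gpus all --device=/dev/kfd --device=/dev/dri:/dev/dri \
  --privileged -d --name qrackmin \
  --memory 24G --memory-swap 250G \
  -v /var/log/qrack:/var/log/qrack \
  twobombs/qrackmin:latest

# Shell into the running container (name "qrackmin" is illustrative).
run docker exec -ti qrackmin bash
```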
- `Qrackmin:AWS` (`:AWS`): On-demand AWS template proposals for x86 and ARM; CUDA-, OpenCL-, and CPU-powered. Boilerplate binary runtime code for Qrack as a Service, with QFT RND benchmark output. (`Dockerfile-aws`)
- `Qrackmin:BRAKET` (`:BRAKET`): Boilerplate Python runtime code for PyQrack as a \|BraKET> container service. (`Dockerfile-braket`)
- `Qrackmin:pyqrack`: A Python runtime environment to run tests for PyQrack.
- `Qrackmin:qbdd`: A Python runtime environment to run benchmarks for QBDD.
The `:elidded` image is for elided and patched quadrant time tests:

`docker run --gpus all --device=/dev/dri:/dev/dri -d twobombs/qrackmin:elidded`

The `:pocl` container image adds the generic OpenCL ICD and is to be used with high-memory, high-CPU-count hosts for CPU-only support:

- Simulate performance and measured results on CPU.
- For validation before GPU cluster deployment.
- `Dockerfile-pocl`
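As a sketch of a CPU-only validation run with the `:pocl` image, the command could look like the dry run below. The tag and log path come from this README; the memory limits are illustrative values for a high-memory host, not requirements.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run helper: print the command for review instead of executing it.
run() { echo "+ $*"; }

# CPU-only :pocl image; no GPU flags are needed here.
run docker run -d \
  --memory 24G --memory-swap 250G \
  -v /var/log/qrack:/var/log/qrack \
  twobombs/qrackmin:pocl
```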
The `:vcl` tag contains VCL binaries, copyrighted by Amnon Barak, to run VCL as a backend and host. See the VCL Cluster Setup section for more details. (`Dockerfile-vcl`)
The `run-scripts/` directory contains various scripts to execute simulations and benchmarks. You can run these scripts from within the appropriate Docker container. For example, to run a QBDD benchmark:

1. Start the `qbdd` container.
2. Get a shell inside the container.
3. Navigate to the `/root/thereminq/run-scripts` directory.
4. Execute the desired script, e.g., `./run-qbdd`.
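Assuming the `:qbdd` tag and the `/root/thereminq/run-scripts` path described in this README, those steps might be scripted as the dry run below (the container name is illustrative).

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run helper: print each step instead of executing it, so the flow
# can be checked without a Docker host. Drop the 'echo' to run for real.
run() { echo "+ $*"; }

NAME="qrackmin-qbdd"   # illustrative container name

# 1. Start the qbdd container.
run docker run -d --name "$NAME" twobombs/qrackmin:qbdd

# 2-4. Shell in, change to the run-scripts directory, run the benchmark.
run docker exec -ti "$NAME" bash -c 'cd /root/thereminq/run-scripts && ./run-qbdd'
```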
The following table summarizes the scripts in `run-scripts/`:
| Script | Description |
|---|---|
| `run` | A general-purpose script to run simulations. |
| `run-arm` | Runs simulations on ARM architecture. |
| `run-aws` | Runs simulations on AWS. |
| `run-cluster` | Runs simulations on a cluster. |
| `run-fqa-dask` | Runs FQA Dask simulations. |
| `run-python` | Runs Python-based simulations. |
| `run-qbdd` | Runs QBDD benchmarks. |
| `run-qbdd-sycamore-sleep` | Runs QBDD Sycamore sleep benchmarks. |
| `run-rcs-nn` | Runs RCS NN benchmarks. |
| `run-sycamore-patch-quadrant` | Runs Sycamore patch quadrant benchmarks. |
| `run-vcl` | Runs VCL. |
| `run-vcl-controller` | Runs the VCL controller. |
| `run-vcl-vpn` | Runs VCL with VPN. |
Please refer to the individual scripts for more details on their usage.
Qrackmin can be deployed as a VCL cluster for distributed computing.
Create the required log directories on the host:

`sudo mkdir -p /var/log/vcl/etc/vcl/ /var/log/vcl/etc/init.d /var/log/vcl/usr/bin /var/log/vcl/etc/rc0.d /var/log/vcl/etc/rc1.d /var/log/vcl/etc/rc2.d /var/log/vcl/etc/rc3.d /var/log/vcl/etc/rc4.d /var/log/vcl/etc/rc5.d /var/log/vcl/etc/rc6.d`

Then run the bash script in `run-scripts/` in this repository called `./1-run-nodes`. You will be asked two questions:

- The number of virtual nodes you want to create.
- The NVIDIA devices you want to expose (often 'all' will suffice; otherwise, use the device number).
Other recognised OpenCL device types, such as an Intel IGP, will also be taken into the cluster.
When you've deployed enough backend containers to your liking, you can start `./2-run-host`:

- The nodes' IPs will be scraped.
- The host container will be started and will initialize.
- You'll drop into the host container's bash.
- Then run workloads through `./vcl-1.25/vclrun [command]`.
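The whole cluster flow can be sketched as a dry run, under these assumptions: the script names, directory layout, and `vclrun` path are taken from this README, while the example workload passed to `vclrun` is hypothetical.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run helper: echo each step instead of executing it.
run() { echo "+ $*"; }

# Host log directories (abbreviated; see the full mkdir command above).
run sudo mkdir -p /var/log/vcl/etc/vcl/

# Deploy backend node containers (answers: node count, NVIDIA devices).
run ./1-run-nodes

# Scrape node IPs, start the host container, drop into its bash.
run ./2-run-host

# Inside the host container: run a workload ("my-workload" is hypothetical).
run ./vcl-1.25/vclrun my-workload
```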
A full VDI host experience is available in `ThereminQ:vcl-controller`.

For a multi-host setup, please look at Docker Swarm and ZeroTier.
- Dan Strano, for creating Qrack and Qimcifa.
- Some rights are reserved regarding code and functionality for the Amazon AWS \|BraKET> container images; they are in the git repo checkouts for the `:aws` and `:braket` container images, and the `aws tools` for Amazon AWS as well.
This project is licensed under the GPL-3.0+ License. See the LICENSE file for details.
