Gpugrid enables users to easily run containerized AI workloads on a decentralized GPU network, where anyone can get paid by connecting compute nodes to the network and running container jobs. Users can run jobs such as Stable Diffusion XL and cutting-edge open-source LLMs, with execution available on-chain, via the CLI, or through Gpugrid AI Studio on the web.
Visit the Gpugrid documentation site for a more comprehensive getting started overview, including the Quick Start Guide.
Jobs (containers) can be run on Gpugrid using the CLI, which can be installed directly or via the Go toolchain. After setting up the necessary prerequisites, the CLI lets users run jobs as follows:
grid run cowsay:v0.0.4 -i Message="moo"
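As a sketch, the command above can be wrapped in a small script so the module name and inputs are easy to swap out. Only the `grid run` invocation and the `-i Key=Value` input syntax are taken from the example in this README; the variable names and the echo-before-execute pattern are illustrative assumptions.

```shell
#!/usr/bin/env sh
# Hedged sketch: parameterize the job run shown above.
# "grid run" and "-i Message=..." come from the README example;
# MODULE/MESSAGE are just illustrative shell variables.
set -eu

MODULE="cowsay:v0.0.4"
MESSAGE="moo"

# Print the command that would be run; drop the leading 'echo'
# to actually submit the job on a machine with the CLI installed.
echo grid run "$MODULE" -i Message="$MESSAGE"
```

Quoting the expanded variables keeps inputs with spaces (e.g. `Message="hello world"`) intact as a single CLI argument.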
The current list of modules can be found in the following repositories:
Containerized job modules can be built and added to the available module list; see the Building Jobs documentation for details. If you'd like to contribute, please open a pull request on this repository to add your link to the module list.
As a distributed network, Gpugrid also lets you run a node and contribute GPU and compute power. See the Running Nodes documentation for detailed setup instructions and an overview of the process.
The Gpugrid team holds the copyright on all Gpugrid code. By submitting contributions to the Gpugrid code, you irrevocably assign all right, title, and interest, including copyright, in such contributions to the Gpugrid team, which may then use the code for any purpose it desires.
Gpugrid is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.