GPUBox Starter is responsible for managing the components of GPUBox on Windows operating systems.
Infrastructure running under the GPUBox software must meet certain hardware and software requirements.
|64-bit operating system||Windows 7, Windows 8, Windows 10, Windows Server 2008, Windows Server 2012|
CUDA driver version 6.0 or higher.
The required CUDA libraries are included in the graphics card driver; download the NVIDIA driver.
For InfiniBand support
InfiniBand is optional and requires additional hardware and software.
InfiniBand is based on RDMA technology, which features very high throughput and very low latency. It allows links of up to 100Gb/s configured as native InfiniBand and/or an Ethernet network.
InfiniBand cards
Visit the Mellanox website to install the required drivers.
From a console window, issue the command ibstat to verify that the hardware and software are installed properly.
GPU Deployment Kit
To read information about a GPU's temperature and fan speed, GPUServer requires the library nvml.dll to extract the measurements. The library's default location:
The GPUBox software does not support 32-bit operating systems.
To make the best use of the GPUBox software, we recommend the following system setups:
|OServer|
|Processor||64-bit CPU with at least 4 cores; depends on the number of Clients.|
|Network||At least a 1Gb/s TCP/IP network. In the GPUBox infrastructure, OServer is the least network-consuming component.|
|GPUServer|
|Processor||64-bit CPU with at least 4 cores. Communication with Clients and copying data between system memory and GPU memory can be CPU-intensive. A TCP/IP network adapter without a TCP Offload Engine (TOE) can consume a high volume of CPU cycles during transmission.|
|Memory||Minimum 16GB, but it is good practice to keep it at twice the total amount of GPU memory. For example, a GPUServer providing four GPUs with 3GB of GPU memory each should have at least 24GB of RAM, which is twice its 12GB of total GPU memory.|
|Network||At least 10Gb/s, a network adapter with an offload engine, or InfiniBand. We recommend InfiniBand communication between GPUServer and Client, as it offers up to 100Gb/s throughput with very low latency and low CPU overhead.|
|Client and GPUBox Starter|
|Processor||64-bit CPU with at least 4 cores (virtual or physical)|
|Network||At least 10Gb/s, a network adapter with an offload engine, or InfiniBand. When Client is installed on a virtual system, network adapters (Ethernet and/or InfiniBand) should be attached via PCI passthrough or SR-IOV technology.|
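The memory guideline in the table above (RAM at least twice the total GPU memory, never below the 16GB minimum) can be sketched as a small calculation. The helper name below is ours, not part of GPUBox:

```python
def recommended_ram_gb(num_gpus: int, gpu_mem_gb: float, floor_gb: float = 16.0) -> float:
    """Rule of thumb from the requirements table: RAM should be at least
    twice the total GPU memory, and never below the 16GB minimum."""
    return max(floor_gb, 2 * num_gpus * gpu_mem_gb)

# Example from the text: four GPUs with 3GB each -> at least 24GB of RAM.
print(recommended_ram_gb(4, 3))  # 24.0
```

With two 3GB GPUs the doubled total (12GB) falls below the floor, so the 16GB minimum wins.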
For the entire installation and configuration process, we advise using full absolute paths for directories and files.
To start, download the installation package.
GPUBox requires access to the Internet and port 53 to be open.
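Whether a required port is reachable can be checked from a script with a simple TCP probe. This is our own sketch, not part of GPUBox; the host name in the comment is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means something accepted on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("your-oserver-host", 53)
```

Note that a TCP probe cannot see UDP-only services, so a False result is a hint, not proof, that the port is blocked.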
Please be aware that the web browser may block the download of the installation files.
Windows, via SmartScreen, may also ask whether to run the application. Click More info to show additional information, and then click the Run anyway button to continue the installation.
The GPUBox installer requires OServer, GPUServer and GPUBox Starter to be stopped if they are already running.
During the installation process, pay attention to the component-selection step.
The installer will install all GPUBox components, i.e. OServer, GPUServer and GPUBox Client; only the InfiniBand component is optional.
In most cases you will leave Base Components selected.
InfiniBand Support is a component responsible for communication over native InfiniBand protocol. OFED for Windows is required to install this component.
If you do not have OFED for Windows installed on your system, do not select InfiniBand Support; otherwise the system will keep notifying you about missing libraries.
Verify that OFED for Windows is installed via Control Panel, or issue the command ibstat. If the command is not found, the required InfiniBand drivers are likely not installed on your system.
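The check described above, whether the OFED tools are on the PATH, can also be scripted. This is our own sketch, not part of the GPUBox tooling:

```python
import shutil

def infiniband_tools_present() -> bool:
    """True if the ibstat utility from OFED is on the PATH; if it is
    missing, the required InfiniBand drivers are likely not installed."""
    return shutil.which("ibstat") is not None

print("OFED tools found" if infiniband_tools_present() else
      "ibstat not found: install OFED for Windows before selecting InfiniBand Support")
```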
After the installation completes, we highly recommend running GPUBox Starter and configuring the GPUBox components.
The Setup Wizard is always available from the Tools option in the main menu.
GPUBox has three main components:
The Wizard will help you configure all components or just some of them:
If you wish to use only the client and connect to an existing GPUBox infrastructure, you can quit the Wizard and follow the instructions in Login to GPUBox Infrastructure.
Be aware that antivirus or firewall software can prevent starting, connecting or configuring all or some parts of the GPUBox software. It is highly recommended to disable the antivirus and firewall for the duration of the installation process.
You have a choice to connect to an already running OServer or to start a new instance on your local computer.
If you only wish to connect to an already running OServer, you can skip this step and go to the Find OServer step.
Click Start OServer to start OServer.
After a few seconds OServer will be ready to receive connections.
In the next step you will connect to a remotely or locally running OServer.
To connect to OServer, you are required to enter a full HTTP or HTTPS address with a port number.
By default, OServer's interface is HTTP only and it is bound to all available IP interfaces. The default port is
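Because the address must be a full HTTP or HTTPS URL with an explicit port, a small validation helper can catch typos before connecting. This helper is our own sketch, and the port in the example is a placeholder, not OServer's default:

```python
from urllib.parse import urlparse

def valid_oserver_address(addr: str) -> bool:
    """Accept only full http/https URLs that carry a host and an explicit port."""
    u = urlparse(addr)
    return u.scheme in ("http", "https") and bool(u.hostname) and u.port is not None

print(valid_oserver_address("http://192.168.1.10:8080"))  # True
print(valid_oserver_address("192.168.1.10"))              # False: no scheme or port
```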
You have two options to enter OServer's address:
OServer's discovery mechanism is based on the UDP broadcast protocol. This type of communication, as well as the multicast protocol, can be disabled; please verify this with your administrator or service provider.
The discovery protocol works only within the local network and the same subnet.
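The discovery mechanism can be illustrated with a minimal UDP request/response exchange. Everything here (the port number, the probe and reply contents) is invented for illustration and is not the real GPUBox protocol; for portability the sketch also sends to localhost instead of the subnet broadcast address a real discovery would use:

```python
import socket
import threading
import time

DISCOVERY_PORT = 35000  # placeholder, not GPUBox's real discovery port

def responder():
    """Stands in for OServer: answers one discovery datagram with its address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", DISCOVERY_PORT))
        data, addr = s.recvfrom(1024)
        if data == b"GPUBOX_DISCOVER":
            s.sendto(b"http://127.0.0.1:8080", addr)

def discover(timeout: float = 2.0) -> str:
    """Stands in for the client: sends a probe and waits for a reply.
    A real discovery would set SO_BROADCAST and target the broadcast address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"GPUBOX_DISCOVER", ("127.0.0.1", DISCOVERY_PORT))
        reply, _ = s.recvfrom(1024)
        return reply.decode()

threading.Thread(target=responder, daemon=True).start()
time.sleep(0.2)  # give the responder time to bind before probing
print(discover())
```

This also shows why discovery stops at subnet boundaries: routers do not forward broadcast datagrams, so only hosts on the same subnet can ever hear the probe.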
You can use any available IP interface to communicate with OServer; however, we highly recommend using the same IP interface as for the GPUServer bindings.
After successfully connecting to OServer, the indicator will turn green.
In the next step, select the IP interface with the best performance possible.
Use loopback only when you are interested in using GPUBox on your local computer.
We recommend using a network faster than 1Gb/s.
If you want to work with a remote desktop, only protocols like VNC are compatible with the CUDA driver.
RDP (Remote Desktop Protocol) does not use the GPU; in such a case GPUServer will display the message:
GBSC-SC-95A Cannot initialize CUDA environment: 100
If all GPUs in your system are visible, you can be sure that GPUServer has initialized successfully.
At this point, the two main components of GPUBox should be fully initialized.
On the very first start of OServer, it creates the security database with a single, already enabled superuser with UserID GPUBox Administrator and password gpubox. The database is created anew each time the path to the security database is changed in the oserver_security_plugin configuration parameters, or when the indicated file is deleted.
An important part of the installation process is copying the DLL library nvcuda.dll into the directory where you have CUDA-enabled software.
For example, if you copy the library into C:\Program Files\Blender Foundation\Blender, Blender will be able to use GPUs from the GPUBox infrastructure.
The copy nvcuda button will copy the nvcuda.dll library to the clipboard and open Windows Explorer; then you simply paste (Ctrl+V) the library into the desired directory.
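The manual copy step above can also be scripted. The sketch below is ours and uses a stand-in file in a temporary directory so it is runnable anywhere; in practice the source would be GPUBox's nvcuda.dll and the destination your application's folder (e.g. the Blender directory mentioned above):

```python
import shutil
import tempfile
from pathlib import Path

def deploy_dll(dll: Path, app_dir: Path) -> Path:
    """Copy a DLL into the target application's directory and return the new path."""
    app_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(dll, app_dir))

# Self-contained demo with a stand-in file instead of the real nvcuda.dll.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "nvcuda.dll"
    src.write_bytes(b"stand-in")
    dest = deploy_dll(src, Path(tmp) / "app")
    print(dest.name)  # nvcuda.dll
```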
The very last step you have to take in order to use the GPUBox infrastructure is to allocate GPU(s), i.e. assign GPUs to your CUDA-enabled program.
For more information visit Allocate and drop GPU.
Red - server is not running
Orange - server is starting or stopping
Green - server is up and running
For client, only red and green colors are valid:
Red - client is logged out
Green - client is logged in
|Link to GPUBox Web Console|
|RESTful OServer's address|
|Status shows whether the entered address is valid and connected to OServer:
connected - the address is valid
not connected - the address is invalid|
Open discovery dialog and find OServer automatically
|Refresh everything except servers' status|
The panel shows whether the user is logged into the GPUBox infrastructure.
|When user is logged in, it shows OServer's address. Click it to open GPUBox Web Console.|
|Shows the user name and has a link to the user's details or a link to the login panel|
It shows the list of currently allocated GPUs. The panel's columns correspond to the command gpubox list.
|Local identification number of the GPU that is used for basic user operations on allocations. ID is generated automatically during the GPU allocation and may indicate the order in which the GPUs were added.|
|Name of the allocated GPU device.|
|PCI address format |
|The IP address of the Client that was used to allocate the GPU. The user can use the command |
|The status of a particular GPU. It can be |
|Timestamp indicating when the GPU was allocated. Format: |
It shows the list of currently available GPUs, ready to be allocated. The panel's columns correspond to the command gpubox free.
|Device ID indicating the type of GPU that can be allocated.|
|Name of the free GPU device.|
|GPU memory expressed in gigabytes.|
|Number of free GPUs of a particular type.|
|When the checkbox is selected, the GPU will be allocated to the user exclusively; otherwise the device is shared with other users.|
The panel shows the current status of the two main components of the GPUBox infrastructure that are running on your local computer.
Status of servers:
running - server is running
starting - server is starting
stopping - server is stopping
not running - server is not running
|Process ID in system.|
|Stop or start the server. The button is disabled when the status is in the orange state.|
|Click to open the server's configuration file in Notepad.|
|Select the checkbox if you want the server to start when the user logs into the system.|
|Number of GPUs to be allocated|
|Allocate GPUs selected in free GPUs panel.|
|Release GPUs selected in allocated GPUs panel.|
There are two panels to show logs, respectively for OServer and GPUServer.
|Reload log from file.|
|Path to log file.|
|If the checkbox is selected, the application will automatically reload the log from file every few seconds.|
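The auto-reload behaviour, re-reading the log file every few seconds and picking up only what is new, can be sketched as a simple polling tail. This is our own illustration, not GPUBox code; the log name and contents are made up:

```python
import tempfile
from pathlib import Path

def poll_new_lines(log: Path, offset: int) -> tuple[list[str], int]:
    """Read lines appended since `offset`; return them and the new offset.
    Calling this every few seconds gives the auto-reload behaviour."""
    with log.open("r") as f:
        f.seek(offset)
        raw = f.readlines()
        return [line.rstrip("\n") for line in raw], f.tell()

# Demo against a throwaway log file.
log = Path(tempfile.mkdtemp()) / "oserver.log"
log.write_text("server started\n")
lines, pos = poll_new_lines(log, 0)
print(lines)  # ['server started']
with log.open("a") as f:
    f.write("client connected\n")
lines, pos = poll_new_lines(log, pos)
print(lines)  # ['client connected']
```

Tracking the byte offset between polls avoids re-reading the whole file each cycle, which matters for long-running server logs.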
The servers are console-based programs and, by default, the console window is hidden. Click
The dialog shows the details of the currently logged-in user. The dialog's details correspond to the command gpubox whoami.
For freshly initialized security database, the default user and password are 'gpubox'.
For more information, refer to Security.
We highly recommend not changing these values unless you know what you are doing!
Paths to servers' configuration files.
|It generates a default OServer configuration file. The current one is backed up in folder |
|Sometimes OServer's database can become corrupted. This button removes the database. It is disabled while OServer is running. The entire database will be rebuilt on the next OServer start; however, all users' allocations will be lost.|
|Select the checkbox to start GPUBox Starter when the user logs in.|
|Set all values to default.|
|Exit - exit application.|
|Wizard... - opens Setup Wizard.|
|Options... - options dialog.|
|Copy nvcuda.dll - copy |
It extends the capability of managing the GPUBox infrastructure via terminal commands:
In some cases OServer or GPUServer may not be closed properly, for example after clicking the
|View help - opens this manual in a web browser. This option requires Internet access.|
|About GPUBox - shows detailed information about GPUBox.|
Closing the GPUBox Starter window minimizes the application to a tray icon.
Right-clicking the icon shows a menu:
|Open GPUBox - restores application window.|
|GPUBox Web Console - opens GPUBox Web Console in web browser.|
|OServer - stops or starts OServer.|
|GPUServer - stops or starts GPUServer.|
|Exit - exits application.|
In order to use GPUs from GPUBox infrastructure you have to:
For information about extended, command line interface visit:
Type the address manually or use the find button to connect to OServer.
Type your username and password in the Login dialog.
When you are logged in successfully, all panels will be refreshed automatically.
Click the link with username to open user details dialog.
|Enter the number of required GPUs.|
|Select the checkbox if you want to use the GPU exclusively; otherwise the device will be shared among other users of the GPUBox infrastructure.|
|Select device from 'Available GPUs'.|
|Click button |
|In less than a second, the new GPU(s) will be displayed in the table of 'Allocated GPUs'.|
|Select single or multiple GPUs from table of 'Allocated GPUs'.|
|Click the drop button |
|In less than a second, the device(s) will return to the pool of available GPUs. The number of free GPUs will increase accordingly.|