Without an efficient way to squeeze additional computing power from existing infrastructure, organizations are often forced to purchase more hardware or delay projects. This can mean longer wait times for results and, potentially, lost ground to competitors. The problem is compounded by the rise of AI workloads, which place heavy demands on GPU compute.
ClearML has come up with what it thinks is the perfect solution to this problem – fractional GPU capability for open source users, making it possible to “split” a single GPU so it can run multiple AI tasks simultaneously.
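ClearML enforces the split at the container level, but the general idea of several workloads coexisting on one card can be sketched with PyTorch's per-process memory cap. This is only an illustrative sketch of the concept, not ClearML's implementation, and the 50/50 split used below is an assumed example.

```python
import torch

def run_capped_task(fraction: float, device_index: int = 0) -> None:
    """Illustrative sketch: cap this process's share of one GPU's memory.

    ClearML's fractional-GPU offering enforces limits differently; this only
    shows the basic idea of two tasks sharing a single physical GPU.
    """
    if not torch.cuda.is_available():
        raise RuntimeError("A CUDA-capable GPU is required for this sketch.")

    # Restrict the CUDA caching allocator so this process can use at most
    # `fraction` of the device's total memory.
    torch.cuda.set_per_process_memory_fraction(fraction, device=device_index)

    # Stand-in for a real training or inference task.
    x = torch.randn(4096, 4096, device=f"cuda:{device_index}")
    y = x @ x
    torch.cuda.synchronize(device_index)
    print(f"Task finished using at most {fraction:.0%} of GPU {device_index}.")

if __name__ == "__main__":
    # Two separate processes could each call this with fraction=0.5,
    # effectively "splitting" the card between two AI tasks.
    run_capped_task(0.5)
```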
This move recalls the early days of computing, when mainframes were shared among individuals and organizations, giving them access to computing power without having to buy their own hardware.