Quote Originally Posted by chovy

To be honest, I don't know much about hardware. I just want something like a 32-core server with a fast CPU, 128 GB of RAM, and whatever GPU they have in the $100/month range.

Training is usually more demanding than inference, all else being equal, so training with "whatever GPU" is going to end in tears and frustration. Have you tried your workload on a per-second-billed instance first, before making a longer-term commitment?
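Rough back-of-the-envelope arithmetic on why a short trial is cheap insurance. Only the $100/month figure comes from this thread; the $2/hour on-demand rate below is a made-up placeholder, so plug in whatever your provider actually charges:

```python
# Hypothetical rates -- only the $100/month figure comes from the thread.
MONTHLY_COMMITMENT = 100.00  # $/month, from the original post
ON_DEMAND_RATE = 2.00        # $/hour, assumed per-second-billed GPU instance

def trial_cost(hours, rate=ON_DEMAND_RATE):
    """Cost of a short benchmarking run on a per-second-billed instance."""
    return round(hours * rate, 2)

# A three-hour benchmark run costs a few dollars, versus locking in
# $100/month on hardware that may turn out to be wrong for the workload.
print(trial_cost(3))
```

If the trial shows the GPU is the wrong fit, you've spent pocket change instead of a month's committed bill.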