podplex

powered by distributed training & serverless inference at scale

how it works

  1. select a model: choose one of the available models to fine-tune.
  2. hang tight: we'll use runpod + FSDP to shard your training job across multiple GPUs.
  3. monitor your job: track your training job's progress in real time on the pod status page.
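
the flow above could be scripted as a job submission. everything below — the model list, function name, and payload fields — is a hypothetical sketch for illustration, not podplex's actual API.

```python
import json

# Hypothetical sketch of the three-step flow as a job payload.
# Model names, field names, and defaults are assumptions for
# illustration only -- the real service may differ.

AVAILABLE_MODELS = ["llama-3-8b", "mistral-7b"]  # placeholder model list


def build_finetune_job(model: str, num_gpus: int = 4) -> dict:
    """Step 1: pick a model; steps 2-3 are configured in the payload."""
    if model not in AVAILABLE_MODELS:
        raise ValueError(f"unknown model: {model}")
    return {
        "model": model,
        "strategy": "fsdp",       # shard the model across GPUs (step 2)
        "num_gpus": num_gpus,
        "monitor": "pod-status",  # where progress is reported (step 3)
    }


job = build_finetune_job("llama-3-8b", num_gpus=8)
print(json.dumps(job, indent=2))
```

the `strategy` and `monitor` fields here just mirror steps 2 and 3 of the list; a real client would send this payload to the service and poll the pod status page for progress.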
enter →