podplex
powered by runpod
distributed training & serverless inference at scale
how it works
- select a model: pick the base model you want to fine-tune from the list of available models
- hang tight! we'll use runpod + FSDP to distribute your training job across multiple GPUs (see the sketch after this list)
- monitor your job: track the progress of your training job in real time on the pod status page (a polling sketch follows below)
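
roughly what the distributed side looks like: each GPU process wraps the model in PyTorch FSDP, which shards parameters, gradients, and optimizer state across ranks. a minimal sketch, not the actual podplex training code (the toy model, hyperparameters, and `train.py` filename are placeholders):

```python
# minimal FSDP sketch; the model, optimizer, and loop below are placeholders
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    # torchrun (or the launcher on the pod) sets RANK / WORLD_SIZE / LOCAL_RANK
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # placeholder model; a real job would load the model you selected
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

    # FSDP shards parameters, gradients, and optimizer state across all ranks
    model = FSDP(model, device_id=local_rank)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # toy training step; the real fine-tuning loop runs here
    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

launched with something like `torchrun --nproc_per_node=<num_gpus> train.py`, one process per GPU on the pod.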
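if you'd rather poll status from a script than watch the page, here's a rough sketch using the runpod python SDK (the pod id and the `desiredStatus` field name are assumptions, so check the SDK docs):

```python
# rough status-polling sketch; POD_ID and the 'desiredStatus' field are
# assumptions, verify them against the runpod SDK / API docs
import time

import runpod

runpod.api_key = "YOUR_RUNPOD_API_KEY"
POD_ID = "your-pod-id"  # hypothetical id of the training pod

while True:
    pod = runpod.get_pod(POD_ID)  # fetch current pod metadata
    status = pod.get("desiredStatus", "UNKNOWN")
    print(f"pod {POD_ID}: {status}")
    if status in ("EXITED", "TERMINATED"):  # stop once the pod is done
        break
    time.sleep(30)
```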