Hi everyone, I’ve been working hard to set up an environment to train a Piper voice model, but I’ve run into quite a few challenges—even with the help of ChatGPT.

My goal is to train a model using a very small dataset consisting only of spoken phonetic alphabet words (Alpha, Bravo, Charlie, etc.) and digits from 0 to 9. The idea is to create a lightweight voice model for specific use cases like ham radio or tactical speech synthesis.

So far, I’ve managed to get the training process running twice—once in a Windows environment and once in Ubuntu via RunPod cloud. Both setups took several hours to configure, and while training technically completed, the resulting models either failed to export properly or produced no usable audio output.

Given the complexity of the setup and the number of moving parts, I’m wondering if anyone in the community could help in one of the following ways:

🐳 Provide a Docker image with a fully working training environment

🛠️ Run my small dataset in your working setup to produce a usable model

📄 Share any updated documentation or tips for reliably exporting trained models

I’d really appreciate any support or guidance. I’m confident this is doable, and I’d love to contribute back once I get it working. Thanks in advance!

Kari
Replies: 1 comment
Replying to myself. It was already there and ready to be used if I had just read the README properly: https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/TRAINING.md
Training now with: ifansnek/piper-train-docker:latest