Livepeer AI (SPE) changelog
New Audio Pipeline and Major Fixes 🛠️

Over the past two months, we've made significant strides in the AI subnet that we're thrilled to share. We've kicked off collaborations with 7 startups as design partners and successfully launched our bounty program in partnership with the ecosystem team. Additionally, we have been diligently working towards a mainnet release and have implemented major architectural changes, including the AI remote worker, external container release, and pipeline generalization. Given the scale of these changes, we will be rolling them out over several releases in the coming months, with this one being the first.
Here’s a snapshot of what’s new:
Critical Fixes and Metrics: Several critical bugs have been addressed, and AI gateways now expose metrics to track their AI operations more effectively.
Audio-to-Text Pipeline: We're on track to release our audio-to-text pipeline, powered by Whisper, to the AI subnet. This will support two of our startups and should significantly boost demand on the AI subnet.
Experimental SDK release: We have implemented experimental SDKs and tested them with our partners. These releases will be published to package repositories like PyPI and npm later this month.
The next release will involve syncing the AI-video branch with the main branch to enable gateways to filter orchestrators by go-livepeer version, helping to prevent potential breaking issues. Stay tuned for more updates as we continue to enhance and optimize the AI subnet ⚡!
We are thrilled to announce that as of this week, a dedicated community member, @interptr, has joined our team full-time to drive the AI roadmap forward 🚀.
Main Changes
Features
Add Upscale pipeline - @Livepeer.cloud (collab 🫂)
Implement new audio-to-text pipeline (see the request sketch after this list).
Provide dApps with NSFW warnings for T2I, I2I, I2V and Upscale pipelines.
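To give a feel for the new audio-to-text pipeline, here is a minimal sketch of a transcription request against an AI gateway. The `/audio-to-text` route, the gateway address, and the `openai/whisper-large-v3` model ID are illustrative assumptions, not confirmed API details.

```python
# Hypothetical request to an AI gateway's audio-to-text route.
# The route, host, and model ID below are illustrative assumptions.
import requests

GATEWAY_URL = "http://localhost:8935"  # assumed gateway address

with open("speech.mp3", "rb") as audio_file:
    resp = requests.post(
        f"{GATEWAY_URL}/audio-to-text",
        files={"audio": audio_file},
        data={"model_id": "openai/whisper-large-v3"},
    )

resp.raise_for_status()
print(resp.json())  # expected to contain the transcribed text
```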
Improvements
Take inference request latency into account in the selection algorithm (a conceptual sketch follows this list).
Allow users to use `--gateway` instead of `--broadcaster`.
Rename the `pricePerBroadcaster` flag to `pricePerGateway`.
Remove the `pricePerUnit` dependency for AI orchestrators.
Provide a way to specify the AI Runner container version.
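To illustrate the first item above, here is a toy sketch of latency-aware selection. This is not go-livepeer's actual selection code; the weighting scheme, names, and numbers are made up purely to show the idea of favoring orchestrators with faster inference responses.

```python
# Toy illustration only: pick an orchestrator with probability weighted by
# stake and observed inference latency (faster -> more likely to be chosen).
# Not the actual go-livepeer selection algorithm.
import random

orchestrators = [
    # (name, stake, average inference latency in seconds) -- made-up numbers
    ("orch-a", 1000, 0.8),
    ("orch-b", 1500, 2.5),
    ("orch-c", 500, 0.4),
]

def weight(stake: float, latency_s: float) -> float:
    # Higher stake helps, higher latency hurts.
    return stake / (1.0 + latency_s)

weights = [weight(stake, lat) for _, stake, lat in orchestrators]
chosen = random.choices(orchestrators, weights=weights, k=1)[0]
print("selected:", chosen[0])
```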
Model Features
Add support for the RealVisXL model.
Add support for Stable Diffusion 3.
Add Pix2Pix support - @Mike | Xodeapp.
Pipeline Improvements
Add multiple-prompt support to the T2I pipeline.
Add the `num_inference_steps` parameter to the T2I endpoint - @Mike | Xodeapp (see the request sketch after this list).
Make `num_inference_steps` configurable in ai-worker's I2I and I2V pipelines - @Jason | everestnode (bounty 🪙).
Enable configuration of `num_inference_steps` on the go-livepeer side for I2I, I2V, and Upscale pipelines.
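As a concrete illustration of the `num_inference_steps` changes above, here is a sketch of a text-to-image request that sets the parameter explicitly. The `/text-to-image` route, the field names (including the per-image `nsfw` warning flag mentioned under Features), and the model ID are assumptions for illustration rather than a definitive API reference.

```python
# Hypothetical text-to-image request with an explicit num_inference_steps.
# Route, field names, and model ID are illustrative assumptions.
import requests

GATEWAY_URL = "http://localhost:8935"  # assumed gateway address

resp = requests.post(
    f"{GATEWAY_URL}/text-to-image",
    json={
        "model_id": "ByteDance/SDXL-Lightning",
        "prompt": "a lighthouse at dusk, cinematic lighting",
        "num_inference_steps": 6,  # fewer steps -> faster, lower fidelity
    },
)
resp.raise_for_status()

for image in resp.json().get("images", []):
    # Assumed response shape: each image carries a URL plus an NSFW warning flag.
    print(image.get("url"), "nsfw:", image.get("nsfw"))
```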
Bug Fixes
Resolve T2I and I2I output truncation with non-empty seed and batch size > 1.
Process T2I batches sequentially to avoid CUDA memory errors (see the sketch after this list).
Throw error for empty ai-runner response.
Fix `nil` pointer runtime error in I2V.
Fix Hugging Face login token not found error.
Ensure I2I latency score takes number of images into account.
Fix incorrect latencyScore for ByteDance/SDXL-Lightning model.
Fix pipeline multipart writers.
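Regarding the sequential T2I batch fix noted above, the sketch below shows the general pattern of trading throughput for bounded GPU memory by running one prompt at a time instead of a single batched call. It uses the public diffusers API with an arbitrary model; it is not the ai-runner code itself.

```python
# Illustrative pattern only (not ai-runner code): generate a "batch" of images
# one prompt at a time so peak CUDA memory stays roughly constant.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

prompts = ["a red fox in snow", "a sailboat at sunrise", "a neon city street"]
images = []
for prompt in prompts:
    # Batching all prompts in one call multiplies activation memory by the
    # batch size; looping keeps usage close to the single-image case.
    result = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0.0)
    images.extend(result.images)
```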
Metrics
Rename census Broadcaster metrics to Gateway.
Documentation
Replace `--broadcaster` flag with `--gateway` in AI subnet documentation.
Add `num_inference_steps` to T2I API reference documentation.
Replace Broadcaster with Gateway in the livepeer.org docs.
Reduce documentation showcase image sizes to improve loading times.
Create documentation for the audio-to-text pipeline.
Improve pipeline optimization documentation.
Lots of small documentation fixes.
More improvements & fixes
Create Ruby client SDK.
Create JavaScript client SDK.
Create TypeScript client SDK.
Create Python client SDK.
Create Golang client SDK.
Add AI apps and tools to Livepeer Awesome list.
Create Livepeer AI Dune dashboard.
Improve CI GitHub Actions to streamline Docker and binary releases.