Livepeer AI (SPE)
Roadmap
Planned
Implement the C2PA spec in our pipelines
To ensure content provenance, we should integrate the C2PA specification into our pipelines.
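As a rough illustration of what a pipeline could emit, the sketch below assembles a C2PA-style actions assertion for an AI-generated output. The assertion label (`c2pa.actions`), the `c2pa.created` action, and the IPTC digital source type are taken from the C2PA specification, but the function and field layout here are illustrative only — a real integration must embed assertions in a signed manifest (JUMBF) via a conforming C2PA SDK, not plain JSON.

```python
import hashlib
import json


def build_provenance_assertion(media_bytes: bytes, model_id: str) -> dict:
    """Sketch of a C2PA-style actions assertion for AI-generated media.

    Illustrative only: a real integration wraps assertions in a signed
    C2PA manifest produced by a conforming SDK.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()
    return {
        "label": "c2pa.actions",
        "data": {
            "actions": [
                {
                    "action": "c2pa.created",
                    # IPTC digital source type for fully AI-generated media
                    "digitalSourceType": (
                        "http://cv.iptc.org/newscodes/digitalsourcetype/"
                        "trainedAlgorithmicMedia"
                    ),
                    "softwareAgent": model_id,  # e.g. the diffusion model used
                }
            ]
        },
        # Hash binding the assertion to the generated content (hypothetical field)
        "content_sha256": digest,
    }


assertion = build_provenance_assertion(b"<image bytes>", "stabilityai/sd-turbo")
print(json.dumps(assertion, indent=2))
```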
Backlog
Add custom container support
After the AI subnet is stable, we should add support for additional custom containers.
Communicate used model parameters back to Gateway
Provide gateways with a way to see which parameters were used during inference.
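One possible shape for this, sketched below: the worker resolves request overrides against pipeline defaults and echoes the resolved set back in the response. The `used_parameters` field name and the default values are assumptions for illustration, not the current protocol.

```python
# Hypothetical pipeline defaults; real values live in the pipeline config.
DEFAULTS = {"num_inference_steps": 50, "guidance_scale": 7.5, "seed": None}


def run_inference(request_params: dict) -> dict:
    # Resolve the parameters actually used: request values override defaults.
    used = {**DEFAULTS, **request_params}
    result = {"images": ["<output placeholder>"]}  # stand-in for pipeline output
    # Echo the resolved parameters so the Gateway can log and verify them.
    result["used_parameters"] = used
    return result


response = run_inference({"seed": 42})
print(response["used_parameters"])
```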
Add communication protocol for long-running requests
For long-running requests, providing users with intermediate progress updates would be beneficial.
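A minimal sketch of such a protocol, assuming Server-Sent Events as the transport; the wire format and field names below are hypothetical, not a settled design.

```python
import json
from typing import Iterator


def progress_events(total_steps: int) -> Iterator[str]:
    """Yield SSE-formatted progress updates for a long-running job."""
    for step in range(1, total_steps + 1):
        # ... run one denoising/encoding step of the real pipeline here ...
        payload = {
            "step": step,
            "total": total_steps,
            "pct": round(100 * step / total_steps),
        }
        yield f"data: {json.dumps(payload)}\n\n"
    # Final event signals completion so clients can stop listening.
    yield "data: " + json.dumps({"status": "done"}) + "\n\n"


for event in progress_events(4):
    print(event, end="")
```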
Implement AI job verification
To ensure network quality on the AI subnet, we should implement job result verification.
Network Analytics
Provide Gateways with on connection runner GPU stats
INC asked us to provide the gateway with GPU stats of the runner containers. This data can be used by their aggregator.
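A possible starting point, sketched below: query per-GPU utilization and memory with `nvidia-smi` and forward the parsed rows when the connection is established. The `--query-gpu` and `--format=csv,noheader,nounits` flags are real nvidia-smi options; how the stats are attached to the connection handshake is left open here.

```python
import csv
import io
import subprocess

QUERY = "index,name,utilization.gpu,memory.used,memory.total"


def parse_gpu_stats(csv_text: str) -> list[dict]:
    """Parse `nvidia-smi --query-gpu` CSV output into per-GPU dicts."""
    reader = csv.reader(io.StringIO(csv_text.strip()))
    fields = QUERY.split(",")
    return [dict(zip(fields, (v.strip() for v in row))) for row in reader]


def collect_gpu_stats() -> list[dict]:
    # Requires nvidia-smi on the runner host; called once per connection.
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_stats(out)


# Example of the parsing step against a captured sample line:
sample = "0, NVIDIA RTX 4090, 37, 8123, 24564"
print(parse_gpu_stats(sample))
```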
Improve Developer Experience
Create Generic inference job type
We should create a generic inference job type shared across pipelines.
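One way to picture such a type (names and fields below are hypothetical): a single job envelope any pipeline can accept, with pipeline-specific knobs pushed into an open `params` map.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class InferenceJob:
    """Hypothetical generic job envelope shared by all pipelines."""

    pipeline: str  # e.g. "text-to-image", "audio-to-text"
    model_id: str  # model the orchestrator should load
    params: dict[str, Any] = field(default_factory=dict)  # pipeline-specific options


# The same envelope carries jobs for different pipelines:
t2i = InferenceJob("text-to-image", "stabilityai/sd-turbo", {"seed": 7})
a2t = InferenceJob("audio-to-text", "openai/whisper-large-v3")
```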
Batch Jobs Support
Ensure container cleanup on AI worker shutdown
We should clean up the started containers and release their VRAM when the worker shuts down.
Issues
In Review
Address silent pipeline container crash when a model is not found
In Progress
Add Frame Interpolation in go-livepeer
Add GPU selection parameter for orchestrator model loading
Ensure JSON string format works in `aiModels`
Todo
Black images returned from ai-runner
Create an AI orchestrator control dashboard
Backlog
Communicate AI-Runner errors to the Orchestrator
Powered by Productlane