Distributed Training
Subnet ID: 38
Coldkey: 5EFJ7mEVyjxT3iXqxuHyMMuGgJKFt85tb5s4vEvnCTpSoP3w
Our proposed solution is a subnetwork that incentivises Compute, Bandwidth and Latency. The compute helps power the training of each miner's local version of the model, while the bandwidth and latency help power the averaging of each miner's local model weights using an operation called butterfly all-reduce. Once this process is successfully completed, each miner holds a unified, globally averaged gradient that it can use to update its model weights.
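To illustrate the idea, the sketch below simulates one common form of butterfly (recursive-doubling) all-reduce in plain Python. It is not the subnet's actual protocol or codebase; the function name, the in-memory exchange, and the power-of-two miner count are assumptions made for illustration. After log2(n) pairwise exchange rounds, every miner ends up holding the same global average of the local gradients.

```python
import numpy as np

def butterfly_all_reduce(local_grads: list[np.ndarray]) -> list[np.ndarray]:
    """Illustrative butterfly (recursive-doubling) all-reduce over in-memory arrays."""
    n = len(local_grads)
    assert n & (n - 1) == 0, "this sketch assumes a power-of-two number of miners"
    buffers = [g.copy() for g in local_grads]
    step = 1
    while step < n:
        # In each round, miner i exchanges its partial sum with miner i XOR step,
        # and both keep the combined result.
        buffers = [buffers[i] + buffers[i ^ step] for i in range(n)]
        step *= 2
    # After log2(n) rounds every buffer holds the full sum; divide to get the average.
    return [b / n for b in buffers]

# Example: 4 miners with different local gradients all converge to the same mean.
grads = [np.random.randn(3) for _ in range(4)]
averaged = butterfly_all_reduce(grads)
assert all(np.allclose(a, np.mean(grads, axis=0)) for a in averaged)
```

In a real deployment the exchanges happen over the network rather than in shared memory, which is why the subnet rewards bandwidth and latency alongside compute.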
Validators: 8
Emission: 0.23%
Miners: 224
GitHub Contribution Activity
606 contributions in the last year