
τemplar

Subnet ID: 3

Coldkey: 5G26HqQg8M6hfw9q84gM3udYHHymThmswRKgSGtwdcduBSos

Distributed Training · Decentralized Training

τemplar is a decentralized training framework enabling large-scale AI model training across heterogeneous compute resources over the internet. By connecting diverse computational nodes through carefully designed incentive mechanisms, it makes collaborative model training possible while ensuring honest participation. The system coordinates miners who train on data subsets with validators who evaluate contribution quality and assign rewards accordingly.

Read White Paper
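To make the division of labor concrete, here is a minimal, self-contained toy in Python: miners compute gradients on private data shards, and a validator scores each gradient by the validation-loss improvement it yields. The linear "model", shard sizes, and function names are illustrative assumptions, not Templar's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real system: a linear model instead of an LLM,
# and random data shards instead of assigned dataset subsets.
w = np.zeros(8)                                    # shared model parameters
X_val, y_val = rng.normal(size=(64, 8)), rng.normal(size=64)

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def miner_gradient(w, X, y):
    """A miner trains on its private data subset and publishes a gradient."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Three miners, each holding a different data shard.
shards = [(rng.normal(size=(32, 8)), rng.normal(size=32)) for _ in range(3)]
grads = [miner_gradient(w, X, y) for X, y in shards]

# The validator scores each contribution by how much it improves held-out loss;
# these scores then determine each miner's share of the reward.
base = loss(w, X_val, y_val)
scores = [base - loss(w - 0.01 * g, X_val, y_val) for g in grads]
print(scores)
```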
Price: 0.0258 TAO
Emission: 2.58%
Validators: 3
Miners: 253
Deregistration Status

This subnet is currently safe from deregistration.

EMA Price: 0.031720 TAO
Deregistration Rank: #106
GitHub Contribution Activity

1221 contributions in the last year (weekly contribution heatmap, Feb through Jan, omitted).


Featured Applications
Discover top applications built on this subnet
Arcana (New)

Arcana is a specialized AI assistant (a tailored version of ChatGPT) designed to support users working with Templar, a framework for decentralized, permissionless training of large language models (LLMs). It serves as an expert guide to the Gauntlet incentive mechanism, distributed training protocols, and blockchain coordination systems underpinning Templar.

Gauntlet (New)

Gauntlet, developed by Templar AI, is an incentive mechanism that enables permissionless, decentralized LLM training by rewarding global contributors with tokens in proportion to their computational contributions, as demonstrated by a 1.2B-parameter training run on the Bittensor blockchain.
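A sketch of the proportional-reward idea in Python; the function name, score inputs, and emission parameter are hypothetical, chosen only to illustrate splitting tokens by contribution, not Gauntlet's actual formula.

```python
def assign_rewards(scores: dict[str, float], emission: float) -> dict[str, float]:
    """Split one window's token emission among miners in proportion to
    validator-assessed contribution scores; non-positive scores earn nothing."""
    clipped = {uid: max(score, 0.0) for uid, score in scores.items()}
    total = sum(clipped.values())
    if total == 0.0:
        return {uid: 0.0 for uid in scores}
    return {uid: emission * score / total for uid, score in clipped.items()}

# Example: miner_c submitted a harmful gradient and receives no reward.
print(assign_rewards({"miner_a": 0.7, "miner_b": 0.3, "miner_c": -0.1}, emission=100.0))
# {'miner_a': 70.0, 'miner_b': 30.0, 'miner_c': 0.0}
```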

Key Features

Decentralized Training: large-scale model training across heterogeneous internet compute resources, with fair participation.

Incentive-Driven: a reward system that encourages quality contributions, with built-in mechanisms to prevent gaming.

Gradient Compression: DCT-based top-k compression for efficient gradient sharing and reduced communication overhead (see the first sketch below).

Synchronized Windows: coordinated training cycles in which miners and validators work in sync, guided by blockchain blocks (see the second sketch below).
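A minimal sketch of DCT-based top-k gradient compression, assuming SciPy's DCT over a flattened gradient; the transform layout, choice of k, and wire format in Templar may well differ.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_gradient(grad: np.ndarray, k: int):
    """DCT-transform a gradient, then keep only the k largest coefficients."""
    coeffs = dct(grad.ravel(), norm="ortho")          # frequency-domain view
    idx = np.argpartition(np.abs(coeffs), -k)[-k:]    # top-k by magnitude
    return idx, coeffs[idx], grad.shape               # compact payload to share

def decompress_gradient(idx, values, shape):
    """Rebuild an approximate dense gradient from the retained coefficients."""
    coeffs = np.zeros(int(np.prod(shape)))
    coeffs[idx] = values
    return idct(coeffs, norm="ortho").reshape(shape)

g = np.random.default_rng(1).normal(size=(4, 4))
idx, vals, shape = compress_gradient(g, k=4)          # 4 of 16 coefficients kept
approx = decompress_gradient(idx, vals, shape)        # lossy reconstruction of g
```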
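And a sketch of block-synchronized windows: every participant maps the chain's current block number to the same window index, so training cycles stay aligned without a central clock. The window length is an illustrative constant, not Templar's actual setting.

```python
WINDOW_LENGTH = 100  # blocks per training window; illustrative, not Templar's value

def window_for_block(block_number: int) -> int:
    """All miners and validators observing the same block agree on the window,
    so gradient exchange and scoring proceed in lockstep."""
    return block_number // WINDOW_LENGTH

assert window_for_block(1234) == 12   # blocks 1200-1299 share window 12
```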