Protocol Learning: Decentralized Collaborative Learning at Scale
A full-day workshop that defined the emerging paradigm of Protocol Learning - decentralized, communication-efficient, model-parallel training of foundation models. The event brought together researchers from academia and industry to explore the open problems shaping collaborative AI.
Protocol Learning
Training frontier foundation models today demands massive, co-located clusters of high-end GPUs - accessible only to a handful of the most well-resourced organizations. Protocol Learning removes this co-location requirement, enabling multi-participant training of foundation models across open, permissionless networks of globally distributed compute, where no single participant has, or can ever obtain, a full copy of the model.
Achieving this requires solving hard open problems in low-bandwidth model parallelism, asynchronous distributed optimization, heterogeneous hardware support, fault-tolerant training systems, Byzantine robustness, and trustless verification. This workshop convenes the researchers advancing these building blocks to define the challenges ahead and chart a research roadmap for training the next generation of community-owned frontier models with self-sustaining economics.
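The core structural idea, that each participant holds only a shard of the model and only activations cross the network, can be sketched minimally. This is an illustrative toy, not an implementation from the workshop; the participant names and the scalar "layers" are hypothetical stand-ins for real model shards.

```python
# Illustrative sketch (assumptions: toy scalar "layers" stand in for model
# shards; participant names are hypothetical). Each participant keeps its
# parameters private; the pipeline exchanges only activations, so no single
# party ever assembles a full copy of the model.

class Participant:
    def __init__(self, name, weight, bias):
        self.name = name
        self.weight = weight  # this participant's private shard
        self.bias = bias

    def forward(self, x):
        # Apply only the locally held layer; weights never leave this node.
        return self.weight * x + self.bias


def pipeline_forward(participants, x):
    """Run an input through the chain of shards; only activations move."""
    for p in participants:
        x = p.forward(x)
    return x


pipeline = [Participant("alice", 2.0, 0.0),
            Participant("bob", 3.0, 1.0)]
print(pipeline_forward(pipeline, 1.0))  # (2*1+0)=2, then (3*2+1)=7
```

In a real Protocol Learning setting the `forward` calls would cross an open network between untrusted nodes, which is where the listed problems of low-bandwidth communication, fault tolerance, and trustless verification arise.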