Protocol Learning: Decentralized Collaborative Learning at Scale
A full-day workshop defining the emerging paradigm of Protocol Learning - decentralized, communication-efficient, model-parallel training of foundation models. Join leading researchers from academia and industry to explore the open problems shaping the future of collaborative AI.
Protocol Learning
Training frontier foundation models today demands massive, co-located clusters of high-end GPUs - accessible only to a handful of the most well-resourced organizations. Protocol Learning removes this co-location requirement, enabling multi-participant training of foundation models across open, permissionless networks of globally distributed compute, where no single participant has, or can ever obtain, a full copy of the model.
This requires solving hard open problems in low-bandwidth model parallelism, asynchronous distributed optimization, heterogeneous hardware support, fault-tolerant training systems, Byzantine robustness, and trustless verification. This workshop convenes the researchers advancing these building blocks to define the challenges ahead and chart a research roadmap for training the next generation of community-owned frontier models with self-sustaining economics.
Workshop Schedule
Registration and Breakfast
Check-in, coffee, and networking
Unextractable Protocol Models
Dr. Alexander Long
TBD
Prof. Nic Lane
Coffee Break
Communication-Efficient Training in the Era of Reinforcement Learning
Dr. Max Ryabinin
Mitigating Staleness in Asynchronous Pipeline Parallelism via Basis Rotation
Prof. Namhoon Lee
Lunch Break
Buffet lunch and networking
Communication-Efficient Model-Parallel Training
Dr. Sameera Ramasinghe
Lightning Talks
Coffee Break
Attacks and Defenses in Decentralised Collaborative Learning
Dr. Oguzhan Ersoy
Beyond Open Weights: Why Frontier AI Must Be Collectively Built, Trustlessly Owned, and Sovereign by Design
Riccardo Patana
Closing Remarks
Dr. Alexander Long
Hosted by Pluralis Research
Pluralis Research is an AI research organization advancing the field of Protocol Learning - decentralized, communication-efficient, model-parallel training of foundation models. The team has published peer-reviewed papers at NeurIPS, ICML, and ICLR on communication-efficient model parallelism, asynchronous pipeline-parallel optimization, unextractable protocol models, and stable large-scale transformer training. With Node0, the team also demonstrated the world's first public model-parallel training run over the internet. Pluralis is backed by Union Square Ventures.
For questions about the workshop, reach out at events@pluralis.ai.