MEERKAT algorithm drastically cuts the communication cost of federated learning, where multiple parties train a shared LLM without exchanging private raw data
https://www.eurekalert.org/news-releases/1126798 "federated learning requires constantly sending/receiving the entire updated model (often GBs) to a central server, even though each round changes only a few entries... MEERKAT overcomes this by identifying/sharing updates for only the most critical 0.1% of parameters, cutting data transmission >1,000x and turning GB-sized updates into a few MB... (1) trial-and-error tuning: tweak the model slightly and check the result directly, bypassing backpropagation, (2) more frequent synchronization, affordable since the data packets are so tiny... lets small research groups collaborate without a massive data center"
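The two mechanisms can be sketched in a few lines of numpy. This is a hedged illustration, not MEERKAT's actual implementation: the quadratic loss, the dimensions, the `zo_directional_grad`/`topk_update` helpers, and the 16-byte index/value encoding are all assumptions; the "tweak slightly and check directly" step is rendered as a standard zeroth-order central finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model: least-squares loss on a linear map.
# Dimensions and data are illustrative assumptions, not from the article.
d, n = 100_000, 32
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)
w = np.zeros(d)

def loss(w):
    return np.mean((X @ w - y) ** 2)

def zo_directional_grad(w, z, eps=1e-3):
    """Mechanism (1): two forward passes, no backpropagation.
    Tweak the model slightly along a random direction z and check the
    loss directly; the central difference estimates the directional
    derivative (exact here, since this toy loss is quadratic in w)."""
    return (loss(w + eps * z) - loss(w - eps * z)) / (2 * eps)

# Build a full-gradient estimate from a handful of random directions.
m = 8
g_hat = np.zeros(d)
for _ in range(m):
    z = rng.normal(size=d)
    g_hat += zo_directional_grad(w, z) * z
g_hat /= m

def topk_update(g, frac=0.001):
    """Mechanism (2): keep only the largest-magnitude 0.1% of entries;
    a client transmits just these (index, value) pairs instead of the
    dense update vector."""
    k = max(1, int(frac * g.size))
    idx = np.argpartition(np.abs(g), -k)[-k:]
    return idx, g[idx]

idx, vals = topk_update(g_hat)

# Naive encoding: (int64 index, float64 value) pairs vs a dense float64
# vector. Shared RNG seeds or bit-packed indices would shrink this
# further, toward the >1,000x reported in the article.
dense_bytes = d * 8
sparse_bytes = len(idx) * (8 + 8)
print(f"{len(idx)} entries sent, {dense_bytes // sparse_bytes}x smaller")
```

The finite-difference estimate is exact here only because the toy loss is quadratic; for a real network it is an approximation, which is why methods in this family average over many random perturbations, and why tiny sparse packets make the more frequent synchronization in point (2) affordable.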