Workshop: Collaborative Perception & Federated ML for Autonomous Driving
Link : https://sites.google.com/view/cofed-d…
Speaker : Nadav Tal-Israel, CTO and Co-Founder, Edgify
Abstract : In recent years, Large Batch training and Federated Learning have emerged as ways to train models in a distributed manner over edge devices, keeping the data on the devices themselves. This holds immense promise for extending Machine Learning to scenarios that are constrained by data privacy requirements, or that simply offer vast data and computational power in this distributed form. There is, however, no straightforward way to turn any classical ML/DL system into such an edge-distributed one. In this talk, we will cover a few of the topics and challenges we've encountered on our way towards a more systematic solution:
1) Large Batch vs. Federated Learning: Large batch training is the classical training method adapted to the distributed case. This adaptation does not always fit, for example in scenarios where the internet connection is only intermittently available. Federated Learning, a more radical solution offered for this kind of scenario, aims to save on communication rounds. This does not always translate into a reduction in the amount of data transmitted, however, and it brings in new problems of its own. We will give a basic map of the tradeoff landscape.
2) Compression: With communication bandwidth being the bottleneck in many edge-device-powered scenarios, various compression methods have been suggested. However, they can often be detrimental, or even destructive, to the learning task.
3) Non-IID data distributions: When substantial independent training is done on each edge device, the question of whether its local data represents the overall data becomes critical. This is a central challenge that may determine whether Federated Learning can succeed at all, but it also raises finer issues regarding certain architectural components, which have to be adapted to this unique learning scheme.
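To make the tradeoff in topic 1 concrete, here is a minimal NumPy sketch (our own toy linear-regression setup, not code from the talk) contrasting one synchronous large-batch step, which communicates gradients on every step, with one FedAvg-style round, which communicates weights only after several local steps:

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the mean squared error 0.5 * mean((X @ w - y)**2).
    return X.T @ (X @ w - y) / len(y)

def large_batch_step(w, clients, lr=0.1):
    # Large-batch / synchronous distributed SGD: on every step, each client
    # sends its local gradient and the server applies their average once.
    grads = [local_gradient(w, X, y) for X, y in clients]
    return w - lr * np.mean(grads, axis=0)

def fedavg_round(w, clients, lr=0.1, local_steps=5):
    # Federated-Averaging-style round: each client runs several local SGD
    # steps on its own data, then the server averages the resulting weight
    # vectors. Fewer communication rounds, but each round still ships a
    # full model-sized vector per client.
    local_ws = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_local = w_local - lr * local_gradient(w_local, X, y)
        local_ws.append(w_local)
    return np.mean(local_ws, axis=0)
```

Note that per round, the FedAvg-style scheme transmits roughly the same number of floats as a single large-batch step (one model-sized vector per client), which illustrates the abstract's point: saving rounds does not automatically mean transmitting less data overall.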
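For topic 2, one commonly suggested family of compression methods is top-k gradient sparsification. The sketch below (our own illustration, not the speaker's implementation) shows both the mechanism and the caveat: coordinates that are individually small, but may still carry signal the task needs, are simply dropped.

```python
import numpy as np

def topk_sparsify(grad, k):
    # Keep only the k largest-magnitude entries of the gradient and zero
    # out the rest, so only k (index, value) pairs need to be transmitted.
    # In practice the dropped residual is usually accumulated locally
    # ("error feedback") so it is not lost forever.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

grad = np.array([0.1, -5.0, 0.3, 2.0, -0.2])
compressed = topk_sparsify(grad, k=2)
# Only the two largest-magnitude coordinates survive compression.
```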
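For topic 3, normalization layers such as BatchNorm are one concrete example of an architectural component that is sensitive to non-IID data (the abstract does not name a specific one; this choice and the toy numbers below are ours). The demo shows why: statistics estimated on a client's skewed local data can be far from the statistics of the pooled data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two clients with skewed ("non-IID") local data: each one only ever
# sees a different mode of the overall distribution.
client_a = rng.normal(loc=-2.0, scale=1.0, size=1000)
client_b = rng.normal(loc=+2.0, scale=1.0, size=1000)
pooled = np.concatenate([client_a, client_b])

# Statistics a BatchNorm-like layer would estimate: each client's local
# mean sits far from the pooled mean, and the pooled variance is much
# larger than either local variance, so locally estimated normalization
# statistics do not recover the global ones.
mean_a, mean_pooled = client_a.mean(), pooled.mean()
var_a, var_pooled = client_a.var(), pooled.var()
```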