Poster
SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference
Alind Khare · Animesh Agrawal · Aditya Annavajjala · Payman Behnam · Myungjin Lee · Hugo M Latapie · Alexey Tumanov
# 11
Neural Architecture Search (NAS) for Federated Learning (FL) is an emerging field. It automates the design and training of Deep Neural Networks (DNNs) when data cannot be centralized due to privacy, communication costs, or regulatory restrictions. Recent federated NAS methods not only reduce manual design effort but also achieve higher accuracy than traditional FL methods such as FedAvg. Despite this success, existing federated NAS methods fail to satisfy the diverse deployment targets common in on-device inference, such as different hardware, latency budgets, or variable battery levels. Most federated NAS methods search over only a limited range of architectural patterns and repeat the same pattern throughout a DNN, which hurts performance. Moreover, these methods incur prohibitive training costs to satisfy deployment targets, since they repeat the training and search of DNN architectures for each target. We propose SuperFedNAS to address these challenges. SuperFedNAS decouples training from search in federated NAS: it co-trains a large number of diverse DNN architectures contained inside one supernet in the FL setting. Post-training, clients perform NAS locally to find specialized DNNs by extracting different parts of the trained supernet, with no additional training. SuperFedNAS thus takes O(1) (instead of O(N)) training cost to find specialized DNN architectures in FL for any N deployment targets. As part of SuperFedNAS, we introduce MaxNet, a novel FL training algorithm that performs multi-objective federated optimization of a large number of DNN architectures (≈5×10^18) under different client data distributions. Overall, SuperFedNAS achieves up to 37.7% higher accuracy for the same MACs, or up to 8.13× fewer MACs for the same accuracy, compared to existing federated NAS methods.
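To make the decoupling concrete, the toy Python sketch below illustrates the train-once, search-many pattern the abstract describes: a supernet is federated-trained a single time, and each deployment target then runs a cheap local search over subnets extracted from it, with no retraining. This is not the authors' implementation; the function names, the fake (accuracy, MACs) scores, and the MACs budgets are all illustrative assumptions.

```python
# Toy sketch of decoupled training and search (illustrative only, not SuperFedNAS code).

def federated_train_supernet():
    """Stand-in for FL co-training of many subnets inside one supernet.
    Returns a map from subnet config -> (accuracy, MACs); values are made up."""
    return {
        ("depth", d, "width", w): (0.70 + 0.02 * d + 0.01 * w, 10 * d * w)
        for d in range(1, 5)
        for w in range(1, 5)
    }

def search_subnet(trained_supernet, macs_budget):
    """Local NAS step: pick the most accurate subnet that fits the MACs budget."""
    feasible = {k: v for k, v in trained_supernet.items() if v[1] <= macs_budget}
    if not feasible:
        return None
    return max(feasible.items(), key=lambda kv: kv[1][0])

# O(1) training cost: one trained supernet serves every deployment target,
# instead of repeating federated training per target (O(N)).
supernet = federated_train_supernet()
for budget in (20, 80, 160):  # N different deployment targets
    config, (acc, macs) = search_subnet(supernet, budget)
    print(f"budget={budget:>3} MACs -> subnet {config}, acc={acc:.2f}, macs={macs}")
```

In this sketch, adding a new deployment target only adds one cheap call to `search_subnet`; the expensive federated training is never repeated, which is the cost argument the abstract makes.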