6.2C Federated Learning among Multiple NWDAFs

3GPP TS 23.288: Architecture enhancements for 5G System (5GS) to support network data analytics services (Release 18)

6.2C.1 Description

This clause specifies how an NWDAF containing MTLF can leverage the Federated Learning technique to train an ML model. No input data transfer (e.g. centralization into one NWDAF) is needed; instead, multiple NWDAFs (MTLF) distributed in different areas cooperate, i.e. they share ML model(s) and learning results among each other.

6.2C.2 Procedures

6.2C.2.1 General procedure for Federated Learning among Multiple NWDAF Instances

Figure 6.2C.2.1-1: General procedure for Federated Learning among Multiple NWDAF

0. The consumer (NWDAF containing AnLF) sends a subscription request to the NWDAF containing MTLF to retrieve an ML model, including the Analytics ID and ML model filter information as described in TS 23.288 [5]. The NWDAF containing MTLF can be an FL server (Server NWDAF) with FL server capability or an MTLF without FL server capability.

Editor’s note: Procedure of MTLF registration and discovery with respect to FL is FFS.

Editor’s note: Definition of FL capability is FFS.

1. The Server NWDAF sends a request to each selected NWDAF containing MTLF (Client NWDAF) that participates in the Federated Learning to perform local model training for Federated Learning.

2. Each Client NWDAF collects its local data by using the current mechanism in clause 6.2 of TS 23.288 [5].

3. During the Federated Learning training procedure, each Client NWDAF further trains the ML model retrieved from the Server NWDAF based on its own local data, and reports the interim local ML model information to the Server NWDAF.

ML model information is exchanged between the Client NWDAF(s) and the Server NWDAF during the FL training process.

Editor’s note: The ML model information exchanged between the Client NWDAF(s) and the Server NWDAF is FFS.

Editor’s note: The services used in steps 1, 3, 5a and 6 to enable FL-based ML model training are FFS, and Figure 6.2C.2.1-1 will be updated accordingly.
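The local training and reporting of steps 2-3 can be sketched as follows. This is an illustrative example only: the function name, the linear model, the gradient-descent update and the reporting of a local sample count are assumptions for illustration, not behaviour defined by this specification.

```python
import numpy as np

def client_local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One round of local training at a Client NWDAF (illustrative only).

    Starting from the global model weights distributed by the Server NWDAF,
    trains a simple linear model on the client's local data (X, y) and
    returns the interim local ML model information: the updated weights
    plus the local sample count (usable for weighted aggregation).
    """
    w = global_weights.copy()
    n = len(y)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n   # gradient of the mean-squared error
        w -= lr * grad
    return w, n
```

The local data never leaves the client; only the interim model information (here, weights and a sample count) is reported back.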

4. The Server NWDAF aggregates all the local ML model information retrieved in step 3 to update the global ML model.
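The aggregation in step 4 is commonly realised as a sample-count-weighted average of the local models (Federated Averaging). A minimal sketch, assuming each Client NWDAF reports its model weights together with its local sample count; the helper name is hypothetical and the weighting scheme is one possible choice, not mandated by this specification.

```python
import numpy as np

def aggregate_global_model(local_updates):
    """Federated-Averaging-style aggregation at the Server NWDAF (sketch).

    local_updates: list of (weights, sample_count) pairs reported by the
    Client NWDAFs in step 3. Returns the sample-count-weighted average of
    the local weights as the updated global model.
    """
    total = sum(n for _, n in local_updates)
    return sum(w * (n / total) for w, n in local_updates)
```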

5a. Based on the consumer's request, the Server NWDAF reports the training status (e.g. an accuracy level) to the consumer, either periodically (e.g. after one or multiple rounds of training, or every 10 minutes) or dynamically when a pre-determined status (e.g. a certain accuracy level) is achieved.

5b. [Optional] The consumer decides whether the current model can fulfil its requirements, e.g. on accuracy and time. If it can, the consumer modifies the subscription accordingly.

Editor’s note: Further details on providing the accuracy of the FL model training process are FFS.

5c. According to the request from the consumer, the Server NWDAF updates or terminates the current FL training process.

6. If the FL procedure continues, the Server NWDAF sends the aggregated ML model information to each Client NWDAF for the next round of model training.

7. Each Client NWDAF updates its own ML model based on the aggregated ML model information distributed by the Server NWDAF in step 6.

NOTE: Steps 2-7 are repeated until a training termination condition is reached (e.g. the maximum number of iterations is reached, or the value of the loss function is below a threshold).
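The repetition of steps 2-7 under a termination condition can be sketched end to end as follows. The helper is hypothetical: it uses a simple linear model and a single gradient step per client per round purely for illustration, and the termination conditions (maximum number of rounds, loss threshold) mirror the examples in the NOTE above.

```python
import numpy as np

def run_fl_rounds(clients, init_weights, max_rounds=100,
                  loss_threshold=1e-4, lr=0.1):
    """Repeat FL rounds (steps 2-7) until a termination condition holds.

    clients: list of (X, y) local datasets, one per Client NWDAF.
    Stops when the global loss falls below loss_threshold, or after
    max_rounds iterations, whichever comes first.
    """
    w = init_weights.copy()
    for round_no in range(1, max_rounds + 1):
        updates = []
        for X, y in clients:                        # steps 2-3: local training
            w_local = w.copy()
            n = len(y)
            grad = X.T @ (X @ w_local - y) / n
            w_local -= lr * grad
            updates.append((w_local, n))
        total = sum(n for _, n in updates)          # step 4: aggregation
        w = sum(wl * (n / total) for wl, n in updates)
        loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
        if loss < loss_threshold:                   # NOTE: termination check
            break
    return w, round_no, loss
```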

After the training procedure is completed, the Server NWDAF may send the globally optimal ML model information to the consumer.