Abstract: Federated learning is a successful solution for sharing knowledge across data islands. However, with the advent of new attacks such as gradient inversion, the security of federated learning faces new challenges. This paper identifies an inter-generational model leakage problem under the asynchronous federated learning framework, which arises when participants maliciously attempt to steal gradient information from other clients by any available means. By exploiting the central server's receive-then-aggregate behavior, multiple colluding malicious clients can reversely compute other clients' model updates from inter-generational versions of the global model obtained in a specific update order. To address this problem, a random aggregation algorithm based on an α moving average is proposed. First, upon receiving each model update, the central server aggregates it with a global model randomly selected from the latest α aggregations, shuffling the clients' update order through this randomness. Second, as the number of global iterations increases, the central server applies a moving average over the most recently aggregated global models to compute the final global model. Simulation experiments show that the FedAlpha method effectively reduces the possibility of inter-generational model leakage in comparison with the standard asynchronous federated learning method.
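The two-step server behavior described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `FedAlphaServer`, the client/global mixing weight `mix`, and the flat-list model representation are all illustrative assumptions.

```python
import random
from collections import deque


class FedAlphaServer:
    """Sketch of random aggregation with an alpha moving average.

    Models are represented as flat lists of floats for simplicity.
    """

    def __init__(self, init_model, alpha=3, mix=0.5):
        self.mix = mix  # client/global mixing weight (assumed, not from the paper)
        # Keep only the latest `alpha` global models to sample from.
        self.history = deque([list(init_model)], maxlen=alpha)

    def receive_update(self, client_model):
        # Step 1: aggregate the incoming update with a RANDOMLY chosen
        # recent global model, so malicious clients cannot tell which
        # global version their (or another client's) update landed on.
        base = random.choice(self.history)
        new_global = [(1 - self.mix) * g + self.mix * c
                      for g, c in zip(base, client_model)]
        self.history.append(new_global)
        return new_global

    def final_model(self):
        # Step 2: moving average over the latest alpha global models
        # to produce the final global model.
        n = len(self.history)
        return [sum(ws) / n for ws in zip(*self.history)]
```

Because the base model for each aggregation is drawn at random from the last α versions, an attacker observing consecutive global models can no longer subtract adjacent versions to isolate a single client's update.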