Federated learning, owing to its distributed and privacy-preserving properties, is a promising solution to the data-silo problem in machine learning, yet it still faces many security threats, such as privacy leakage and potentially malicious users. In this paper, we propose FedAaT, a copyright protection framework for federated learning. Our framework employs group signatures for authentication and user traceability, and reduces the computational cost on the server by aggregating signatures. To prevent models from being maliciously distributed, we introduce a conditional loss function that embeds traceability into local models.

First, to address the difficulty of verifying identities in the anonymous setting of federated learning, we exploit the fact that group signatures provide both anonymity and verifiability: we integrate group signatures into the federated learning environment and propose FedAaT, a group-signature-based copyright protection method for federated learning models.

Second, to address the excessive authentication overhead in federated learning scenarios, we optimize the authentication process by introducing aggregate signatures, which reduce the computational cost of authentication by decreasing the number of signatures that must be verified.

Finally, we address the difficulty of tracing illegally distributed models. For a specific set of inputs, we assign each user a unique output sequence, embed it into the model through a conditional loss function, and trace a model by detecting its output sequence.

Experimental results show that FedAaT is effective for authentication and user traceability in federated learning while reducing computational cost.
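The embed-and-trace mechanism described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the names `trigger_loss`, `conditional_loss`, `trace_model`, and the weight `lam` are assumptions, the model is abstracted as a callable, and a real system would compute these losses over neural-network outputs during local training.

```python
def trigger_loss(model, trigger_inputs, assigned_sequence):
    # Mean squared error between the model's outputs on the trigger
    # inputs and the output sequence assigned to this user.
    preds = [model(x) for x in trigger_inputs]
    return sum((p - y) ** 2 for p, y in zip(preds, assigned_sequence)) / len(preds)

def conditional_loss(task_loss, model, trigger_inputs, assigned_sequence, lam=0.5):
    # Conditional loss: the ordinary task loss plus a weighted penalty
    # that pushes the model to reproduce the assigned sequence on the
    # trigger inputs, embedding a per-user fingerprint into the model.
    return task_loss + lam * trigger_loss(model, trigger_inputs, assigned_sequence)

def trace_model(model, trigger_inputs, user_sequences, threshold=0.9):
    # Trace a suspect model: compare its outputs on the trigger inputs
    # against every user's assigned sequence and return the best-matching
    # user if the match rate exceeds the threshold, else None.
    outputs = [model(x) for x in trigger_inputs]
    best_user, best_rate = None, 0.0
    for user, seq in user_sequences.items():
        rate = sum(o == y for o, y in zip(outputs, seq)) / len(seq)
        if rate > best_rate:
            best_user, best_rate = user, rate
    return best_user if best_rate >= threshold else None
```

For example, a model whose trigger outputs exactly match one user's assigned sequence incurs zero trigger loss during embedding and is traced back to that user during detection.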