Nowadays, both the privacy of the data that feeds information systems and the consequences of its improper handling are matters of concern. In recommender systems, mitigating the disclosure of private user data has therefore become a pressing issue, given the challenges it entails. While research on algorithmic strategies for synthesizing user ratings has addressed the concealment of user profiles in traditional recommender systems, less work has been done on the exposure of preferences and sensitive data in group recommender systems (GRS). In this paper, we evaluate two methods for generating synthetic ratings and how they perform when applied in group recommender systems. Specifically, we apply state-of-the-art privacy-preserving methods in a group recommender system to introduce synthetic data and assess whether, and to what extent, their behavior changes in terms of rating prediction. We did so by exploring several combinations of GRS configuration (group creation strategies, group sizes, individual rating prediction algorithms, and preference aggregation methods). Our results show that it was possible to conceal individual preferences, and thus preserve privacy, without degrading group recommendation performance. We also found that CART was the method whose synthesized data deviated least from the original data. We found no significant influence of the group creation strategy, group size, or individual rating prediction method. To the best of our knowledge, this work represents the first attempt to use partially synthetic group ratings in GRS.