Deep learning has the potential to improve and even automate the interpretation of biomedical images, making it more accessible, particularly in low-resource settings where human experts are scarce. Privacy concerns surrounding these images necessitate innovative approaches to model building, such as DIStributed COllaborative learning (DISCO), which allows several data owners (clients) to learn a joint model without sharing the original images. However, this black-box data can conceal systematic bias, compromising model performance and fairness. This work adapts an interpretable prototypical part learning network to the DISCO setting, enabling each client to visualize, on its own images, the differences between the features it has learned and those learned by other clients: comparing one client's 'This' with others' 'That'. We present a setting in which four clients collaborate to train two diagnostic classifiers on a benchmark chest X-ray dataset. In an unbiased setting, the global model reaches 74.14% balanced accuracy for cardiomegaly and 74.08% for pleural effusion. We then compare performance under systematic visual bias, where a confounding feature is associated with the label at one client. In this setting, the global models drop to near-random performance when applied to unbiased data. We demonstrate how differences between local and global prototypes can indicate the presence of bias and allow it to be visualized on each client's data without compromising privacy. We further show how these differences can guide model personalization.
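To make the local-versus-global prototype comparison described above concrete, the following minimal sketch shows how a single client could contrast the activations of its locally learned prototypes ("This") with those of the aggregated global prototypes ("That") on one of its own images, flagging prototypes with a large activation gap for inspection. The feature dimensions, the ProtoPNet-style log-distance similarity, and the flagging threshold are illustrative assumptions, not the paper's implementation.

# Minimal sketch, not the authors' implementation: compare local and global
# prototype activations on one of a client's own images.
import numpy as np

rng = np.random.default_rng(0)

D = 128        # patch / prototype feature dimension (assumed)
P = 10         # number of prototypes per class (assumed)
H, W = 7, 7    # spatial size of the convolutional feature map (assumed)

local_prototypes = rng.normal(size=(P, D))    # client's own prototypes ("This")
global_prototypes = rng.normal(size=(P, D))   # prototypes from the global model ("That")
feature_map = rng.normal(size=(H, W, D))      # features of one local image

def prototype_activations(prototypes, feature_map, eps=1e-4):
    """Best similarity of each prototype over all image patches (ProtoPNet-style log-distance)."""
    patches = feature_map.reshape(-1, prototypes.shape[1])                          # (H*W, D)
    dists = np.linalg.norm(patches[None, :, :] - prototypes[:, None, :], axis=-1)   # (P, H*W)
    d_min = dists.min(axis=1)                     # closest patch for each prototype
    return np.log((d_min + 1.0) / (d_min + eps))  # higher = stronger match

local_act = prototype_activations(local_prototypes, feature_map)
global_act = prototype_activations(global_prototypes, feature_map)

# A systematically large gap between local and global activations on a client's
# own data is the kind of local/global prototype difference the abstract
# describes as a signal of bias; the threshold here is purely illustrative.
gap = np.abs(local_act - global_act)
print("per-prototype activation gap:", np.round(gap, 2))
print("prototypes flagged for inspection:", np.nonzero(gap > 1.0)[0])

In practice, each client would run such a comparison on its own images only, so the bias can be localized and visualized without any image leaving the client.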