In this paper, we propose new methodologies for distributed tensor Principal Component Analysis (PCA) in heterogeneous data environments. These environments often arise when tensor datasets are stored across multiple machines, each with unique statistical properties, presenting significant challenges for aggregation and analysis. We extend classical Tucker decomposition to a distributed setting, providing sharp statistical convergence guarantees. Our approach handles both homogeneous and heterogeneous tensor data by developing estimators that account for common and individual components, achieving minimax optimal rates in various data settings. Extensive numerical simulations and real data analysis demonstrate the efficacy of our methods, which outperform traditional single-location PCA and pooled estimators, especially in heterogeneous environments. Key applications include healthcare data analysis and recommendation systems, where privacy and communication costs are critical factors.