Exposure to online information is often shaped by recommendation algorithms that can introduce unintended biases as platforms attempt to deliver content that is engaging and relevant to their users. Investigating the fairness of these AI-powered recommendation systems is therefore crucial to understanding technology's effect on societal behavior, particularly in the context of geopolitical discourse. In this study, we examine the behavior of YouTube's recommendation algorithm on narratives from the Indo-Pacific region to identify potential biases and characterize the algorithm's decision-making behavior. For our analysis, we collected recommended videos across five recommendation depths, starting from seed videos related to our narratives. We used drift analysis to examine how video characteristics such as emotion, sentiment, and content evolve at each depth, and we performed network analysis on the recommendations at each depth to identify the highly influential videos driving them. Our analysis reveals narrative-dependent drift in YouTube's recommendations away from the content and emotion of the original seed videos. We also observe that highly influential videos at each depth act as attractors that steer content across recommendations; these attractors can become topically unrelated to the original content. These contributions add a layer of understanding to the "black-box" nature of the YouTube recommendation algorithm and provide a quantifiable approach for assessing fairness in information systems capable of influencing vulnerable populations.
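To make the two analyses concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' actual pipeline: it treats each recommendation depth as a directed graph of video-to-video recommendation edges, uses PageRank as one plausible way to flag candidate "attractor" videos, and measures drift as the cosine distance between the mean feature vector of each depth and that of the seed videos. All identifiers (`edges`, `features`, the toy video IDs) are illustrative assumptions; the abstract does not specify the exact influence or drift measures used.

```python
import numpy as np
import networkx as nx

# Hypothetical input: edges[d] holds (source_video_id, recommended_video_id)
# pairs collected at recommendation depth d, and features[video_id] is a
# feature vector (e.g., an emotion/sentiment/content embedding) per video.
edges = {
    1: [("seed_a", "v1"), ("seed_a", "v2"), ("seed_b", "v2")],
    2: [("v1", "v3"), ("v2", "v3"), ("v2", "v4")],
}
rng = np.random.default_rng(0)
features = {v: rng.normal(size=8)
            for v in {"seed_a", "seed_b", "v1", "v2", "v3", "v4"}}

def attractors_at_depth(depth_edges, top_k=3):
    """Rank videos at one depth by PageRank; high scorers are candidate
    'attractors' that concentrate recommendation traffic."""
    g = nx.DiGraph(depth_edges)
    scores = nx.pagerank(g)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def depth_centroid(depth_edges):
    """Mean feature vector over the videos recommended at this depth."""
    recommended = {dst for _, dst in depth_edges}
    return np.mean([features[v] for v in recommended], axis=0)

def cosine_drift(a, b):
    """Cosine distance between two centroids (0 = no drift)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

seed_centroid = np.mean([features["seed_a"], features["seed_b"]], axis=0)
for d, depth_edges in edges.items():
    drift = cosine_drift(seed_centroid, depth_centroid(depth_edges))
    print(f"depth {d}: drift from seeds = {drift:.3f}, "
          f"attractors = {attractors_at_depth(depth_edges)}")
```

In practice, drift could equally be computed per characteristic (emotion, sentiment, topic) and influence via in-degree, betweenness, or another centrality; the choice here is only one reasonable instantiation of the approach described above.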