Roughly 6,800 natural disasters occur worldwide each year, and this alarming number continues to grow due to the effects of climate change. Effective methods for improving natural disaster response include change detection, map alignment, and vision-aided navigation, all of which enable the time-efficient delivery of life-saving aid. However, existing software for these tasks performs well only on nadir images, captured with the camera pointing straight down, perpendicular to the ground. Because such software fails to generalize to oblique images, it becomes necessary to compute an image's geocentric pose: its spatial orientation with respect to gravity. This deep learning investigation presents three convolutional models that predict geocentric pose from 5,923 nadir and oblique red-green-blue (RGB) satellite images of cities worldwide. The first model is an autoencoder that condenses the 256×256×3 images into 32×32×16 latent-space representations, demonstrating that useful features can be learned from the data. The second model is a U-Net fully convolutional network with skip connections that predicts each image's pixel-level elevation mask; it achieves a median absolute deviation of 0.335 meters and an R² of 0.865 on test data. The elevation masks are then concatenated with the RGB images to form four-channel inputs for the third model, which predicts each image's rotation angle and scale, the two components of its geocentric pose. This deep convolutional neural network achieves an R² of 0.943 on test data, significantly outperforming previously published models. The high-accuracy software built in this study contributes to mapping and navigation procedures that accelerate disaster relief and save human lives.
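The abstract specifies only the autoencoder's input and latent shapes (256×256×3 compressed to 32×32×16). The following is a minimal PyTorch sketch of a model with those shapes; the layer counts, channel widths, kernel sizes, and activations are illustrative assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Compresses 256x256x3 RGB tiles into a 32x32x16 latent representation."""
    def __init__(self):
        super().__init__()
        # Encoder: three stride-2 convolutions halve the spatial size 256 -> 128 -> 64 -> 32
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, stride=2, padding=1),  # latent: (16, 32, 32)
        )
        # Decoder mirrors the encoder with transposed convolutions: 32 -> 64 -> 128 -> 256
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent code: (N, 16, 32, 32)
        return self.decoder(z)   # reconstruction: (N, 3, 256, 256)

# Reconstruction loss on a dummy batch
model = ConvAutoencoder()
x = torch.rand(2, 3, 256, 256)
loss = nn.functional.mse_loss(model(x), x)
```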
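Similarly, a compact sketch of a U-Net with skip connections for pixel-level elevation regression is shown below. Only the skip-connected encoder-decoder structure and the single-channel elevation-mask output come from the abstract; the two-level depth and channel widths are hypothetical.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the standard U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class UNetElevation(nn.Module):
    """Predicts a per-pixel elevation mask (meters) from an RGB tile."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 from the skip connection
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 from the skip connection
        self.head = nn.Conv2d(32, 1, 1)   # single-channel elevation output

    def forward(self, x):
        e1 = self.enc1(x)                  # (N, 32, 256, 256)
        e2 = self.enc2(self.pool(e1))      # (N, 64, 128, 128)
        b = self.bottleneck(self.pool(e2)) # (N, 128, 64, 64)
        # Skip connections concatenate encoder features with upsampled decoder features
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # elevation mask: (N, 1, 256, 256)
```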
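Finally, a sketch of the third stage: the abstract states that the predicted elevation mask is concatenated with the RGB channels to form a four-channel input from which rotation angle and scale are regressed. The plain strided CNN backbone, global average pooling, and two-unit linear head below are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class GeocentricPoseNet(nn.Module):
    """Regresses rotation angle and scale from a 4-channel (RGB + elevation) input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),                                # global average pooling
        )
        self.regressor = nn.Linear(128, 2)  # outputs: [rotation angle, scale]

    def forward(self, rgb, elevation):
        # Concatenate the predicted elevation mask with the RGB channels
        x = torch.cat([rgb, elevation], dim=1)  # (N, 4, 256, 256)
        x = self.features(x).flatten(1)         # (N, 128)
        return self.regressor(x)

# Dummy forward pass chaining the pipeline's two inputs
rgb = torch.rand(2, 3, 256, 256)
elev = torch.rand(2, 1, 256, 256)   # e.g., the U-Net's predicted mask
pose = GeocentricPoseNet()(rgb, elev)  # (N, 2)
```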