Video-based deep learning (DL) algorithms often rely on segmentation models to detect clinically important features in transthoracic echocardiograms (TTEs). While effective, these algorithms can be too data-hungry for clinical practice and may be sensitive to common data quality issues. To address these concerns, we present a data-efficient DL algorithm, Scaled Gumbel Softmax (SGS) EchoNet, that is robust to these data quality issues and, importantly, requires no ventricular segmentation model. In lieu of a segmentation model, we decompose and transform the output of an R(2+1)D convolutional encoder to estimate frame-level weights associated with the cardiac cycle, which are then used to obtain a video representation suitable for downstream estimation. We find that our transformation obviates the need for a segmentation model while improving the predictive model's ability to handle noisy inputs. Our model achieves performance comparable to the state of the art while demonstrating robustness to noise on an independent (external) validation set.
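The core mechanism described above (scoring each frame, weighting frames with a scaled Gumbel softmax, and pooling into a single video representation) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the per-frame features would come from an R(2+1)D encoder in practice, but here they are random tensors, and the linear scoring head, temperature value, and tensor shapes are all hypothetical choices for the sketch.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical shapes: batch of 2 videos, 16 frames, 512-dim per-frame features.
# In the paper these features come from an R(2+1)D convolutional encoder;
# random tensors stand in here to keep the sketch self-contained.
B, T, D = 2, 16, 512
frame_feats = torch.randn(B, T, D)

# A small scoring head maps each frame's features to a scalar logit
# (a hypothetical stand-in for the paper's decomposition/transformation).
scorer = torch.nn.Linear(D, 1)
logits = scorer(frame_feats).squeeze(-1)  # shape (B, T)

# Scaled Gumbel softmax over the frame axis yields stochastic, differentiable
# frame weights; the temperature tau controls how peaked the weights are.
tau = 0.5
weights = F.gumbel_softmax(logits, tau=tau, hard=False, dim=-1)  # (B, T)

# Weighted pooling of the frame features produces one video-level
# representation per clip, usable by a downstream estimation head.
video_repr = torch.einsum("bt,btd->bd", weights, frame_feats)  # (B, D)

print(weights.sum(dim=-1))  # each row of weights sums to 1
print(video_repr.shape)
```

Because the Gumbel softmax is differentiable, the frame weights can be learned end-to-end with the encoder and the estimation head, which is what lets this kind of weighting replace an explicit segmentation stage.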