In the United States, Palmer amaranth is a troublesome weed that competes with major crops such as soybean and can cause significant yield loss if not managed properly. Integrated weed management practices using eco-friendly, artificial intelligence-based weeding robots and spot sprayers have been gaining popularity in agriculture. These robotic systems and weed recognition approaches rely on a weed image database and a set of machine learning algorithms. This study evaluates the performance of classification and object detection algorithms on unmanned aerial system (UAS)-based red, green, and blue (RGB) imagery acquired at different growth stages of soybean and Palmer amaranth. Vision Transformer and EfficientNetB0 achieved test accuracies of 97.69% and 93.26%, respectively, but Vision Transformer was 2.5 times slower than EfficientNetB0 at inference. Based on the trade-off between speed and accuracy, YOLOv6s, with a mean average precision of 82.6%, was found to be a suitable object detection model for real-time deployment. Additionally, we present a self-supervised contrastive learning approach to labeling the Palmer amaranth and soybean classes that achieves 98.5% test accuracy, demonstrating the potential of cost-efficient data acquisition and labeling to advance precision agriculture research.