Choosing the right Automated Machine Learning (AutoML) tool is crucial for researchers of varying expertise to achieve optimal performance in diverse classification tasks. However, the abundance of AutoML frameworks with differing features makes selection challenging. This study addresses this challenge through a practical evaluation informed by a theoretical and bibliographical review and a feature-based comparison of twelve AutoML frameworks. The evaluation, conducted under time constraints, assessed accuracy and training efficiency across binary, multiclass, and multilabel classification tasks (considering both native and label powerset representations for the latter) on fifteen datasets. We acknowledge limitations, including the dataset scope and the use of default parameters, which may not capture the full potential of some frameworks. Our findings reveal no single ``perfect'' tool, as frameworks tend to prioritize either accuracy or speed. For time-sensitive binary and multiclass tasks, \claas, \autogluon, and \autokeras showed promise. In multilabel scenarios, \autosklearn offered higher accuracy, while \autokeras excelled in training speed. These results highlight the trade-off between accuracy and speed, emphasizing the importance of weighing both factors when selecting a tool for binary, multiclass, and multilabel classification problems. The code, experiment reproduction instructions, and outcomes are publicly available on GitHub.