Webinar 8: Learning Evaluation
Learning evaluation is crucial for assessing the performance and reliability of machine learning models. Trainees will learn why data must be separated into train, validation, and test sets to ensure unbiased evaluation. Performance measures for classifiers and regressors, including error, precision, recall, and the confusion matrix, will be explained as ways to quantify model effectiveness, and techniques such as cross-validation will be introduced for more robust performance estimates. Trainees will also learn to tune model parameters using validation sets and gain insight into model behavior, common pitfalls, and the implications of evaluation decisions. This session serves as a primer for navigating the intricacies of model evaluation, fostering informed decision-making in machine learning applications.
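The short Python sketch below illustrates the workflow described above, assuming scikit-learn and its built-in breast cancer dataset purely as an example; the dataset, model, and hyperparameter grid are illustrative choices, not part of the webinar materials. It splits the data into train, validation, and test sets, tunes a hyperparameter on the validation set, reports error, precision, recall, and the confusion matrix on the held-out test set, and uses cross-validation for a more robust estimate.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set for the final, unbiased performance estimate.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Split the remainder into train and validation sets for tuning.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0, stratify=y_trainval)

# Tune a hyperparameter (regularization strength C) on the validation set.
best_C, best_val_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=5000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)
    if val_acc > best_val_acc:
        best_C, best_val_acc = C, val_acc

# Retrain on train+validation with the chosen C, then evaluate once on the test set.
final_model = LogisticRegression(C=best_C, max_iter=5000).fit(X_trainval, y_trainval)
y_pred = final_model.predict(X_test)
print("Test error:", 1 - final_model.score(X_test, y_test))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:   ", recall_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))

# Cross-validation gives a more robust estimate than a single split.
cv_scores = cross_val_score(
    LogisticRegression(C=best_C, max_iter=5000), X_trainval, y_trainval, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))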
Hajar Alhijailan is an Assistant Professor of Computer Science at King Saud University in Riyadh, Saudi Arabia. She received her Ph.D. in Computer Science, specializing in Artificial Intelligence, from the University of Liverpool. She has taught AI courses to students in the Computer Science Department at King Saud University.