📅 2025-06-19 — Session: Developed and Evaluated Rubric Criteria for AI Challenges
🕒 00:00–23:55
🏷️ Labels: Rubric, Evaluation, AI Models, Machine Learning, Bias, Overfitting
📂 Project: Teaching
⭐ Priority: MEDIUM
Session Goal
The session focused on developing and evaluating rubric criteria for AI challenges, particularly in educational and machine learning contexts.
Key Activities
- Developed MECE (mutually exclusive, collectively exhaustive) rubric criteria for evaluating a physics-heavy AI challenge prompt.
- Outlined guidelines for citing sources in scientific prompts.
- Graded AI models using rubrics, aligning quantitative scores with qualitative assessments.
- Analyzed bias and variance in regression trees, exploring overfitting and underfitting (see the decomposition and sketch after this list).
- Examined parameters controlling overfitting in decision trees, such as maximum depth and minimum leaf size.
- Proposed a correction scheme for evaluating prediction models.
- Evaluated student performance on bias and variance concepts in machine learning.
- Clarified the distinction between bias and overfitting in machine learning.
- Explored inductive bias in machine learning algorithms.
- Explained the use of Gini impurity and entropy as split criteria in classification trees (illustrated in the second sketch below).
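As a reference point for the bias-variance discussion above, the standard decomposition of expected squared prediction error (a textbook identity, not something specific to this session) is:

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{noise}}
$$

Shallow regression trees push error into the bias term (underfitting); fully grown trees push it into the variance term (overfitting).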
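A minimal sketch of how a single parameter trades bias against variance in a regression tree, assuming scikit-learn and a synthetic dataset (the library choice and data are illustrative assumptions, not part of the session record):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic 1-D regression problem: a noisy sine wave (illustrative data).
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=X.shape[0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Shallow trees underfit (high bias); unrestricted trees overfit (high variance).
for depth in (1, 3, 10, None):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, tree.predict(X_train))
    test_mse = mean_squared_error(y_test, tree.predict(X_test))
    print(f"max_depth={depth}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")
```

Besides `max_depth`, scikit-learn exposes other knobs in the same spirit, including `min_samples_leaf`, `min_samples_split`, and cost-complexity pruning via `ccp_alpha`.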
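For the Gini-versus-entropy comparison, a short illustrative sketch (the formulas are standard; the function names are my own):

```python
import numpy as np

def gini(p):
    """Gini impurity: 1 - sum(p_k^2) over class proportions p."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    """Shannon entropy: -sum(p_k * log2(p_k)), skipping zero proportions."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Both measures peak at a uniform split and vanish for a pure node.
for dist in ([0.5, 0.5], [0.9, 0.1], [1.0, 0.0]):
    print(dist, f"gini={gini(dist):.3f}", f"entropy={entropy(dist):.3f}")
```

In practice the two criteria usually select very similar splits; entropy penalizes mixed nodes slightly more sharply, while Gini is marginally cheaper to compute.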
Achievements
- Created comprehensive rubric criteria and evaluation schemes for educational purposes.
- Enhanced understanding of bias, variance, and overfitting in machine learning models.
Pending Tasks
- Further exploration of inductive biases in different machine learning algorithms.
- Implementation of proposed correction schemes in educational settings.