By Jovina Ang
SMU Office of Research and Technology Transfer – “Correct. Good job.” “Incorrect. Try again.”
This is the typical results-only feedback provided by most computer programming grading tools, including commercial tools like Gradescope.
“Existing grading tools, including many research prototypes, are insufficient to meet the needs of instructors teaching computer programming,” Assistant Professor of Computer Science (Education) Don Ta told the Office of Research and Technology Transfer.
“While some tools are good for summative assessment, they are unable to provide a holistic assessment of the cognitive process and the approach students take when designing algorithms or writing code to solve a problem,” he continued.
“So to provide constructive feedback, computer and information systems (CIS) instructors like myself have to review hundreds, if not thousands, of lines of code. This is a time-consuming process as there may be 400-500 students enrolled in the introductory programming course at SMU,” he added.
“Based on my years of experience teaching computer science, I am aware that students learn best when they receive timely, frequent, formative, and personalized feedback. The more feedback students receive, including suggestions of relevant code samples and additional programming tasks targeting their previous mistakes, the faster they will improve their skills in reading code, designing algorithms, and writing code, which are among the core competencies of any CIS student,” he continued.
To develop a tool that provides instant and constructive feedback to students, Professor Ta and his three collaborators, SMU Associate Professor of Computer Science Shar Lwin Khin, SMU Professor of Information Systems (Education) Venky Shankararaman, and Associate Professor Hui Siu Cheng of the School of Computer Science and Engineering at Nanyang Technological University, recently received a grant from the Ministry of Education’s Tertiary Education Research Fund (TRF). The project will produce a web-based tool named AP-Coach, short for Automated Programming Coach.
This research builds on Professor Ta’s previous work on the accuracy and efficiency of automated grading of code and of short natural language texts.
AP-Coach will be piloted with first-year SMU undergraduates enrolled in the Introduction to Python Programming course, starting in January 2023. If it proves useful for learning, it will be rolled out to the rest of the students in subsequent semesters.
The primary goals of AP-Coach are to automate large-scale code review while enhancing learning by providing instant, constructive, and personalized feedback: tips on next steps, relevant code samples, and appropriate additional programming tasks that further students’ learning in reading code, designing algorithms, and writing code.
AP-Coach will review the code or pseudocode submitted by students and generate relevant, personalized feedback using similarity matching algorithms that draw on recent advances in AI (code embeddings and natural language processing models) and on software engineering techniques for analysing the abstract syntax structure of code.
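The article does not publish AP-Coach’s actual algorithm, but the idea of matching submissions by abstract syntax structure rather than raw text can be illustrated with a minimal Python sketch (using the standard `ast` and `difflib` modules; the function names here are illustrative, not AP-Coach’s API):

```python
import ast
import difflib


def ast_signature(source: str) -> list:
    """Flatten a program's abstract syntax tree into a sequence of
    node-type names, ignoring identifiers and literal values so that
    structurally similar solutions compare as similar."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]


def structural_similarity(code_a: str, code_b: str) -> float:
    """Return a similarity score in [0, 1] between two code snippets,
    computed over their AST node-type sequences."""
    return difflib.SequenceMatcher(
        None, ast_signature(code_a), ast_signature(code_b)
    ).ratio()


# Two loops that differ only in naming still score as highly similar,
# which is what a text diff would miss:
student = "total = 0\nfor i in range(10):\n    total += i"
reference = "s = 0\nfor k in range(m):\n    s += k"
score = structural_similarity(student, reference)
```

A production system could then map the closest-matching reference solution (or known buggy pattern) to a stored feedback message, which is one plausible way to turn similarity scores into the personalized hints the article describes.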
To provide more hands-on tasks, AP-Coach will be designed to automatically generate varied programming exercises and pseudocode using AI techniques such as OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), an auto-regressive language model capable of producing human-like text and code.
The tool is also designed to track student progress. Each student will receive a summary of the mistakes made throughout the 13-week course, and students can use AP-Coach to review past programming exercises.
To verify the effectiveness of AP-Coach, students’ skills in reading code, designing algorithms, and writing code will be monitored over several consecutive semesters.
There are three important implications of this research.
First, immediate and relevant feedback has been found to be highly motivating for students. It also enables independent learning.
Second, efficient and automatic coaching not only keeps the code review process moving, but also significantly reduces the workload for instructors, giving them more time to help and guide weaker students.
Third, the AP-Coach can be an important step towards realizing AI-based computer science education.
AI-based education is an exciting discipline in learning and teaching, and Professor Ta is eager to find out how the tool can benefit students.
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of press releases posted on EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.