AI Activity and Discussion

Video and Readings

During class, we will discuss the research project: How Beginning Programmers and Code LLMs (Mis)read Each Other. This work is highly relevant to our class because it investigates how beginning programmers prompt LLMs to generate code and the challenges that arise in that process.

Before class, watch the conference presentation video of the paper. If you have time, feel free to read through the full paper.

Abstract for the paper: Generative AI models, specifically large language models (LLMs), have made strides towards the long-standing goal of text-to-code generation. This progress has invited numerous studies of user interaction. However, less is known about the struggles and strategies of non-experts, for whom each step of the text-to-code problem presents challenges: describing their intent in natural language, evaluating the correctness of generated code, and editing prompts when the generated code is incorrect. This paper presents a large-scale controlled study of how 120 beginning coders across three academic institutions approach writing and editing prompts. A novel experimental design allows us to target specific steps in the text-to-code process and reveals that beginners struggle with writing and editing prompts, even for problems at their skill level and when correctness is automatically determined. Our mixed-methods evaluation provides insight into student processes and perceptions with key implications for non-expert Code LLM use within and outside of education.

Activity

In class, we will similarly explore prompting AI to develop code in Python for a few problems. Afterward, we will reflect on the process, benefits, challenges, and ethics. Click on the project description to read the full instructions for this activity and download the starter code.
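To give a sense of the workflow before class, here is a hedged sketch of one step in the text-to-code loop: a hypothetical beginner-level problem, a function standing in for code an LLM might generate from a prompt, and assertions that determine correctness automatically (the problem, function name, and tests are invented for illustration and are not from the actual activity):

```python
# Hypothetical prompt: "Write a function that counts how many numbers
# in a list are even." The function below stands in for code an LLM
# might generate in response.
def count_evens(numbers):
    """Return how many values in `numbers` are even."""
    return sum(1 for n in numbers if n % 2 == 0)

# Correctness can be determined automatically with test cases, much as
# the study graded generated code without human judgment.
assert count_evens([1, 2, 3, 4]) == 2
assert count_evens([]) == 0
assert count_evens([0, -2, 7]) == 2
print("all tests passed")
```

If a test fails, the next step in the loop is editing the prompt and regenerating, which is exactly the step the paper finds beginners struggle with.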

Optional Testing and Debugging Lecture Materials

For course material on testing and debugging, see these additional lecture slides.
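As a minimal sketch of the test-then-debug cycle those materials cover (the function, bug scenario, and names here are hypothetical, chosen only for illustration):

```python
# Testing: assertions state the behavior we expect from a function.
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

assert average([2, 4, 6]) == 4
assert average([5]) == 5

# Debugging: when an assertion fails, printing intermediate values
# helps locate which step of the computation went wrong.
print(sum([2, 4, 6]), len([2, 4, 6]))  # inspect the pieces separately
```

The same pattern applies to AI-generated code: tests tell you whether the code is wrong, and inspecting intermediate values tells you where.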