
Students' Perception of ChatGPT in Software Engineering: Lessons Learned from Five Courses

Di Penta, M.; Zampetti, F.
2025-01-01

Abstract

A few years after their release, tools based on Large Language Models (LLMs) are becoming an essential component of software education, much as calculators are in math courses. When learning software engineering (SE), the challenge is the extent to which LLMs are suitable and easy to use for different software development tasks. In this paper, we report the findings and lessons learned from using LLM-based tools, ChatGPT in particular, in five SE courses from four universities. After instructing students on the potential of LLMs in SE and on prompting strategies, we asked participants to complete a survey and take part in semi-structured interviews. The collected results provide (i) indications of the usefulness of the LLM for different tasks, (ii) challenges in prompting the LLM, i.e., in interacting with it, (iii) challenges in adapting the generated artifacts to one's own needs, and (iv) wishes for valuable features students would like to see in LLM-based tools. Although results vary among courses, also because of students' seniority and course goals, the perceived usefulness is greater for low-level phases (e.g., coding or debugging/fault localization) than for analysis and design phases. Interaction and code-adaptation challenges vary among tasks and are mostly related to the need for task-specific prompts, as well as for better specification of the development context.
Empirical Study
Large Language Models for Software Engineering
Software Engineering Education
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12070/73676
Citations
  • Scopus: 0