AI education

Schools are on more solid footing with generative AI after being caught off guard by the launch of OpenAI’s ChatGPT chatbot more than a year ago, but they are still struggling with issues ranging from detecting student use to teachers distrusting the work handed in to them.

So says a new study by the Center for Democracy and Technology (CDT), which recently released the results of a survey of 460 sixth to 12th-grade teachers that follows a similar study the center conducted last summer, when the rapidly emerging technology was still much newer.

The CDT found that schools are being more proactive in addressing the use of generative AI in the classroom, offering teachers more training in using the technology and putting more policies in place for how it should be used.

However, a range of issues still needs to be addressed, from how teachers should respond if they believe students are using generative AI in unapproved ways, to how such use is detected, to how students are disciplined and the inherent biases in that discipline.

“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom – making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” Maddy Dwyer and Elizabeth Laird of the CDT’s Equity in Civic Technology unit wrote in the 22-page report. “Schools should push beyond general permission and banning policies, and invest in educating teachers on the risks of generative AI, how to manage disciplinary action, and how to teach and promote responsible student use.”

Putting Policies in Place

According to the report, the survey in August 2023 found that schools were still “largely bewildered” and trying to catch up with guardrails and policies for generative AI. Now, 85% of teachers say their schools have policies that either permit – with some conditions or limitations – or ban the use of ChatGPT and other generative AI tools for schoolwork, with 71% saying they are the first such policies their schools have put in place.

In all, 60% of teachers said their schools have policies that allow students some use of the technology, double the figure from last school year.

In addition, 80% said they’ve gotten formal training about use policies and procedures for the technology, a 37% jump from last year.

Schools also are looking for teacher input regarding generative AI in classrooms, with 72% of teachers saying they’ve been asked for their thoughts. That’s more than the input sought for other technologies used by schools, such as content blocking or filtering and how certain types of content or websites should be handled, where a little more than half of teachers say they are asked.

With all this, the 71% of teachers who say their schools and districts do a good job responding to generative AI and other technologies eclipses the 51% who said the same thing last year.

More Teacher Guidance Needed

Despite the progress that’s been made, there is a lot more work to do, according to Dwyer and Laird. This can be seen by the lack of guidance for helping teachers promote the responsible use by students of generative AI while protecting student privacy, safety and civil rights, “leaving teachers to navigate practical management of generative AI use in the classroom on their own,” they wrote.

Only 28% of teachers said they’ve received guidance on how to respond if they believe a student used generative AI in a way that isn’t permitted, such as plagiarism, while 37% have gotten guidance on what responsible student use looks like. In addition, 37% said they’ve received guidance on how to detect student use of generative AI in assignments.

Another problem is that teachers – 68% of them – are relying heavily on school-sanctioned AI detection tools, which Dwyer and Laird said have been shown to be inconsistently effective at telling the difference between text created by generative AI and text written by humans. Only 25% of teachers surveyed said the tools were effective. Despite this, 78% of teachers said their schools support such technologies.

This, combined with the lack of training on how to respond when students use generative AI in prohibited ways, is contributing to a rise in student discipline, according to the report.

“As schools are trending towards generally permitting student use of generative AI, it is hard to pinpoint a singular cause of this increase in discipline, but some of the dimensions at play are the low levels of teacher training on how to manage student use and the increase in the use of school-sanctioned detection tools,” Dwyer and Laird wrote.

Discipline Still a Challenge

The issue of discipline is layered. Because both teachers and students have now been using generative AI for a longer period, there are more opportunities for violations to arise. At the same time, given the more permissive policies schools have adopted, disciplinary incidents might be expected to fall. Instead, discipline of students for their generative AI use increased this year, with 64% of teachers saying action was taken against students who used or were accused of using the technology in ways that violated policies, up from 48% last year.

Accusations alone can result in disciplinary action, and confrontations carry their own consequences: 40% of teachers said students got in trouble for how they reacted when confronted about allegedly misusing the technology.

In addition, “these consequences also present greater risks to certain groups of students,” they wrote. “Nearly half of teachers agree that students that use school-provided devices are more likely to get in trouble or face negative consequences for using generative AI. And previous CDT research has shown that Black, Hispanic, rural and low-income students rely more heavily on school-issued devices.”

Students with disabilities also are affected: they report higher levels of generative AI use, and special education teachers are more likely to use generative AI detection tools, which the authors wrote “creates a potentially ripe environment for increased disciplinary action.”

The presence of generative AI also has made teachers more distrustful of students’ work, particularly in schools where the technology is banned.