
Phone breaks in class: The 60-second solution boosting grades and focus


The war on classroom phone use has a new strategy: surrender, but only for a minute. A recent study reveals that allowing students brief, scheduled periods to use their phones during lectures may actually reduce overall usage and even lead to higher grades.

In an era where smartphones are ubiquitous, educators face a growing challenge: how to keep students engaged in the classroom when digital distractions are just a tap away. The recent study published in Frontiers in Education offers a potential solution that doesn’t involve confiscating phones or imposing strict bans. Instead, it suggests giving students periodic “technology breaks” to check their devices during class.

The most surprising finding? Shorter breaks of just one minute proved most effective in reducing overall phone use and improving test performance.

Led by Professor Ryan Redner at Southern Illinois University, researchers implemented this strategy in an undergraduate critical thinking course over 22 class periods. They compared the effects of technology breaks to “question breaks,” where students could ask the professor questions about the lecture material instead of using their phones.

“We show that technology breaks may help reduce cell phone use in the college classroom,” Redner explains in a statement. “To our knowledge, this is the first evaluation of technology breaks in a college classroom.”

During technology break sessions, the professor would announce at the beginning of class that students would have a brief period – either 1, 2, or 4 minutes – to use their phones freely during the lecture. The breaks were scheduled about 15 minutes into the 45-minute lecture. In contrast, during question break sessions, students were told they would have a similar break to ask questions, but phone use was discouraged.


The results were promising, albeit with some caveats. On average, when technology breaks were implemented, students used their phones less frequently throughout the class compared to days with question breaks. The rate of cell phone use on technology break days was 0.35 times per minute, versus 0.53 times per minute on question break days, a reduction of roughly one-third.

Interestingly, the one-minute technology breaks seemed to be the most effective. Not only did they result in the lowest levels of cell phone use, but they also correlated with significantly higher quiz scores compared to other break durations or question breaks.

“Our hope is that it means students were less distracted during lecture, which leads to better performance,” Redner says.

So, why are shorter breaks more effective?

“One possibility is that one minute is enough to read and send a smaller number of messages. If they have more time to send many messages, they may be more likely to receive messages and respond again during class,” Redner hypothesizes.

However, the study wasn’t without its quirks. The researchers noticed high variability in phone use among sessions, making it difficult to draw definite conclusions. They also observed that while phone use decreased in some experimental settings, it was not eliminated entirely.

The implications of this study are fascinating for educators grappling with the digital age. Instead of fighting against the tide of technology, this approach suggests working with it. By acknowledging students’ desire to stay connected and providing structured times to do so, educators might be able to reclaim more focused attention during lecture times.

Paper Summary

Methodology

The researchers used a method called a multi-element design to compare different durations of technology breaks and question breaks. They observed students across 22 class periods, with an average attendance of 21 students. During each class, observers counted how many students were using cell phones every 10 seconds, moving row by row. This data was then converted into a rate of cell phone use per minute. The researchers also collected quiz scores to see if the different break types affected academic performance.
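To make that scoring step concrete, here is a minimal Python sketch of how 10-second observation counts could be converted into a per-minute rate. The function and the sample data are hypothetical illustrations, since the paper does not publish its analysis code.

```python
# Minimal sketch of the observation scoring described above. The paper
# does not publish analysis code; this function and the sample data are
# hypothetical illustrations of converting 10-second momentary counts
# into a rate of phone use per minute.

def phone_use_rate_per_minute(interval_counts, interval_seconds=10):
    """Total observed instances divided by total minutes observed."""
    total_instances = sum(interval_counts)
    total_minutes = len(interval_counts) * interval_seconds / 60
    return total_instances / total_minutes

# Example: counts logged every 10 seconds over two minutes of lecture.
counts = [1, 0, 2, 1, 0, 1,   # minute 1
          0, 1, 1, 0, 2, 0]   # minute 2
print(f"{phone_use_rate_per_minute(counts):.2f} uses per minute")  # 4.50
```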

Key Results

The main finding was that technology breaks led to less overall cell phone use during class compared to question breaks. The 1-minute technology breaks were particularly effective, resulting in both the lowest cell phone use and the highest quiz scores. Higher average test scores (over 80%) were consistently observed during sessions with 1-minute breaks. However, there was high variability in the data, making it difficult to draw firm conclusions.

Study Limitations

There was high variability in the data, with some unexpected spikes and dips in cell phone use that couldn’t be easily explained. The researchers also didn’t distinguish between academic and non-academic phone use. Additionally, the study didn’t account for other devices like laptops or smartwatches. Lastly, the sample size was relatively small and limited to one course at one university, so the results might not apply to all college classrooms.

Discussion & Takeaways

The researchers suggest that technology breaks, particularly short ones, could be a promising, non-punitive way to reduce cell phone distractions in class. They acknowledge that more research is needed to confirm these effects and understand why they occur.

“We are trying to find ways to reduce cell phone use and doing so without penalties. We hope our findings inspire researchers and teachers to try approaches to reducing cell phone use that are reinforcement-based,” concludes Redner.

The study raises interesting questions about how to balance technology use with focused learning in modern classrooms.

It’s been nearly two years since generative artificial intelligence was made widely available to the public. Some models showed great promise by passing academic and professional exams.

For instance, GPT-4 scored higher than 90% of the United States bar exam test takers. These successes led to concerns that AI systems might also breeze through university-level assessments. However, my recent study paints a different picture, showing AI isn't quite the academic powerhouse some might think it is.

My study

To explore generative AI’s academic abilities, I looked at how it performed on an undergraduate criminal law final exam at the University of Wollongong – one of the core subjects students must pass in their degree. A total of 225 students sat the exam.

The exam ran for three hours and had two sections. The first asked students to evaluate a case study about criminal offenses – and the likelihood of a successful prosecution. The second included a short essay and a set of short-answer questions.

The test questions evaluated a mix of skills, including legal knowledge, critical thinking, and the ability to construct persuasive arguments.

Students were not allowed to use AI for their responses, and they completed the assessment in a supervised environment.

I used different AI models to create ten distinct answers to the exam questions.

Five papers were generated by simply pasting the exam question into the AI tool, with no additional prompting or supporting material. For the other five, I gave detailed prompts and relevant legal content to see if that would improve the output.
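The study does not disclose the exact tools, models, or prompt wording used. As a purely hypothetical sketch of the contrast between the two conditions, it might look something like this with the OpenAI Python SDK (assumed here only for concreteness; the placeholders stand in for exam material that is not public):

```python
# Hypothetical illustration of the two conditions described above, using
# the OpenAI Python SDK purely for concreteness. The study does not
# disclose its exact tools, models, or prompt wording; the placeholders
# ("...") stand in for exam material that is not public.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

exam_question = "..."  # the criminal law exam question (not public)
legal_context = "..."  # relevant legal content supplied in condition 2

def no_prompting(question):
    """Condition 1: paste the exam question verbatim, nothing else."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def detailed_prompting(question, context):
    """Condition 2: add role instructions and relevant legal material."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a law student sitting a supervised exam. "
                        "Apply the supplied legal material and argue both "
                        "sides before reaching a conclusion."},
            {"role": "user", "content": f"{context}\n\n{question}"},
        ],
    )
    return resp.choices[0].message.content
```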

I hand-wrote the AI-generated answers in official exam booklets and used fake student names and numbers. These AI-generated answers were mixed with actual student exam answers and anonymously given to five tutors for grading.

Importantly, when marking, the tutors did not know AI had generated ten of the exam answers.

We handwrote the AI answers so markers would think they were done by students. (Credit: Andrea Piacquadio from Pexels)

How did the AI papers perform?

When the tutors were interviewed after marking, none of them suspected any answers were AI-generated.

This shows both AI’s potential to mimic student responses and how difficult it is for educators to spot such papers.

But on the whole, the AI papers were not impressive.

While the AI did well in the essay-style question, it struggled with complex questions that required in-depth legal analysis.

This suggests that even though AI can mimic human writing style, it lacks the nuanced understanding needed for complex legal reasoning.

The students’ exam average was 66%.

The AI papers that had no prompting, on average, only beat 4.3% of students. Two barely passed (the pass mark is 50%), and three failed.

The papers generated with detailed prompts fared better, beating 39.9% of students on average. Three of these papers were unremarkable, receiving 50%, 51.7%, and 60%, but two did quite well: one scored 73.3% and the other 78%.
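To make the “beat N% of students” comparison concrete, here is a minimal sketch of that calculation; the scores below are invented, as the study’s raw marks are not published.

```python
# Sketch of the "beat N% of students" comparison: the share of student
# scores falling below a given AI paper's score. The scores here are
# invented; the study's raw marks are not published.

def percent_beaten(ai_score, student_scores):
    below = sum(1 for s in student_scores if s < ai_score)
    return 100 * below / len(student_scores)

student_scores = [45, 52, 58, 60, 63, 66, 68, 70, 74, 81]  # toy data
for ai_score in (50, 73.3, 78):
    print(f"An AI paper scoring {ai_score} beats "
          f"{percent_beaten(ai_score, student_scores):.1f}% of students")
```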

Generative AI has gained a reputation for passing difficult exams. (Photo by Ascannio on Shutterstock)

What does this mean?

These findings have important implications for both education and professional standards.

Despite the hype, generative AI is nowhere near replacing humans in intellectually demanding tasks such as this law exam.

My study suggests AI should be viewed as a tool that, when used properly, can enhance human capabilities.

So schools and universities should concentrate on developing students’ skills to collaborate with AI and analyze its outputs critically rather than relying on the tools’ ability to simply spit out answers.

Further, to make collaboration between AI and students possible, we may have to rethink some traditional notions about education and assessment.

For example, we might accept that when a student prompts, verifies, and edits an AI-generated work, that effort is an original contribution and should still be viewed as a valuable part of learning.
