Because artificial intelligence tools can generate writing, images, and video, students may be tempted to take shortcuts on their assignments, which can make it challenging to choose the right assessment strategy.

AI is here to stay, and we owe it to our students to show them how to think and work with artificial intelligence, not avoid it. To do otherwise is to risk their career readiness. Smaller assignments with low grading stakes may be the best place to ask students to use AI and learn about it. At the same time, we may also want strategies for convincing students to skip AI shortcuts on certain assignments.

Blue Book Exams

If you choose to implement in-person blue-book exams to ensure accountability, keep in mind that many of today’s students are unprepared for this type of assessment. This guide offers ideas for how to prepare students.

AI Agents

AI agents are automated programs that can perform tasks in online environments, sometimes even attempting coursework on behalf of students. To protect the integrity of high-stakes assessments, faculty should consider Respondus LockDown Browser, which blocks outside tools, including AI agents, from interfering during exams. Faculty are also encouraged to design assessments that emphasize critical thinking, problem-solving, and authentic application of knowledge, which are less susceptible to AI automation.

Curricular Redesign

We are committed to partnering with departments to redesign curricula that infuse AI fluency into core courses within each major. Email ai@ucf.edu for information.

“60+ ChatGPT Assignments to Use in Your Classroom Today”

The Faculty Center staff assembled this open-source book to provide faculty with ideas for using AI in their assignments. It is free for anyone to use and may be shared with others both inside and outside of UCF.

“50+ AI Hacks for Educators”

The Faculty Center staff wrote this second open-source book that provides ideas on how to use AI in your own life as a faculty member, including teaching, research, and productivity. It is free for anyone to use and may be shared with others both inside and outside of UCF.

“Coach for the Approach”

The Faculty Center staff wrote this third open-source book to help faculty consider small changes they can make to their assignments, their syllabus policies, their grading structure, and their use of in-class time to finally convince students that unethical or unapproved AI use is “cheating themselves.”

AI Detectors and Academic Integrity

AI detectors are unreliable and relatively easy for students to circumvent; consequently, UCF does not currently have a contract with any detector. If you use a third-party detector, keep in mind that both false positives and false negatives occur, and that routine student use of Grammarly can be flagged as “written by AI.”

We recommend that UCF faculty NOT use AI detectors. Because the detectors are unreliable, independent verification is required. If you have other samples of the student’s writing that do not match the suspect submission, that may be reason enough to take action. A hallucinated citation is even stronger evidence.

  • A confession of AI use from the student is, of course, the gold standard for taking action.
  • One approach is to require the student to meet with you; explain why you suspect AI use and ask them how they would account for those facts.
  • Another option is to inform them of your intention to fail the paper, but offer them the chance to perform proctored, in-person writing on a similar prompt to prove they can write at this level.

The Student Conduct and Academic Integrity office will not pursue a case in which the only evidence comes from an AI detector, given the possibility of false positives and false negatives. A hallucinated citation, however, does constitute evidence.

  • The office still encourages you to file a report in any case and can offer suggestions on how to proceed.
  • Existing University-level policies already ban students from representing work they did not create as their own, so a specific AI policy in your syllabus is not strictly necessary. It IS a best practice, however, for transparency and for communicating your expectations. Research indicates that students are unclear about what counts as appropriate AI use, especially since faculty expectations vary widely. From the student perspective, this inconsistency makes a clear AI-use policy not just helpful but essential.

Ultimately, grading decisions rest with the instructor. In borderline cases, where AI use is suspected but not conclusive, a range of responses is available depending on the severity of the situation. These options may include:

  • Assigning a failing grade for the course
  • Assigning a failing grade or zero for the specific assignment
  • Applying a grade penalty (e.g., 10% or 20%)
  • Allowing the student to revise and resubmit the assignment, with or without a grade deduction
  • Issuing a warning without altering the grade
  • Choosing to take no action

When deciding on an appropriate response, be aware that students have the right to appeal academic grades. For that reason, it may be advisable to check with your supervisor about how to proceed in specific cases.

Rather than address unapproved AI use in assignments directly, many instructors opt for assessments that replace essays. Examples include narrated audio presentations (e.g., Prezi, Canva, PowerPoint), video presentations delivered without scripts, infographics, digital posters, e-portfolios, podcasts, and other electronic deliverables, as well as in-class presentations, debates, panels, and other activities that demonstrate learning without text generation.

Another alternative is to provide AI-generated output as part of the assessment and then require students to “do something” with it, such as analyze or evaluate it. As always, accessibility must be considered, with accommodations made available as necessary.

AI in Webcourses

Instructure announced in July 2025 that artificial intelligence tools will eventually be embedded within the Canvas interface. Promised tools include summaries of discussion board posts, alt-text generation, and first-pass grading of student submissions against a rubric. There will also be automated tools to help students study. We do not yet have details or a timeline for integration at UCF.