Ethical Considerations

  1. The intentional use of AI-assisted tools for any element of a work or school product must be disclosed (example: “an outline of topics for this work was generated by Copilot with Enterprise Data Protection, March 7, 2025”). Use of AI that is embedded within larger software (e.g., predictive text sentence completions, auto-correct, spell check) does not need to be disclosed. Disclosing the use of grammar assistance is not centrally mandated for students, but may depend on individual faculty policy.
  2. Most AI tools use queries, uploads, and outputs to help train and refine their models. It is a copyright violation to upload intellectual property or copyrighted materials (which include student assignments, publisher materials, and articles downloaded from the UCF library) to an AI tool that trains on or stores such materials.
  3. AI tools can create situations that warrant extra attention, such as the cost of access to some advanced AI tools or the substantial energy these tools consume.
  4. AI tools can be used, intentionally or accidentally, to deceive by representing something as real when it is not. So-called deepfakes and hallucinations must be acknowledged as such and must not be used fraudulently. All people bear responsibility to vet content and to ensure their use does not generate, transmit, or perpetuate deepfakes that others may mistake as real.

Policies, Laws, and External Considerations

  1. Campus/UCF: The “AI for All” initiative maintains a separate page about internal policies that apply to technologies, including artificial intelligence. There is not, as yet, a Policy specifically about artificial intelligence.
  2. State/Florida: There are, as yet, no statewide guidelines or mandates for AI use or non-use.
  3. Federal: Congress is considering, but has not yet passed, federal laws about AI use or non-use.
    • Federal Aviation Administration (FAA): In 2025, the FAA issued this guidance:
      • “Agency personnel must not use GAI to perform or facilitate illegal or malicious activities; cite it as direct evidence or authority for a determination/decision; use it to contravene or circumvent any requirement, guidance, order, policy, notice, or standard operating procedure that otherwise applies; abuse, harm, interfere with, or disrupt FAA activities or services; knowingly generate content that is meant to harm or imitate individuals or groups; and infringe on the intellectual property of a third party to create substantially similar new work.”
      • Because UCF is an FAA contractor, UCF stakeholders are likely bound by these guidelines.

Student Use of AI

  1. Students should look to individual syllabi for guidance on how, when, and if AI use is permitted. If no policy is stated, students should seek clarification from the instructor rather than make assumptions about acceptable use.
  2. Students are responsible for the accuracy of the work they submit, whether AI assistance was employed or not.
  3. Students who fraudulently represent AI output as their own work are committing plagiarism.
  4. AI should be considered a complement to students’ creativity and intellectual capabilities, not a replacement for their own critical thinking. The most effective approach positions AI as a “thought partner” that challenges, extends, and enhances student thinking rather than substituting for it. Many faculty design writing and critical thinking assignments specifically to develop the PROCESS of intellectual growth, not merely to produce a finished PRODUCT. This developmental journey, in which students learn to research, analyze, synthesize, and articulate ideas, constitutes the heart of education and cannot be skipped without compromising the learning objectives these experiences are designed to cultivate.

See this page for additional advice for students considering AI use at UCF.

Faculty Use of AI for Teaching

  1. Like all UCF stakeholders, UCF faculty must ensure their use of AI for teaching is ethical, responsible, fair, transparent, and disclosed.
  2. Grading performed solely by AI, with no human review, is unethical and should be avoided.
  3. Faculty are advised to avoid using AI detectors. They are unreliable and can produce convincing false positives and false negatives. Further, research indicates that students writing in a non-native language are disproportionately likely to be unjustly accused of AI use.
  4. Because AI is now part of our technology ecosystem, faculty may want to consider assignments that encourage student AI use as part of the learning process.
  5. Recommendation letters produced solely by AI lack the personal insights and specific details that give such letters their credibility.

Faculty Use of AI for Research

  1. Like all UCF stakeholders, UCF faculty must ensure their use of AI for research is ethical, responsible, fair, transparent, and disclosed.
  2. The use of AI tools in any part of the publication process is not universally accepted. Faculty should check with individual publishers regarding their policies on AI use and disclosure.
  3. Similarly, the use of AI tools for grant-writing is not universally accepted, and faculty will need to make case-by-case decisions. This includes the use of AI tools to write, summarize, augment, lengthen, or shorten grant applications.
  4. Some grant and publication organizations prohibit the use of AI when writing critiques, reviews, or acceptance/denial decisions. Faculty should seek clarification about whether AI tools are permitted on a case-by-case basis.
  5. Researchers are responsible for the protection of research data. Any use of AI to analyze or report on research data should be performed carefully, and only via enterprise-level platform access, to ensure data integrity and security and to prevent the data from being shared with or used by those platforms.
  6. For scholarly reviews, researchers should avoid using AI. Human expertise and judgment remain essential in evaluating the research and professional contributions of others.

Staff/Administrator Use of AI

  1. Like all UCF stakeholders, UCF staff and administrators must ensure their use of AI is ethical, responsible, fair, transparent, and disclosed.
  2. Staff have access to sensitive University data and must be extra vigilant not to expose proprietary data, student records, or other sensitive information to unsecured AI. De-identifying data before exposing it to AI is always preferred, even within the protections afforded by Copilot; a minimal de-identification sketch follows this list.
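
As an illustration only, the sketch below shows one simple de-identification pass in Python before text is shared with an AI tool. The regular-expression patterns, placeholder labels, and example record are hypothetical assumptions for demonstration, not a UCF-endorsed or vetted tool; real de-identification of student records or proprietary data should rely on approved institutional processes.

    import re

    # Minimal de-identification sketch (illustrative assumptions only, not a
    # vetted PII scrubber). Replaces obvious identifiers with placeholders
    # before text is shared with an AI tool. Names and other free-text
    # identifiers are NOT caught by these patterns and require stronger tools.
    PATTERNS = {
        "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "[ID]": re.compile(r"\b\d{7,9}\b"),  # e.g., a student-ID-like number
    }

    def deidentify(text: str) -> str:
        """Replace each pattern match with its placeholder label."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    # Hypothetical example:
    # deidentify("Reach J. Doe (ID 12345678) at jdoe@example.edu or 407-555-0123.")
    # -> "Reach J. Doe (ID [ID]) at [EMAIL] or [PHONE]."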

Acknowledgements

The Responsible Use principles above were developed and written without AI assistance. Groups that assisted in fine-tuning these principles include UCF IT, UCF Libraries, UCF Center for Ethics, Center for Distributed Learning, Faculty Center for Teaching & Learning, Faculty Senate IT Committee, Office of Research, and several groups and coordinators exploring AI at departmental or College levels.