We recommend that UCF audiences approach the AI revolution with the recognition that AI is here to stay and will be a necessary skill in the workplace of the future (or even the present!). As such, both faculty and students need to develop AI Fluency skills, which we define as:
- Understanding how AI works – knowing how LLMs operate will help users calibrate how much they should (mis)trust the output. LLMs are word predictors that sometimes hallucinate, and they have other limitations, such as biases inherited from their training data (see the short sketch after this list). Further, many AI tools do not keep prompts and their generated outputs private, which can create privacy issues.
- Deciding when to use AI (and when not to) – AI is just another tool. In some circumstances it will give better results than a web search engine; in others, the reverse is true. There are also moments when it may be unethical to use AI without disclosing the help.
- Valuing AI – dispositional outcomes like this one are often overshadowed by the cognitive outcomes faculty tend to favor, yet true fluency with AI – especially the AI of the future – will require a favorable disposition toward using it. Thus, we owe it to students to recognize AI’s value.
- Applying effective prompt engineering methods – the old phrase “garbage in, garbage out” applies to the output AI creates. Good prompts give better results than lazy or vague ones (a brief contrast appears after this list). Writing effective prompts is likely to remain a tool-specific skill, with different AI interfaces needing to be learned separately.
- Evaluating AI output – even today’s advanced AI tools can hallucinate or make factual mistakes. Employees in the workplace of the future – and thus our students today – need the expertise to judge how trustworthy output is, and they need practice fixing and finalizing that output, as this is surely how workplaces will use AI.
- Adding human value – things that can be automated by AI will, in fact, eventually become fully automated. But there will always be a need for human involvement in elements such as judgment, creativity, and emotional intelligence. Our students need to hone the skill of constantly asking how humans add value to AI output. This includes sensing where (or when) the output could use human input, extrapolation, or interpretation, and then supplying it effectively. Since this is context-dependent, it is less a single skill than a set of tools that enable our alumni to flourish alongside AI.
- Displaying digital adaptability – today’s AI tools will evolve, or may be replaced by completely different ones. Students and faculty need to be prepared for a lifetime of changing AI landscapes. They will need the mental dexterity and agility to accept these changes as inevitable, and the disposition not to fight against these tidal forces. Learning about AI, in other words, should be expected to last a lifetime.
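To make the “word predictor” point concrete, here is a minimal sketch, assuming Python, the Hugging Face transformers library, and the small open GPT-2 model (our choice purely for illustration; any causal language model behaves the same way). The model does not “know” an answer; it assigns a probability to every possible next token, and generation samples from that distribution.

```python
# A minimal sketch of "LLMs are word predictors," using the open GPT-2
# model via Hugging Face transformers (illustrative assumption: any small
# causal language model would show the same behavior).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the very next token after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: p={prob.item():.3f}")
```

Seeing the probability spread makes it easier to explain why fluent, plausible-sounding continuations can still be factually wrong: the model is completing a pattern, not consulting a source.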
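And here is the prompt contrast promised above. This is a hedged illustration, not an official template: the second prompt supplies role, audience, length, and output expectations, which generally yields more usable results.

```python
# "Garbage in, garbage out": a lazy prompt vs. a specific one.
# Neither is an official template; they only illustrate the contrast.
lazy_prompt = "Write about photosynthesis."

effective_prompt = (
    "You are a biology tutor writing for first-year undergraduates. "
    "Explain photosynthesis in about 200 words, define the terms "
    "'light-dependent reactions' and 'Calvin cycle', and end with one "
    "self-check question. Use plain language and no bullet points."
)
```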
We view “AI literacy” as understanding AI, and “AI fluency” as effectively using AI.
Trust Index: What Types of AI Output Can Be Trusted?
LLMs are seemingly good at several things: giving explanations, writing essays, brainstorming, and so on. Yet some of these “skillsets” are safer to trust than others, especially those that preserve human agency rather than delivering an apparently complete AI output. We’ve grouped these “skillsets” into a stoplight system: those in green are safest, while those in yellow, and especially red, should be treated with heightened caution and the awareness that hallucinations and incorrect “facts” may be present.
- GREEN
- Brainstorming (especially lists of ideas)
- Identifying patterns
- Finding needles in haystacks (e.g., locating a specific detail in a long document)
- Creating outlines
- YELLOW
- Providing explanations
- Editing human-created text
- Summarizing longer texts (because critical ideas might be left out of the summary)
- RED
- Writing computer code (see the verification sketch after this list)
- Drafting long texts, such as an essay
- Executing numerical analyses
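Red-category output still has value when a human finalizes it. Here is a minimal sketch of that verification habit, assuming Python; the `average` function below stands in for a hypothetical AI-drafted helper, not output from any particular tool.

```python
# Red-category output should never ship unreviewed: wrap AI-drafted code
# in quick human-written checks before trusting it.

def average(numbers):
    # Hypothetical AI-drafted helper: plausible, but ignores an edge case.
    return sum(numbers) / len(numbers)

# Human-added sanity checks.
assert average([2, 4, 6]) == 4  # happy path: passes

try:
    average([])  # edge case the draft never considered
except ZeroDivisionError:
    print("Caught: the draft crashes on empty input; add a guard before use.")
```

The same habit applies to AI-drafted essays and numerical analyses: spot-check the claims and the arithmetic before treating the output as final.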