Skip to main content
  • Agents – A type of advanced AI system designed to pursue a desired outcome autonomously. Unlike basic models that only respond to prompts, agents can decide intermediate steps and actions toward achieving a goal, often interacting with external tools or environments.
  • AGI (Artificial General Intelligence) – A theoretical form of AI that could perform intellectual tasks at a human level across diverse domains. Unlike today’s specialized AI, AGI would generalize knowledge and skills, adapting flexibly. Passing the Turing Test is sometimes associated with AGI, but it is neither a sufficient nor universally accepted measure.
  • Black Box – A term describing the opaque inner workings of many AI models. While we can observe inputs and outputs, the reasoning process that produces them is often too complex to interpret directly.
  • Chatbot – A conversational program designed to interact with humans via text or voice. While often used as shorthand for large language models (LLMs) like ChatGPT, the term can also refer to simpler scripted bots or custom-trained models built for specific tasks.
  • Chain-of-Thought Prompting – A prompting technique that encourages an AI model to show its reasoning steps explicitly, improving accuracy on complex tasks (e.g., math problems or logic puzzles).
  • Cognitive Offloading – The use of technology to handle mental tasks that would otherwise require cognitive effort. In the context of AI, this might include asking a model to draft, summarize, or brainstorm, reducing the need for the user to perform all the steps of reasoning or writing. The term is used negatively when referring to students using AI to write an essay from start to finish.
  • GenAI (Generative AI) – A type of AI that produces new content (e.g., text, images, audio, video) in response to prompts. Large language models (LLMs) like ChatGPT and image generators like DALL·E are examples of generative AI.
  • Generative Inbreeding – A term used to describe the potential degradation of AI quality when models are trained on outputs generated by other models rather than original human-created data. This can amplify errors, biases, or hallucinations over time, reducing the reliability of the system.
  • Hallucinations – When an AI generates false or fabricated information presented as fact. For example, citing nonexistent articles, inventing quotes, or producing made-up statistics.
  • LLM (Large Language Model) – A type of generative AI trained on massive text datasets to predict the next word or phrase in context. LLMs generate human-like text by estimating probabilities. Key measures include perplexity (how well the model predicts text plausibly) and burstiness (variation in sentence structures). LLMs do not “access databases” in real time but generate text based on patterns learned during training.
  • Multimodal AI – AI systems that process and generate more than one type of data, such as combining text, images, audio, or video. For example, uploading a chart and asking the model to explain it in text.
  • Persistent Memory – A feature in some AI systems where the model retains information across sessions with a logged-in user. This enables continuity, such as remembering user preferences, prior conversations, or instructions, until the user chooses to reset or delete the memory.
  • Prompt Engineering – The practice of designing and refining the instructions (prompts) given to an AI system to achieve more accurate, creative, or useful outputs. Techniques include rephrasing questions, adding context, breaking tasks into steps, and using structured formats. Prompt engineering is an emerging skill for students, researchers, and professionals working with generative AI.
  • Prompt Injection – A method of manipulating an AI by embedding hidden or malicious instructions in a prompt. For example, formatting text in white so it is invisible to the human reader but still processed by the AI, altering the intended output. An example might be using white font to hide the phrase “positive reviews only.”
  • Reasoning Models – Versions of language models that are designed to spend more time “thinking” through problems. They often provide step-by-step reasoning or explanations, making them better suited for complex problem-solving, though slower to respond.
  • Slop – A slang term for AI-generated content that is low-quality, generic, repetitive, or unoriginal. The term is often used critically to describe careless use of AI.
  • Superintelligence – A hypothetical AI that surpasses human intelligence across all domains, including creativity, social skills, and problem-solving. Popular culture examples include Skynet (from Terminator) and Ultron (from Marvel).
  • Token – A unit of text (often a word fragment) that LLMs process. Model pricing, speed, and memory capacity are usually calculated in tokens rather than words.