Beginner’s Guide to AI Terminology: Your Essential AI Glossary  

Your go-to “AI glossary”: 50 key terms explained in plain language. Bookmark this page and come back whenever you wonder “what is a neural network?” or need a quick definition of an AI concept.

1. Artificial Intelligence (AI)  

The field of creating machines or software that can perform tasks normally requiring human intelligence—like reasoning, perception, or language understanding.

2. Algorithm  

A step-by-step procedure or set of rules that an AI system follows to solve a problem or make a decision.

3. Artificial Narrow Intelligence (ANI)  

AI systems designed to perform a single task—such as language translation or image recognition—very well, but without general reasoning ability.

4. Artificial General Intelligence (AGI)  

A hypothetical AI that possesses the flexibility and problem-solving skills of a human across a wide range of domains.

5. Bias (in AI)  

Systematic errors or prejudices in AI outputs, often caused by skewed training data that underrepresents certain groups or scenarios.

6. Chatbot  

An AI application that simulates human conversation via text or voice, often powered by natural language processing.

7. Classification  

A type of ML task where the model assigns inputs (emails, images) into discrete categories (spam/not-spam, cat/dog).
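As a minimal sketch (with made-up data), a nearest-centroid classifier assigns each new point to the class whose average training example it sits closest to:

```python
# Tiny classification example: label a point by its nearest class
# centroid. The training data here is invented for illustration.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(point, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

training = {
    "cat": [(1.0, 1.2), (0.8, 1.0)],   # small, light examples
    "dog": [(4.0, 5.0), (4.5, 4.8)],   # large, heavy examples
}
centroids = {label: centroid(pts) for label, pts in training.items()}
print(classify((1.1, 0.9), centroids))  # -> cat
```

Real systems use far richer models, but the core idea is the same: map an input to one of a fixed set of categories.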

8. Clustering  

An unsupervised learning technique that groups similar data points together without preexisting labels.
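A classic clustering method is k-means. Here is a minimal 1-D sketch (toy data, k = 2): assign each point to its nearest center, move each center to the mean of its points, and repeat:

```python
# Minimal k-means on 1-D data. No labels are used: the algorithm
# discovers the two groups on its own.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print(kmeans_1d(data, centers=[0.0, 10.0]))  # centers settle near 1.03 and 8.07
```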

9. Computer Vision  

The branch of AI that enables machines to “see” and interpret visual information from the world—photos, videos, live camera feeds.

10. Convolutional Neural Network (CNN)  

A neural network architecture specialized for processing grid-like data, such as images, by using “filters” that detect patterns like edges or textures.

11. Data Augmentation  

Techniques that create modified versions of training data—rotated images, noise-added audio—to improve a model’s robustness.
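One common image augmentation is a horizontal flip, sketched here on a tiny "image" represented as a list of pixel rows:

```python
# Horizontal flip: reverse each row of pixels. A flipped cat is
# still a cat, so the label stays the same while the model sees
# a new training example.

def flip_horizontal(image):
    return [row[::-1] for row in image]

image = [[1, 2, 3],
         [4, 5, 6]]
print(flip_horizontal(image))  # [[3, 2, 1], [6, 5, 4]]
```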

12. Dataset  

A structured collection of examples (text, images, audio) used for training or evaluating AI models.

13. Deep Learning  

A subset of machine learning that uses multi-layered neural networks to automatically learn hierarchical features from raw data.

14. Discriminator  

In a Generative Adversarial Network, the neural network that learns to distinguish real data from the fake data produced by the generator.

15. Edge Computing  

Processing AI workloads locally on devices (phones, sensors) rather than in centralized data centers, reducing latency and bandwidth use.

16. Explainable AI (XAI)  

Methods and tools that make AI decisions transparent and understandable to humans, helping build trust and detect bias.

17. Feature  

An individual measurable property or characteristic (e.g., pixel intensity, word frequency) used as input to an AI model.

18. Feature Engineering  

The process of selecting, transforming, or creating features from raw data to improve a model’s performance.
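A simple illustration: turning raw text into numeric features by counting a few hand-picked words (the vocabulary here is invented for the example):

```python
# Basic feature engineering for text: represent a message as
# counts of selected vocabulary words, which a model can consume
# as numeric input.

def word_count_features(text, vocabulary):
    words = text.lower().split()
    return [words.count(w) for w in vocabulary]

vocab = ["free", "winner", "meeting"]
print(word_count_features("Free prize for the lucky winner", vocab))
# -> [1, 1, 0]
```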

19. Generative Adversarial Network (GAN)  

A pair of neural networks (generator + discriminator) that compete so the generator learns to produce realistic synthetic data (images, audio).

20. Generative Model  

An AI model designed to create new data samples—text, images, audio—that resemble its training set.

21. GPT (Generative Pre-trained Transformer)  

A family of large language models (the technology behind ChatGPT) that generate human-like text by predicting the next word, or token, in a sequence.

22. Gradient Descent  

An optimization algorithm that iteratively adjusts a model’s parameters in the direction that reduces prediction error.
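For intuition, here is gradient descent on a single-variable toy problem: minimizing f(x) = (x − 3)², whose gradient is 2(x − 3). Stepping against the gradient walks x toward the minimum at 3:

```python
# Gradient descent on f(x) = (x - 3)^2. Each step moves x a small
# amount (the learning rate) in the direction that reduces f.

def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)          # derivative of (x - 3)^2
        x -= learning_rate * grad   # step downhill
    return x

print(round(gradient_descent(start=0.0), 4))  # converges to 3.0
```

Training a neural network applies the same idea, just over millions of parameters at once.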

23. Hyperparameter  

A configuration setting (learning rate, batch size, number of layers) chosen before training that affects how well an AI model learns.

24. Inference  

The phase where a trained AI model processes new data to make predictions or generate outputs.

25. Internet of Things (IoT)  

A network of physical devices (sensors, appliances) that collect and exchange data—sometimes analyzed by AI for insights.

26. Kernel (in SVM)  

A function that transforms data into a higher-dimensional space to make it easier for a Support Vector Machine to classify.
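The "kernel trick" in one sketch: the quadratic kernel (x·y + 1)² gives the same value as a dot product of explicit degree-2 feature vectors, without ever building those vectors. The points below are arbitrary:

```python
import math

# The kernel computes a similarity in a 6-dimensional quadratic
# feature space while only touching the original 2-D points.

def quad_kernel(x, y):
    return (x[0] * y[0] + x[1] * y[1] + 1) ** 2

def feature_map(x):
    # explicit degree-2 features of a 2-D point
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1],
            math.sqrt(2) * x[0], math.sqrt(2) * x[1], 1.0]

x, y = (1.0, 2.0), (3.0, 0.5)
direct = quad_kernel(x, y)
explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
print(direct, explicit)  # both ≈ 25.0
```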

27. Label  

The ground-truth output or category assigned to a data example (e.g., “cat” for an image of a cat).

28. Machine Learning (ML)  

The practice of training algorithms to learn patterns from data and improve automatically without explicit programming for each task.

29. Model  

The mathematical representation (often a neural network) that has learned patterns from data and can make predictions.

30. Natural Language Generation (NLG)  

AI techniques that convert structured data into human-readable text—used in report writing, chatbots, and content creation.

31. Natural Language Processing (NLP)  

The field focused on enabling computers to understand, interpret, and generate human language.

32. Neural Network  

A layered network of nodes (neurons) that processes input data, learns internal patterns, and produces outputs like classifications or predictions.

33. Optimizer  

An algorithm (Adam, RMSprop) that adjusts a neural network’s weights to minimize error during training.

34. Overfitting  

When a model learns training data—including noise—too well and performs poorly on new, unseen data.

35. Parameter  

A model’s internal variable (weight or bias) that is adjusted during training to learn from data.

36. Perceptron  

The simplest form of a neural network: a single neuron that computes a weighted sum of inputs, applies an activation function, and outputs a result.
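A perceptron is small enough to write out by hand. In this sketch the weights and bias are hand-picked (in practice they are learned) so the neuron computes logical AND:

```python
# A single perceptron: weighted sum of inputs, plus a bias, passed
# through a step activation.

def perceptron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0    # step activation

weights, bias = [1.0, 1.0], -1.5    # hand-picked to implement AND
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", perceptron([a, b], weights, bias))
```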

37. Prediction  

The output generated by an AI model when it processes new input data.

38. Prompt Engineering  

The craft of designing effective inputs (prompts) for large language models to guide them toward useful, accurate responses.

39. Recurrent Neural Network (RNN)  

A neural architecture that processes sequential data (text, time series) by maintaining a “memory” of past inputs.

40. Reinforcement Learning  

An AI training paradigm where an agent learns to make decisions by receiving rewards or penalties from its environment.

41. Regression  

A type of ML task where the model predicts a continuous value (e.g., price, temperature) rather than discrete categories.
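The simplest regression model is a straight line fit by ordinary least squares. A minimal sketch on invented data:

```python
# Fit y = slope * x + intercept by ordinary least squares.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [2.1, 4.0, 6.2, 7.9]          # roughly y = 2x
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))    # slope ≈ 1.96, intercept ≈ 0.15
```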

42. Robotics  

The integration of AI with mechanical systems to perform physical tasks—like autonomous navigation or assembly.

43. Semantic Segmentation  

A computer-vision task that labels each pixel in an image according to the object it belongs to (road, person, sky).

44. Supervised Learning  

A training approach where models learn from labeled examples—each input is paired with the correct output.

45. Tokenization  

The process of breaking text into smaller units (words, subwords, characters) that a language model can process.
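Two naive tokenizers show the extremes; real language models use subword schemes (such as byte-pair encoding) that sit in between:

```python
# Word-level vs. character-level tokenization of the same string.

def word_tokens(text):
    return text.split()

def char_tokens(text):
    return list(text)

text = "AI is fun"
print(word_tokens(text))  # ['AI', 'is', 'fun']
print(char_tokens(text))  # ['A', 'I', ' ', 'i', 's', ' ', 'f', 'u', 'n']
```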

46. Transformer  

An attention-based neural architecture that excels at processing sequences and powers models like GPT and BERT.

47. Transfer Learning  

A method where a model pretrained on one task is fine-tuned on a different but related task—saving time and data.

48. Underfitting  

When a model is too simple to capture underlying patterns in the data and performs poorly on both training and test sets.

49. Unsupervised Learning  

A training method where models discover patterns or groupings in unlabeled data.

50. Validation (Model Validation)  

The process of evaluating a trained model on a separate dataset to tune hyperparameters and estimate real-world performance.
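The basic mechanism is a hold-out split: train on one slice of the data and evaluate on another, which stands in for unseen real-world inputs. A minimal sketch:

```python
import random

# Reserve a fraction of the examples for validation. A fixed seed
# makes the shuffle reproducible.

def train_val_split(examples, val_fraction=0.2, seed=0):
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(10))
train, val = train_val_split(data)
print(len(train), len(val))  # 8 2
```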