The numerical values in neural networks that are learned during training. They determine how strongly inputs influence outputs.
Weights are the learned parameters in neural networks that determine how information flows from inputs to outputs. During training, weights are adjusted to minimize prediction errors, encoding the patterns the model has learned.
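To make "how strongly inputs influence outputs" concrete, here is a minimal sketch of a single artificial neuron in Python. The input values, weights, and bias are made up for illustration and are not taken from any real model.

```python
# A single neuron: each input is scaled by its learned weight, the results
# are summed, and a bias is added. The numbers below are illustrative only.
inputs = [0.5, 0.8, 0.1]      # three input features
weights = [0.9, -0.3, 0.0]    # values a model might have learned
bias = 0.1

# A large weight (0.9) makes the first input matter a lot, a negative weight
# pushes the output down, and a weight of 0.0 ignores that input entirely.
output = sum(x * w for x, w in zip(inputs, weights)) + bias
print(output)  # 0.9*0.5 - 0.3*0.8 + 0.0*0.1 + 0.1 ≈ 0.31
```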
How weights work:
- Each input is multiplied by its weight; the weighted inputs are summed, a bias is added, and the result passes through an activation function
- During training, backpropagation and gradient descent repeatedly nudge every weight to reduce prediction error (see the sketch below)
- Once training finishes, the final weight values are what get saved and distributed as "the model"
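The training bullet can be made concrete with a toy gradient-descent loop: one weight is repeatedly adjusted so a prediction moves toward a target. The single-weight setup, data, and learning rate are simplifications for illustration; real networks update billions of weights at once.

```python
# Toy training loop: nudge one weight so that weight * x gets closer to target.
weight = 0.2
learning_rate = 0.1
x, target = 2.0, 1.0                      # one made-up training example

for step in range(5):
    prediction = weight * x               # forward pass
    error = prediction - target           # how far off the prediction is
    gradient = 2 * error * x              # derivative of squared error w.r.t. the weight
    weight -= learning_rate * gradient    # gradient descent update
    print(step, round(weight, 4), round(error ** 2, 4))
# The weight converges toward 0.5, the value that makes the prediction correct.
```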
Weight characteristics:
- Stored as large matrices (tensors) of floating-point numbers
- Make up almost all of a model's parameter count (a "7B" model has roughly seven billion of them)
- Fixed at inference time; they only change if the model is trained further (e.g. fine-tuned)
Weight-related concepts:
- Parameters: the umbrella term covering weights and biases
- Biases: learned offsets added to the weighted sum of inputs
- Fine-tuning: additional training that adjusts existing weights for a specific task
- Quantization: storing weights at lower numerical precision to shrink and speed up a model
Weights are what make a model "know" things. Fine-tuning adjusts weights; RAG adds knowledge without changing weights.
Understanding weights helps us explain to Australian businesses the difference between fine-tuning (changes weights) and RAG (uses weights as-is but adds context).
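A small, self-contained sketch of that distinction (the dictionary-of-associations "model" and the function names are invented for illustration, not a real framework or API): fine-tuning writes new values into the weights, while RAG leaves them untouched and supplies knowledge as extra context in the prompt.

```python
# Pretend "weights": a few learned associations with strengths.
model_weights = {"pizza->food": 0.8, "pizza->restaurant": 0.6}

def fine_tune(weights, association, strength):
    """Fine-tuning: the weight values themselves change."""
    updated = dict(weights)
    updated[association] = strength       # the model now "knows" this permanently
    return updated

def rag_prompt(question, retrieved_docs):
    """RAG: weights stay frozen; retrieved text is prepended as context."""
    return "\n".join(retrieved_docs) + "\n\nQuestion: " + question

tuned = fine_tune(model_weights, "barramundi->seafood", 0.7)
prompt = rag_prompt("Is barramundi a seafood dish?",
                    ["Menu note: barramundi is a popular Australian seafood."])
print(tuned)
print(prompt)
```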
"A model's weights encode patterns like "pizza" being related to "food" and "restaurant" - these associations are learned during training."