The learned values (weights and biases) in a neural network that determine its behavior. LLMs have billions of parameters.
Parameters in machine learning are the learnable values that define a model's behavior. For neural networks, parameters primarily consist of weights (connection strengths) and biases (offset values) that are adjusted during training.
Understanding parameter counts: model sizes are quoted by total parameter count (e.g. 7B = 7 billion parameters), which serves as a rough proxy for capacity.
What parameters represent: each weight scales a connection between neurons and each bias shifts a neuron's output; together they encode everything the model has learned from its training data.
Parameters vs hyperparameters: parameters are learned from data during training, while hyperparameters (learning rate, batch size, number of layers) are configuration choices set before training begins.
Resource implications: memory and compute requirements scale with parameter count; at 16-bit precision each parameter takes 2 bytes, so a 70B model needs roughly 140 GB just to store its weights.
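To make the weights-and-biases idea concrete, here is a minimal sketch (using a hypothetical toy network, not any real model) of how parameter counts are tallied for fully connected layers: each layer contributes one weight per input-output connection plus one bias per output neuron.

```python
def dense_params(n_in, n_out):
    """Parameters in a fully connected layer:
    weights (n_in * n_out) plus one bias per output neuron."""
    return n_in * n_out + n_out

# A toy 3-layer network: 784 -> 256 -> 64 -> 10
layers = [(784, 256), (256, 64), (64, 10)]
total = sum(dense_params(i, o) for i, o in layers)
print(f"total parameters: {total:,}")  # 218,058
```

The same bookkeeping, applied to transformer layers with billions of connections, is where LLM parameter counts come from.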
Parameter count is a rough proxy for model capability, not a guarantee. GPT-4 is widely reported (though not officially confirmed) to have around 1.7 trillion parameters; smaller models in the 7B-70B range can still be very capable for many tasks.
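The deployment-cost side of that trade-off can be sketched with back-of-envelope arithmetic: at 16-bit precision each parameter takes 2 bytes, so weight storage alone sets a memory floor (real serving also needs room for activations and the KV cache, so actual requirements are higher).

```python
# Memory floor for storing model weights at 16-bit precision (2 bytes/param).
# The 1.7T figure for GPT-4 is a widely reported estimate, not confirmed.
for name, params in [("7B", 7e9), ("70B", 70e9), ("1.7T", 1.7e12)]:
    gb = params * 2 / 1e9
    print(f"{name}: ~{gb:,.0f} GB")  # 7B: ~14 GB, 70B: ~140 GB, 1.7T: ~3,400 GB
```

This is why a 70B model can run on a small multi-GPU server while trillion-parameter models require large distributed clusters.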
We help Australian businesses choose models with appropriate parameter counts, balancing capability against deployment cost and speed.
"A 70B parameter model offers a good balance of capability and deployment cost for many enterprise applications."