Performing tasks without any examples in the prompt, relying solely on the model's pre-trained knowledge and the task description.
Zero-shot learning is the ability of an AI model to perform a task for which it has seen no explicit examples, using only a natural-language description of what is needed. This demonstrates the generalisation power of modern LLMs.
How zero-shot works:
The model maps the task description onto patterns it absorbed during pre-training. Instructions such as "translate", "summarise", or "classify" activate behaviours the model has already internalised from vast amounts of text, so no in-prompt demonstrations are required.
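As a minimal sketch of that idea (the names here are illustrative, not from any particular library), a zero-shot prompt is nothing more than the instruction plus the input:

```python
def build_zero_shot_prompt(task_description: str, user_input: str) -> str:
    """Assemble a zero-shot prompt: task description plus input, no examples."""
    return f"{task_description}\n\nInput: {user_input}\nOutput:"

# Illustrative usage -- the resulting string would be sent to any LLM as-is.
print(build_zero_shot_prompt(
    "Translate the following English sentence into French.",
    "The weather is lovely today.",
))
```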
Zero-shot vs few-shot:
A zero-shot prompt contains only the instruction and the input. A few-shot prompt additionally includes a handful of worked input-output examples that show the model the expected format and level of detail.
When zero-shot works well:
- Common, well-defined tasks the model has seen many times during training (translation, summarisation, sentiment analysis)
- Tasks with clear, unambiguous instructions
- Outputs that do not need to follow a strict custom format
When to add examples (few-shot):
- Specialised domains or jargon the model may misinterpret
- Strict or unusual output formats
- Subtle boundaries between categories, or inconsistent zero-shot results
The sketch below shows how the two prompt styles differ in practice.
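In this hedged sketch, the function name and example data are hypothetical; the only difference from the zero-shot builder above is the block of worked demonstrations prepended to the instruction:

```python
def build_few_shot_prompt(task_description: str,
                          examples: list[tuple[str, str]],
                          user_input: str) -> str:
    """Prepend worked input/output pairs to the instruction so the model
    can infer the expected format and the boundaries between labels."""
    demos = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{task_description}\n\n{demos}\n\nInput: {user_input}\nOutput:"

# Hypothetical examples for a labelling task with subtle category boundaries.
examples = [
    ("The refund took three weeks to arrive.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]
print(build_few_shot_prompt(
    "Classify the customer comment as positive, negative, or neutral.",
    examples,
    "The product works, I suppose.",
))
```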
The emergence of zero-shot capability is one of the remarkable properties of large-scale language models: they can generalise to new tasks without explicit training on them.
Zero-shot works for straightforward tasks. For complex or specialised tasks, few-shot learning typically performs better.
We start with zero-shot approaches for simplicity. When more precision is needed, we add examples strategically. This iterative approach helps us find the optimal balance for each use case.
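One way to operationalise that escalation is sketched below; `call_model` is a hypothetical stand-in for a real LLM client, and its canned return value exists only so the snippet runs end to end:

```python
VALID_LABELS = {"positive", "negative", "neutral"}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "positive"  # canned reply so the sketch is self-contained

def classify(text: str, examples: list[tuple[str, str]] | None = None) -> str:
    """Build a zero-shot prompt, or a few-shot one if examples are given."""
    prompt = f"Classify this text as positive, negative, or neutral:\n\n{text}"
    if examples:
        demos = "\n\n".join(f"Text: {t}\nLabel: {lbl}" for t, lbl in examples)
        prompt = f"{demos}\n\n{prompt}"
    return call_model(prompt).strip().lower()

def classify_with_fallback(text: str, examples: list[tuple[str, str]]) -> str:
    """Start zero-shot; retry with examples only if the output is malformed."""
    label = classify(text)
    if label not in VALID_LABELS:
        label = classify(text, examples)
    return label
```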
"Asking "Classify this text as positive, negative, or neutral" without any examples - the model understands the task from the description alone."