Mixture of Experts (MoE)
Mixture of Experts (MoE) is an architecture in which a model contains multiple specialized "expert" sub-networks plus a learned router that picks which ones to activate for each input. Only a fraction of the model's parameters run for any given token, so you get much of a huge model's quality at a fraction of the compute cost.
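A minimal sketch of the routing idea in plain numpy (all sizes, weights, and the function names here are made up for illustration): a router scores every expert, only the top-k actually run, and their outputs are blended using the softmaxed scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration.
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward layer (just a weight matrix here).
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

# The router is a learned linear layer that scores each expert per token.
router_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    scores = x @ router_w                # one score per expert
    top = np.argsort(scores)[-top_k:]    # indices of the top-k experts
    # Softmax over just the selected experts' scores.
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # Only the selected experts run; the rest are skipped entirely.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)  # same shape as the input vector
```

The compute saving comes from the skip: with 4 experts and top-2 routing, half the expert parameters never touch this token.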
Related Terms
LLM (Large Language Model)
The AI brain behind ChatGPT and similar tools. It's a massive neural network trained on enormous amounts of text that can understand and generate human-like writing. Think of it as autocomplete on steroids.
RAG (Retrieval Augmented Generation)
A technique that lets AI search your documents before answering questions. Instead of just making stuff up, it pulls real info from your data first. This is how you build a chatbot that actually knows your business.
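A toy sketch of that retrieve-then-answer flow (the documents, the keyword-overlap retriever, and the prompt template are all invented for illustration; real systems use embeddings and a vector store for the search step):

```python
# Toy retrieval: score each doc by word overlap with the question, then stuff
# the best match into the prompt so the model answers from real data.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

def retrieve(question, docs):
    """Return the doc sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question, docs):
    context = retrieve(question, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the refund policy?", docs))
```

The prompt that comes out contains the 30-day refund document, so the model grounds its answer in your data instead of guessing.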
Embeddings
A way to turn words, sentences, or documents into vectors (lists of numbers) that capture their meaning. Similar concepts get vectors that are close together, which lets AI find related content even when the exact words don't match.
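A toy illustration with hand-made 4-dimensional vectors (real embedding models produce hundreds or thousands of dimensions, and the numbers are learned rather than chosen by hand), using cosine similarity to compare meaning:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: near 1.0 means same direction (similar meaning),
    # near 0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy embeddings for illustration only.
emb = {
    "dog":     np.array([0.9, 0.8, 0.1, 0.0]),
    "puppy":   np.array([0.85, 0.75, 0.2, 0.05]),
    "invoice": np.array([0.0, 0.1, 0.9, 0.8]),
}

print(cosine(emb["dog"], emb["puppy"]))    # high: similar meaning
print(cosine(emb["dog"], emb["invoice"]))  # low: unrelated
```

Notice that "dog" and "puppy" score high without sharing a single letter of overlap; that is the whole point over keyword search.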
Fine-tuning
Teaching an existing AI model new tricks by training it on your specific data. It's like hiring someone with general skills, then training them on how your company does things.
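A stripped-down sketch of the idea using a tiny linear model in numpy (all of the data and numbers are invented for illustration): start from "pretrained" weights and run a few more gradient-descent steps on your own examples, instead of training from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this weight vector came out of general "pretraining".
w = rng.standard_normal(3)

# A handful of company-specific (input, target) examples.
X = rng.standard_normal((8, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # the behavior we want the model to pick up

# Fine-tuning = continuing gradient descent on the new data,
# starting from the pretrained weights rather than from zero.
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # mean-squared-error gradient
    w -= lr * grad

print(np.round(w, 2))  # approaches [1.0, -2.0, 0.5]
```

Real fine-tuning does the same thing to billions of neural-network weights, but the principle is identical: small additional updates that specialize an already-trained model.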
Prompt Engineering
The art of writing instructions that get AI to do what you actually want. It's surprisingly important—the same AI can give garbage or gold depending on how you ask.