Over the past 5 years, we’ve gone from language models that can barely output coherent English to ones that can write essays, browse the web, and partner with people from all walks of life on all kinds of tasks.
We have, for many practical purposes, created thinking machines – and yet, we barely understand how they work.
We really only know two things about how these models work: how to tweak and preface their inputs (prompting) and how to patch some of their outputs (fine-tuning).
Imagine if we understood programming as poorly as we understand LLMs. I give you a sorting algorithm, tell you “it generally sorts things but sometimes fails”, let you see the inputs and outputs, and tell you that it runs on an x86 architecture. What could you do with that algorithm? You could mess around with how you formatted and ordered inputs to the algorithm to improve results (“prompt it”). And you might be able to add some quick fixes on specific outputs where it fails (“fine-tune it”). Dealing with algorithms that worked this way would be a nightmare. All the tools available to programmers today — refactoring, editing, unit testing, formal verification, debuggers — none of them would be possible.
Unfortunately, that’s exactly the state of LLMs today — they’re black boxes that can model language. We know they take in inputs and spit out outputs. We know they run on a transformer architecture. And… that’s it.
This is why most AI infrastructure tooling today looks the same: it all helps you with one of two things, prompting or fine-tuning. Transformers, the machine learning architecture underlying practically all modern LLMs, are too poorly understood for developers to do anything else.
All of the good AI infra tools — the things that are to LLMs what AWS, Docker, or IDEs are to assembly — do not exist yet. With our current level of understanding of LLMs, no one can build them.
This is arguably the biggest problem in AI. When companies are hesitant to adopt AI in mission-critical systems, it’s because they’re black boxes that nobody can trust. When LLMs hallucinate and give undesirable output, it’s because they’re black boxes that nobody can fix. When people are worried about AI being unsafe, being difficult to regulate or protect society from, and being a potential threat to the very existence of humanity – it’s because they’re black boxes that nobody can understand.
Our goal at Martian is to make awesome AI tooling possible. We’re doing that by turning transformers into programs, allowing us to understand precisely how they work.
Over the next few months, we’ll be releasing some of the tools made possible by this understanding of AI. Today, we want to tell you about our first tool — the model router — and explain our approach to understanding how AI models work.
The Model Router
The cost of compute is going down. Models are becoming more efficient. Model training tools are becoming more widely available. These three trends lead to a world with a potentially huge number of models. Indeed, driven by more efficient scaling laws and the release of models like Meta’s Llama, that world has already started – there are now 380,000 open source models on Hugging Face, up from just 80,000 a year ago.
Different models can have radically different price points and radically different performance. The cheapest models can be >900x less expensive than the most expensive ones. Models can even have entirely different sets of capabilities. This creates a new challenge (find the right model to use), but also a new opportunity.
With many models available, companies can think about models the way they think about employees. You wouldn’t send junior work to a senior employee, or send scientific work to the finance department. By using a team of models in an application, you can achieve higher performance and lower cost than any single LLM could alone.
The model router is a tool that routes each individual query to the best LLM in real time, achieving higher performance and lower cost than any individual provider. In fact, we’ve built a model router capable of outperforming GPT-4 on OpenAI’s own evals – and doing so at a lower cost.
On openai/evals, an open source evaluation suite made available by OpenAI, the model router outperforms GPT-4 (getting performance at least as good, at a lower cost) on 91.8% of tasks. On average, it produces a 20% reduction in cost – when we optimize purely for performance. On some tasks, we can even get the same performance as GPT-4 with a 97% reduction in cost.
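To make the per-query decision concrete, here is a minimal routing sketch. The model names, scores, and costs below are hypothetical, and a real router learns its quality predictions rather than taking them as given:

```python
# Minimal routing sketch: given predicted quality scores and per-query
# costs for each candidate model (hypothetical numbers), pick the
# cheapest model expected to clear a quality bar, falling back to the
# strongest model when none does.

def route(scores, costs, min_score=0.9):
    qualified = [m for m, s in scores.items() if s >= min_score]
    if qualified:
        return min(qualified, key=lambda m: costs[m])  # cheapest adequate model
    return max(scores, key=scores.get)  # otherwise, best available

# Hypothetical per-query predictions for three models.
scores = {"large-model": 0.96, "mid-model": 0.92, "small-model": 0.71}
costs = {"large-model": 30.0, "mid-model": 1.5, "small-model": 0.4}

print(route(scores, costs))  # -> mid-model
```

The interesting work, of course, is in producing those score predictions without running every model; that is where understanding models comes in.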
And the best part is, it’s not just some 7-page paper without any details – you can go use it and reproduce our results here.
The model router is the first commercial application of large-scale AI interpretability. Routing between models is fundamentally about understanding how they work: what makes them succeed or fail. The better you understand models, the more effectively you can route between them. To develop the model router, we created a new way of looking at AI interpretability that allows us to understand much more deeply how models operate: model mapping.
Understanding Models Through Model Mapping
You may not have heard of model mapping before (we coined the phrase), but you’ve almost certainly interacted with an example of it — model distillation.
Model distillation is the practice of training a small model to mimic a larger one. We can view distillation as a mapping from larger transformers to smaller transformers that preserves the output of the original. This is really useful; it lets us get the same output with a smaller, faster, less expensive model. That’s why many engineers distill open-source models from larger closed-source ones, and why many companies deploying models will distill them first.
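As a toy illustration of the idea (not how production distillation is done; real distillation trains a smaller network to match a larger network’s output distributions), here is a two-parameter “student” fit by gradient descent to reproduce a “teacher” function:

```python
# Toy distillation sketch: fit a tiny "student" (two parameters) to
# reproduce a "teacher" function's outputs. Real distillation does the
# same thing at scale, matching a large model's output distribution.

def teacher(x):
    return 2.0 * x + 1.0  # stands in for the large model

def distill(xs, lr=0.05, steps=2000):
    w, b = 0.0, 0.0  # student parameters
    for _ in range(steps):
        for x in xs:
            err = (w * x + b) - teacher(x)  # student/teacher mismatch
            w -= lr * err * x               # gradient step on the error
            b -= lr * err
    return w, b

w, b = distill([0.0, 1.0, 2.0, 3.0])
print(round(w, 2), round(b, 2))  # the student recovers roughly w=2.0, b=1.0
```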
Model mapping is a generalization of this process: mapping (in the mathematical sense of “converting”) models into a new format, while preserving the properties we care about in the model.
Nothing says we have to map larger models into smaller ones. We could, for example, convert smaller models into larger ones. Such a mapping actually has direct application to AI interpretability.
One of the biggest difficulties in understanding how AI models work is polysemanticity — which is to say, the fact that each neuron inside of any given model appears to be doing many different things (Anthropic, for example, has done some very good work on this problem). Without separating out those different functions, it’s hard to interpret the models. By mapping to a larger model, we can split the functions of each neuron into several different neurons, creating a model which is larger but sparser and easier to understand.
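A stylized example of that expansion, with made-up features and weights (real methods have to learn the split; nothing here is a claim about actual model internals):

```python
# Stylized polysemanticity sketch: one neuron entangles two unrelated
# features; mapping to a wider layer gives each feature its own neuron
# while preserving the original neuron's output.

def polysemantic_neuron(is_cat, is_curve):
    # a single neuron that fires for both "cat" and "curve" features
    return 1.0 * is_cat + 0.8 * is_curve

def expanded_layer(is_cat, is_curve):
    # wider, sparser layer: one neuron per feature
    return (1.0 * is_cat, 0.8 * is_curve)

# The expanded layer preserves the original behavior, but each
# expanded neuron now responds to exactly one feature.
for cat in (0.0, 1.0):
    for curve in (0.0, 1.0):
        assert sum(expanded_layer(cat, curve)) == polysemantic_neuron(cat, curve)
```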
The most exciting part of model mapping is that mappings don’t just have to turn transformers into other transformers. Indeed, you could try to turn transformers into anything.
Take, for example, the problem of model routing. To route between models effectively, you want to understand what causes them to fail or succeed. Understanding these characteristics with model mapping lets us determine how well any given model will perform on a request without having to run that model. As a result, we can send each request to the model that will produce the best result.
One way of getting such an understanding is to embed models into a vector space in a way that preserves the expected performance of the model. This lets us predict how well a model will do on a prompt without having to run the model. Instead, we can use the tools used to understand embeddings (think “king” - “man” + “woman” = “queen”) in order to understand these models.
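One hedged sketch of how such a prediction might work, using toy 2-D embeddings and a nearest-neighbor lookup (a real system would use learned embeddings and a learned predictor):

```python
import math

# Sketch: estimate how well a model will do on a new prompt from its
# observed scores on nearby prompts in an embedding space. The 2-D
# embeddings and scores below are made up for illustration.

def predicted_score(embedding, history):
    """history: list of (prompt_embedding, observed_score) pairs."""
    nearest = min(history, key=lambda pair: math.dist(pair[0], embedding))
    return nearest[1]  # score of the most similar past prompt

history = [((0.0, 0.0), 0.9),   # e.g. short factual prompts
           ((5.0, 5.0), 0.4)]   # e.g. long reasoning prompts
print(predicted_score((0.5, 0.2), history))  # -> 0.9
```

The principle is that geometry in the embedding space stands in for actually running the model.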
And that’s just one example of how we can understand models by mapping them into other formats. Vastly more powerful mappings are possible – we can even turn transformers into programs.
By mapping transformers into programs, we can take all the tools used for understanding code and apply them to our models. Instead of dealing with millions or billions of numbers packed into incomprehensible matrices, we can read out the algorithms these models are implementing.
That is exactly what is needed to understand how AI models operate. And with that understanding, we have the potential to build truly awesome tools.
The model router is a tangible demonstration of the capabilities enabled by model mapping. But it’s just the beginning.
The Past & Future of AI
If we harken back to ye olde days of AI (2018, with models like BERT or GPT-1), there was only a single way of interacting with models: fine-tuning. Unlike today, we could not prompt our models to tell them what kind of output we wished to see or what problem we were trying to solve. We had no understanding of how the inputs to our models impacted the outputs, so we couldn’t prompt. The big story in making AI more useful was that we began to understand the relationships between the inputs and outputs of models.
Now that we can just talk to our models, it’s hard to appreciate just how consequential a breakthrough this was. As the research community figured this out, it was a big deal. The GPT-3 paper was even named after this phenomenon: “Language Models are Few-Shot Learners”. Or, to translate: if we show a model a few examples of something in its inputs, we can make it output more such examples.
Many of the biggest breakthroughs in AI since then have been works which give us a better understanding of the inputs to our models. Techniques like Chain-of-Thought prompting (which enabled the creation of AI agents) and Reinforcement Learning from Human Feedback (which enabled the creation of ChatGPT) did exactly this.
Each time we improve our fundamental understanding of models, it results in a paradigm shift for AI. Fine-tuning was the paradigm driven by understanding outputs. Prompting is the paradigm driven by understanding inputs. That single difference in our understanding of models is much of what differentiates traditional ML (“let’s train a regressor”) and modern generative AI (“let’s prompt a baby AGI”).
Our goal is to consistently deliver such breakthroughs until AI is fully understood and we have a theory of intelligence as robust as our theories of logic or calculus.
To us, this means building. It means creating awesome AI tooling and putting it into people’s hands. It means releasing things which break the mold, which no one has done before, and which — more than anything else — are interesting and useful.
In the words of Sir Francis Bacon, “Knowledge is power”. Accordingly, the best way to be sure that we understand AI is to release powerful tools. In our opinion, a model router is a tool of that kind. We’re excited to build it, grow it, and put it in people’s hands.
This is the first of many tools we’re going to release in the coming months. To discover a beautiful theory of artificial intelligence, to enable entirely new types of AI infrastructure, to help build a brighter future for both man and machine – we can’t wait to share those tools with you.