We’re partnering with Accenture to power their AI "Switchboard", a multi-LLM platform built to serve the >$1B in GenAI deployments Accenture has booked this year. We're also launching Airlock, our LLM Compliance Automation tool. Both moves aim to accelerate enterprise AI adoption by simplifying model integration: we let companies use *every* AI model instead of being locked into just one.
With the release of Claude 3.5 Sonnet, there has been a lot of press about Moore's law and the recent drop in LLM prices. We put the question of where token prices are headed to our Co-Founder and Co-CEO Shriyash Upadhyay (Yash), and we wanted to share his perspective on the future of token pricing and token consumption. We think you will enjoy what Yash had to say.
Anthropic and OpenAI recently released groundbreaking mechanistic interpretability work on frontier models, using Sparse AutoEncoders (SAEs) at scale. Martian's research has uncovered why these methods are effective, leveraging category theory to understand models without manually inspecting individual features. This approach not only validates SAEs but also opens the door to other scalable interpretability methods, which Martian is exploring further.
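For readers unfamiliar with the technique, the sketch below shows the core SAE recipe: train an overcomplete autoencoder on a model's internal activations with an L1 sparsity penalty, so that each hidden unit tends to fire for a single interpretable feature. This is a minimal illustration of the general idea, not Anthropic's, OpenAI's, or Martian's actual implementation; the dimensions, penalty weight, and random stand-in activations are placeholder assumptions.

```python
# Minimal sketch of a sparse autoencoder (SAE) over model activations.
# Illustration of the general technique only; sizes, the L1 weight, and
# the random "activations" below are placeholders.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Overcomplete hidden layer (d_hidden >> d_model), so each hidden
        # unit can specialize to one interpretable "feature".
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # sparse feature activations
        reconstruction = self.decoder(features)  # reconstruct the input activations
        return reconstruction, features

# Toy training loop on random stand-in activations.
sae = SparseAutoencoder(d_model=512, d_hidden=4096)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)
activations = torch.randn(1024, 512)  # placeholder for real residual-stream activations

for step in range(100):
    recon, feats = sae(activations)
    # Reconstruction loss keeps features faithful; the L1 term pushes most
    # features to zero, which is what makes them interpretable.
    loss = ((recon - activations) ** 2).mean() + 1e-3 * feats.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```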
The recent exit of safety researchers from OpenAI underscores a troubling shift among AI giants from prioritizing safety to focusing on capabilities, potentially compromising AI's safe development. Driven by the need to remain competitive, AI companies increasingly prioritize advancing model capabilities and neglect crucial interpretability research, a misalignment of incentives that poses real safety risks. To counter this trend, companies like Martian are emerging with business strategies that put understanding and safety at the center of AI development, aiming to realign the industry toward a more secure and interpretable AI ecosystem.
The rising energy consumption of large language models (LLMs) threatens the sustainability and scalability of AI systems. Martian's "model routing" technique and industry initiatives like the Green Software Foundation offer promising ways to reduce both costs and emissions. The AI community must prioritize sustainability alongside performance to ensure the benefits of AI are realized while minimizing its carbon footprint.
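As a rough illustration of what model routing means in practice, the sketch below sends each prompt to the cheapest model expected to handle it and falls back to a larger model only for harder queries, which is where the cost and energy savings come from. It is a toy sketch, not Martian's production router; the difficulty heuristic, model names, prices, and energy figures are placeholder assumptions.

```python
# Toy sketch of model routing: easy queries go to a small, cheap model and
# hard ones to a large model. Illustration only; the scoring heuristic,
# model names, costs, and energy numbers are made up for the example.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float    # USD, illustrative
    energy_per_1k_tokens: float  # Wh, illustrative

SMALL = Model("small-llm", cost_per_1k_tokens=0.0005, energy_per_1k_tokens=0.3)
LARGE = Model("frontier-llm", cost_per_1k_tokens=0.015, energy_per_1k_tokens=3.0)

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned difficulty predictor: longer prompts and
    reasoning keywords are treated as harder."""
    length_score = min(len(prompt) / 2000, 1.0)
    keyword_score = 0.6 if any(k in prompt.lower() for k in ("prove", "derive", "multi-step")) else 0.0
    return min(length_score + keyword_score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> Model:
    """Pick the cheapest model expected to handle the prompt acceptably."""
    return LARGE if estimate_difficulty(prompt) > threshold else SMALL

if __name__ == "__main__":
    for p in ("Summarize this paragraph.", "Prove the claim with a multi-step derivation."):
        m = route(p)
        print(f"{p!r} -> {m.name} (${m.cost_per_1k_tokens}/1k tok, {m.energy_per_1k_tokens} Wh/1k tok)")
```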
Model mapping, a novel approach to mechanistic interpretability, transforms opaque neural networks into transparent, verifiable programs. The approach has multiple benefits: it offers a way to measure AI alignment and to improve model efficiency, adaptability, and human-AI interaction.
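To make the "network as program" idea concrete, the toy sketch below uses a common stand-in technique, distilling a small black-box classifier into a shallow decision tree, to produce an explicit rule set whose behavior can be read and checked against the original model. This is only an illustration of the general concept, not Martian's model-mapping method; the dataset, the models, and the agreement check are assumptions made for the example.

```python
# Toy "model -> program" illustration: distill a small black-box classifier
# into an explicit, human-readable decision tree and verify agreement.
# Stand-in for the general concept only, not Martian's technique; the
# dataset and models are placeholders.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Opaque model: a small neural network.
opaque = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# "Map" it to a transparent program by training a shallow tree to imitate
# the network's predictions, then inspecting the resulting rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, opaque.predict(X))

print(export_text(tree))  # the extracted, verifiable rule set
agreement = (tree.predict(X) == opaque.predict(X)).mean()
print(f"Agreement with the opaque model: {agreement:.1%}")
```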