
Alan Dao's personal blog
AI Researcher
This quarter, I focused on launching two major projects:
AlphaMaze

AlphaMaze is a two-stage training framework that enhances large language models with visual reasoning for maze navigation. It combines Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO) to help models build an internal “mental map” of their environment.
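As a rough illustration of the GRPO stage, here is a minimal sketch of the group-relative advantage computation. The reward values are hypothetical, and this is the generic GRPO formulation rather than AlphaMaze’s actual training code:

```python
import numpy as np

def grpo_advantages(group_rewards):
    """Group-relative advantages: each sampled completion is scored
    against the mean/std of its own group, so no separate learned
    critic is needed (the "relative" in GRPO)."""
    r = np.asarray(group_rewards, dtype=np.float32)
    return (r - r.mean()) / (r.std() + 1e-8)

# Hypothetical rewards for four maze rollouts sampled from one prompt:
# 1.0 = reached the goal, partial credit for valid intermediate moves.
print(grpo_advantages([1.0, 0.2, 0.0, 0.6]))
```

Rollouts that beat their group’s average get a positive advantage and are reinforced; the rest are pushed down, which is what nudges the model toward a usable “mental map” of the maze.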
Find out more:
Paper: arXiv:2502.14669
Code: GitHub - janHQ/visual-thinker
Live Demo: alphamaze.menlo.ai

PoseLess

PoseLess is a vision-based robot control framework that maps 2D images to joint angles without explicit pose estimation.
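To make that idea concrete, here is a minimal sketch of a direct image-to-joint-angle regressor. The layer sizes and the 21-joint output are illustrative assumptions, not PoseLess’s actual architecture:

```python
import torch
import torch.nn as nn

class ImageToJoints(nn.Module):
    """Regress joint angles straight from pixels, with no intermediate
    pose-estimation stage. All sizes here are placeholders."""

    def __init__(self, num_joints: int = 21):  # 21 joints: an assumed hand rig
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_joints)  # one angle per joint

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# A single 224x224 RGB frame in, a vector of joint angles out.
angles = ImageToJoints()(torch.randn(1, 3, 224, 224))
print(angles.shape)  # torch.Size([1, 21])
```

Trained end-to-end on (image, joint-angle) pairs, such a model skips the pose-estimation stage entirely, which is the framework’s core idea.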
If you’ve been keeping an eye on OpenRouter, you might have noticed that its total token usage has skyrocketed. In fact, I’ve tracked the public data from their website and compiled some stats, showing an incredible 76x growth since 2024 (in the chart, the y-axis is in units of 10 billion tokens).
So, what’s driving OpenRouter’s success?
What gave rise to OpenRouter’s success?

OpenRouter’s growth isn’t happening in isolation; it’s closely tied to the rapid adoption of AI-powered coding assistants.
DeepSeek-V3 is the latest model in the DeepSeek family. It is packed with many cumulative improvements that boost its performance.
Given its size, the model blows virtually every closed-source and open-source competitor out of the water.
But what made DeepSeek-V3 so good? Let’s take a deep dive.
Overall

At first you might get the impression that this model is a gigantic monster. Sure, 671B parameters is big, but from an architecture point of view it is really not that big, as I will explain shortly.
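A quick back-of-the-envelope shows why. DeepSeek-V3’s published figures are 671B total parameters with only about 37B activated per token, since its Mixture-of-Experts routing touches just a few experts per forward pass. The snippet below simply does that arithmetic:

```python
# Why a Mixture-of-Experts model is cheaper to run than its headline
# parameter count suggests. 671B total / ~37B active per token are
# DeepSeek-V3's published figures.
total_params_b = 671
active_params_b = 37

fraction_active = active_params_b / total_params_b
print(f"~{fraction_active:.1%} of the weights are touched per token")  # ~5.5%
```

So per token, the compute looks closer to a ~37B dense model than to a 671B one.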
2024, the year of papers

In 2024, over 100 papers were published daily on arXiv, a staggering volume that is impossible to read in full. However, I’ve come across a few fascinating AI papers, some less mainstream but with solid ideas, grouped into three main categories of interest:
Emergent compressive behavior
Distribution matching
Alternative

I will share what I learned by giving a short explanation (not just a summary) of each paper, along with some of my general views on each category.
How small can an F5-TTS model get? Very small!
You can now generate synthetic voices with AI at fairly high quality on very constrained hardware. Running the quantized version of F5-TTS takes only around 400 MB of VRAM. (Spoiler: Mac only, currently.)
Yes, that’s all you need to do for voice generation (and voice cloning) on a MacBook in under 30 seconds. The entire model pipeline includes several types of weights.
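As rough context for the ~400 MB figure, here is a sketch of how bits-per-weight drive a checkpoint’s memory footprint. The ~336M parameter count is an assumption about the base model’s size rather than an official figure, and the full pipeline also carries a vocoder and runtime buffers on top of the quantized weights:

```python
# Back-of-the-envelope VRAM for model weights at various precisions.
def weight_memory_mb(num_params: float, bits_per_weight: int) -> float:
    return num_params * bits_per_weight / 8 / 1024**2

params = 336e6  # assumed F5-TTS parameter count, not an official figure
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_mb(params, bits):6.0f} MB")
```

At 4 bits the weights alone fit in roughly 160 MB, which leaves comfortable headroom inside the ~400 MB budget for the rest of the pipeline.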