Alan Dao's personal blog

AI Researcher

For work πŸ”—

About me πŸ”—

A passionate AI practitioner who wants to build things!

Some of the things I have built, contributed to, and invented with my dear friends and colleagues:

Things I have built!

Ichigo πŸ”—

Ichigo is a multimodal AI model that handles speech natively, with lower latency than traditional ASR-to-LLM pipelines. Find out more below:

AlphaMaze πŸ”—

AlphaMaze is a novel two-stage training framework that equips LLMs with visual reasoning abilities for maze navigation using Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO).

Find out more:
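The second stage, GRPO, scores each sampled response against the others in its group rather than against a learned value model. A minimal toy sketch of that group-relative advantage (illustrative only, not the AlphaMaze training code):

```python
# Toy sketch of GRPO's core idea: advantages are computed relative to a
# group of responses sampled for the same prompt, with no learned critic.

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward against the group's mean and std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: rewards for 4 maze-solving attempts sampled from the policy;
# successful attempts end up with positive advantage, failures negative.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Responses that beat their group's average are reinforced; the rest are pushed down.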

AlphaSpace πŸ”—

AlphaSpace, developed by Menlo Research, is a framework that enables large language models to perform precise spatial reasoning for robotic manipulation in 3D space. Instead of relying on vision models, it uses a hierarchical tokenization method to represent spatial concepts directly in language. This allows LLMs to understand and execute tasks like picking, placing, or stacking objects in simulated environments. AlphaSpace significantly outperforms models like GPT-4o and Claude 3.5 Sonnet on such tasks, and a 1.5B parameter version is publicly available for testing.

Find out more:
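To make the hierarchical-tokenization idea concrete, here is a hypothetical two-level scheme: a coordinate becomes a coarse block token plus a fine offset token, so the LLM reads positions as discrete vocabulary items. The token names and grid sizes are my assumptions; the actual AlphaSpace scheme may differ.

```python
# Hypothetical hierarchical spatial tokenization: split a grid coordinate
# into a coarse block token and a fine within-block offset token.

def tokenize_coord(x, y, grid=100, coarse=10):
    """Map an (x, y) cell on a grid x grid board to coarse/fine tokens."""
    cell = grid // coarse              # cells per coarse block (here: 10)
    cx, cy = x // cell, y // cell      # which coarse block the point is in
    fx, fy = x % cell, y % cell        # offset inside that block
    return [f"<coarse_{cx}_{cy}>", f"<fine_{fx}_{fy}>"]

tokens = tokenize_coord(37, 82)  # → ['<coarse_3_8>', '<fine_7_2>']
```

The payoff is that spatial relations (same block, adjacent block) become patterns over tokens that a language model can learn directly.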

Poseless πŸ”—

PoseLess is a novel framework for robot hand control that maps 2D images to joint angles without explicit pose estimation, enabling zero-shot generalization and cross-morphology transfer.

Find out more:

VoxRep πŸ”—

Comprehending 3D environments is vital for intelligent systems in domains like robotics and autonomous navigation. Voxel grids offer a structured representation of 3D space, but extracting high-level semantic meaning remains challenging. This paper proposes a novel approach utilizing a Vision-Language Model (VLM) to extract "voxel semantics" (object identity, color, and location) from voxel data. Critically, instead of employing complex 3D networks, our method processes the voxel space by systematically slicing it along a primary axis (e.g., the Z-axis, analogous to CT scan slices). These 2D slices are then formatted and sequentially fed into the image encoder of a standard VLM. The model learns to aggregate information across slices and correlate spatial patterns with semantic concepts provided by the language component. This slice-based strategy aims to leverage the power of pre-trained 2D VLMs for efficient 3D semantic understanding directly from voxel representations.

Find out more:
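The slicing step itself is simple: cut the voxel grid along the Z axis into 2D arrays that an image encoder can consume one at a time. A minimal sketch (the real pipeline's formatting and normalization are assumptions not shown here):

```python
import numpy as np

# Sketch of VoxRep's slice-based idea: a (Z, H, W) voxel grid becomes a
# sequence of 2D slices, analogous to CT scan slices, for a 2D VLM encoder.

def slice_voxels(voxels: np.ndarray) -> list:
    """Split a (Z, H, W) occupancy grid into per-slice 2D arrays."""
    return [voxels[z] for z in range(voxels.shape[0])]

grid = np.zeros((16, 32, 32), dtype=np.uint8)
grid[4, 10:14, 10:14] = 255          # place a small object on slice z=4
slices = slice_voxels(grid)
```

The VLM then sees the object appear and disappear across consecutive slices, which is the spatial pattern it learns to describe in language.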

ReZero πŸ”—

Retrieval-Augmented Generation (RAG) improves Large Language Model (LLM) performance on knowledge-intensive tasks but depends heavily on initial search query quality. Current methods, often using Reinforcement Learning (RL), typically focus on query formulation or reasoning over results, without explicitly encouraging persistence after a failed search. We introduce ReZero (Retry-Zero), a novel RL framework that directly rewards the act of retrying a search query following an initial unsuccessful attempt. This incentivizes the LLM to explore alternative queries rather than prematurely halting. ReZero demonstrates significant improvement, achieving 46.88% accuracy compared to a 25% baseline. By rewarding persistence, ReZero enhances LLM robustness in complex information-seeking scenarios where initial queries may prove insufficient.
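The reward shaping can be sketched as follows: on top of the usual correctness reward, add a bonus whenever the model follows an unsuccessful search with a different query. The signal names and weights below are my assumptions, not ReZero's exact reward function.

```python
# Toy sketch of a retry-rewarding signal in the spirit of ReZero: the model
# earns a bonus for reformulating its query after a failed search.

def retry_reward(search_log, answer_correct, retry_bonus=0.2):
    """search_log: list of (query, found_evidence) tuples for one episode."""
    reward = 1.0 if answer_correct else 0.0
    for (q1, ok1), (q2, _) in zip(search_log, search_log[1:]):
        if not ok1 and q2 != q1:       # retried with a *different* query
            reward += retry_bonus
    return reward

# First query fails, the model rephrases, then answers correctly.
r = retry_reward([("capital of X", False), ("X capital city", True)], True)
```

Because the bonus only fires on a reformulated query, the policy is pushed toward exploring alternatives rather than repeating itself or halting early.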

Jan πŸ”—

Jan is the most popular open-source AI chatbot client, with more than 1.5M downloads (and counting).

Nitro (now Cortex) πŸ”—

A super-lightweight inference engine for LLMs.

Publications πŸ”—

I also wrote some papers! (More coming soon.) Check out my Google Scholar here.


I believe that in the next 10 years, everyone will have access to the most sophisticated and well-tailored education system that has ever existed, thanks to mass AI adoption.

The future is bright! πŸš€