
Alan Dao's personal blog

AI Researcher

🪔DeepSeek V3 Is Cool, Here Is Why

DeepSeek-V3 is the latest model in the DeepSeek model family. The model is packed with many cumulative efforts to improve its performance, and it is literally blowing every closed-source and open-source model out of the water given its size. But what makes DeepSeek-V3 so good? Let's dive in. Overall: At first you might have the impression that this model is a gigantic monster. Surely 671B parameters is big, but it is really not that big from an architecture point of view, as I will explain shortly.
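To make that size point concrete: DeepSeek-V3 is a Mixture-of-Experts model, so only a small slice of the 671B total parameters (roughly 37B) is activated for any given token. Below is a minimal top-k routing sketch in Python; the expert count, top-k value, and layer sizes are illustrative placeholders, not DeepSeek-V3's actual configuration.

```python
import torch
import torch.nn.functional as F

# Toy mixture-of-experts routing (illustrative sizes, not DeepSeek-V3's).
NUM_EXPERTS = 8   # hypothetical expert count
TOP_K = 2         # experts activated per token
D_MODEL = 16

experts = [torch.nn.Linear(D_MODEL, D_MODEL) for _ in range(NUM_EXPERTS)]
router = torch.nn.Linear(D_MODEL, NUM_EXPERTS)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    # x: (tokens, d_model). The router scores every expert for each token.
    scores = router(x)                          # (tokens, num_experts)
    weights, idx = scores.topk(TOP_K, dim=-1)   # keep only the top-k experts
    weights = F.softmax(weights, dim=-1)        # normalize the kept scores
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for slot in range(TOP_K):
            e = idx[t, slot].item()
            out[t] += weights[t, slot] * experts[e](x[t])
    return out

tokens = torch.randn(4, D_MODEL)
y = moe_forward(tokens)
# Only TOP_K / NUM_EXPERTS of the expert parameters touch each token,
# which is why the total parameter count overstates per-token compute.
```

The same logic scaled up is why a 671B-parameter MoE can run with the per-token cost of a much smaller dense model.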

🎄10 Papers That Caught My Attention: A Year in Review

2024, the year of papers: In 2024, over 100 papers were published daily on arXiv, a staggering amount that is impossible to read in full. However, I've come across a few fascinating AI papers, some less mainstream but with solid ideas, grouped into three main categories of interest: emergent compressive behavior, distribution matching, and alternatives. I will share what I learned by providing a short explanation (not just a summary) of each paper, as well as my general view on each category.

⚡Ultra Compact Text-to-Speech: A Quantized F5TTS

How small can an F5-TTS model get? Very small! You can now generate synthetic voice with AI, at pretty high quality, on very constrained hardware. To run the quantized version of F5-TTS you need only around ~400 MB of VRAM. (Spoiler: Mac only, currently.) Yes, that's everything you need to do voice generation (and voice cloning) on a MacBook in under 30s. The entire model pipeline includes the following types of weights:
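As a rough sanity check on that ~400 MB figure, weight memory is just parameter count times bits per weight. Here is a back-of-envelope sketch; the ~340M parameter count is an assumption for illustration, not a number from the post.

```python
def quantized_weight_size_mb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in MB: params * bits / 8, ignoring
    quantization metadata such as scales and zero-points."""
    return num_params * bits_per_weight / 8 / 1e6

# Hypothetical ~340M-parameter TTS backbone at different precisions.
params = 340e6
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: ~{quantized_weight_size_mb(params, bits):.0f} MB")
# 32-bit: ~1360 MB | 16-bit: ~680 MB | 8-bit: ~340 MB | 4-bit: ~170 MB
```

Under these assumptions, an 8-bit quantization of a model in this size class lands near the ~400 MB figure once activation buffers and quantization metadata are added on top.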

MLOps Engineer: A Pragmatic Perspective

I've noticed that there are plenty of guides and courses on "learning MLOps" available online, but many of them fail to address these critical aspects: what the daily work looks like, which roles can transfer into MLOps, and career prospects. With that in mind, I hope to provide a practical perspective based on my personal experience. This could help you adjust your learning path (if you're planning to transition into this position) or adapt when moving from large companies with standardized processes to smaller companies or startups.

Paper Summary: What Makes RoPE Useful

Everyone has been using Rotary Position Embedding (RoPE) as the default method for positional encoding for a while now. However, exactly how and why RoPE makes things "better" is still a little underexplored. Luckily there is a paper, Round and Round We Go! What Makes Rotary Positional Encodings Useful?, addressing this specific issue. I found a few of their results quite interesting. 1. RoPE does not necessarily decay activations with distance: In the original RoFormer paper, the authors presented an analysis suggesting that RoPE's contribution to attention decays to some degree as the context length (relative distance) increases.
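For context, here is a minimal NumPy sketch of RoPE itself: each pair of dimensions in a query/key vector is rotated by an angle proportional to the token position, so relative position enters attention purely through the dot product. This is a simplified illustration of the RoFormer formulation, not the paper's experimental setup.

```python
import numpy as np

def rope(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Apply Rotary Position Embedding to one vector x of even dim d.

    Dimension pair 2i is rotated by angle pos * base**(-2i/d),
    following the RoFormer formulation."""
    d = x.shape[-1]
    out = np.empty_like(x)
    for i in range(d // 2):
        theta = pos * base ** (-2 * i / d)
        c, s = np.cos(theta), np.sin(theta)
        out[2 * i]     = c * x[2 * i] - s * x[2 * i + 1]
        out[2 * i + 1] = s * x[2 * i] + c * x[2 * i + 1]
    return out

# The q·k score depends only on the relative offset between positions:
q, k = np.random.randn(8), np.random.randn(8)
a = rope(q, 5) @ rope(k, 2)      # positions (5, 2), offset 3
b = rope(q, 105) @ rope(k, 102)  # positions (105, 102), offset 3
print(np.allclose(a, b))  # True: the attention score is shift-invariant
```

The shift-invariance shown at the end is the well-known relative-position property of RoPE; whether the score also decays with growing offset is exactly the question the summarized paper revisits.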