
Alan

AI Practitioner

🍓 Ichigo: Llama Learns to Talk

We rebranded llama3-s as “Ichigo”, with a cute UI just like the one below. If you are coming from Singapore Tech Week, you can also visit the Homebrew blog via the new Announcement Post. I will update this blog post once our paper comes out.

Tutorial: High-Quality LLM on Low VRAM - Llama 3.1

With the recent release of cutting-edge models like Gemma 9B, Llama 3.1, and others, we are in an era where a model as small as 8B parameters can match the performance of ChatGPT 3.5 or ChatGPT 4 (according to the LMSYS ranking). Strangely, the general vibe in communities like r/LocalLLaMA does not seem to agree. So why does my Llama 3.1 seem, well, dumb? Or at the very least, nowhere near ChatGPT 3.5?

We Released a New Model

Going back to the post Multi Modal Tokenizing With Chameleon: I have been working with my team at Homebrew Research to build something new. We wanted to give the community something fresh, not simply a replica of Chameleon (vision modality). So we decided to work on a model that handles sound: one you can talk to, one you can give spoken commands to. A Llama model that can listen! “What is the color of the sky?”