⏰ Featured AIs: AMD Releases Instella and Alibaba Releases Babel

Hi There,

Dive into the hottest AI breakthroughs of the week—handpicked just for you!

Super Important AI News 🔥 🔥 🔥

Alibaba Releases Babel: An Open Multilingual Large Language Model (LLM) Serving Over 90% of Global Speakers

📢  Meta AI Introduces Brain2Qwerty: Advancing Non-Invasive Sentence Decoding with MEG and Deep Learning

Featured AI Update 🛡️🛡️🛡️

🔥 Alibaba Releases Babel: An Open Multilingual Large Language Model (LLM) Serving Over 90% of Global Speakers

Researchers from Alibaba Group's DAMO Academy introduced Babel, a multilingual LLM designed to serve over 90% of global speakers by covering the top 25 most spoken languages, bridging the gap left by models that concentrate on a handful of high-resource languages. Babel employs a layer extension technique to expand model capacity without compromising performance. The team released two variants: Babel-9B, optimized for efficient inference and fine-tuning, and Babel-83B, which establishes a new benchmark in multilingual NLP. Unlike previous models, Babel includes widely spoken but often overlooked languages such as Bengali, Urdu, Swahili, and Javanese. The researchers also focused on data quality, implementing a rigorous pipeline that curates high-quality training datasets from multiple sources…
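If you want to try Babel yourself, the checkpoints are distributed through Hugging Face and load with the standard Transformers API. Below is a minimal sketch; the repository name Tower-Babel/Babel-9B-Chat and the use of bfloat16 are assumptions, so check the official release page for the exact model ID before running it.

# Minimal sketch: loading a Babel checkpoint with Transformers.
# NOTE: the model ID below is an assumption -- verify the exact repo name on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tower-Babel/Babel-9B-Chat"  # assumed repo name; Babel-83B follows the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision so the 9B model fits on a single GPU
    device_map="auto",
)

# Multilingual prompt in Swahili: "Briefly explain how rain forms."
prompt = "Eleza kwa ufupi jinsi mvua inavyotokea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))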

Other AI News 🎖️🎖️🎖️

Coding Tutorial 👩🏼‍💻👩🏼‍💻

</> Fine-Tuning NVIDIA NV-Embed-v1 on Amazon Polarity Dataset Using LoRA and PEFT: A Memory-Efficient Approach with Transformers and Hugging Face [Colab Notebook Included]

In this tutorial, we explore how to fine-tune NVIDIA’s NV-Embed-v1 model on the Amazon Polarity dataset using LoRA (Low-Rank Adaptation) with PEFT (Parameter-Efficient Fine-Tuning) from Hugging Face. By leveraging LoRA, we adapt the model efficiently without updating all of its parameters, making fine-tuning feasible on GPUs with limited VRAM…

# Authenticate with the Hugging Face Hub (required to download NV-Embed-v1)
from huggingface_hub import login

login()  # Enter your Hugging Face token when prompted


# Alternatively, expose the token through an environment variable
import os
HF_TOKEN = "...."  # Replace with your actual token
os.environ["HF_TOKEN"] = HF_TOKEN


# Core libraries: PyTorch, Transformers (model, tokenizer, Trainer), Datasets, and PEFT for LoRA
import torch
import torch.distributed as dist
from transformers import AutoModel, AutoTokenizer, TrainingArguments, Trainer
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
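From here, the tutorial loads the dataset and the base model, then wraps the model with a LoRA adapter. A condensed sketch of those steps is shown below; the target_modules names are an assumption (inspect the model's layers to pick the right projection names), and the full working version is in the included Colab notebook.

# Load the Amazon Polarity dataset (binary sentiment: negative / positive reviews)
dataset = load_dataset("amazon_polarity")

# Load NV-Embed-v1; trust_remote_code is required for its custom architecture
model_name = "nvidia/NV-Embed-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16)

# LoRA configuration: train small low-rank adapter matrices instead of the full model.
# NOTE: target_modules is an assumption -- check model.named_modules() for the
# attention projection layers that actually exist in NV-Embed-v1.
lora_config = LoraConfig(
    r=16,              # rank of the low-rank update matrices
    lora_alpha=32,     # scaling factor applied to the adapter output
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed layer names
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the weights are trainable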