AI Insights: Liquid Foundation Models (LFMs) and Prithvi WxC Released

Newsletter Series by Marktechpost.com

Hi There,

It was another busy week with plenty of news and updates about artificial intelligence (AI) research and development. We have curated the top industry research updates especially for you. We hope you enjoy them, and make sure to share your opinions with us on social media.

Super Important AI News 🔥 🔥 🔥

🎃 Liquid AI Introduces Liquid Foundation Models (LFMs): A 1B, 3B, and 40B Series of Generative AI Models

⭐ Microsoft Released VoiceRAG: An Advanced Voice Interface Using GPT-4o and Azure AI Search for Real-Time Conversational Applications

📍 Prithvi WxC Released by IBM and NASA: A 2.3 Billion Parameter Foundation Model for Weather and Climate

🧲 Google Releases FRAMES: A Comprehensive Evaluation Dataset Designed to Test Retrieval-Augmented Generation (RAG) Applications on Factuality, Retrieval Accuracy, and Reasoning

📊 CopilotKit’s CoAgents: The Missing Link that Makes It Easy to Connect LangGraph Agents to Humans in the Loop

🔖 Ovis-1.6: An Open-Source Multimodal Large Language Model (MLLM) Architecture Designed to Structurally Align Visual and Textual Embeddings

⛳ OpenAI Introduces the Realtime API: Developers can now build fast speech-to-speech experiences into their applications

Featured AI Research 🛡️🛡️🛡️

MALPOLON: A Cutting-Edge AI Framework Designed to Enhance Species Distribution Modeling Through the Integration of Geospatial Data and Deep Learning Models

A research team from INRIA, the University of West Bohemia, the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), and Université Paul Valéry developed MALPOLON, a comprehensive Python framework for deep species distribution modeling (SDM). Built on PyTorch and PyTorch Lightning, it provides a seamless platform for training deep SDMs and running inference with them. MALPOLON’s design caters to both novice and advanced users, offering a range of plug-and-play examples and a highly modular structure. It supports multi-modal data integration, allowing researchers to combine diverse data types such as satellite images, climatic time series, and environmental rasters to build robust predictive models. The framework’s modular architecture makes its components straightforward to modify, so users can easily customize data preprocessing, model structures, and training loops, along the lines of the sketch below.
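MALPOLON’s own API is not reproduced here, but the design the paragraph describes (swappable per-modality encoders inside a PyTorch Lightning training loop) can be sketched generically. The following is a minimal, hypothetical example rather than MALPOLON code: all names (MultiModalSDM, n_env_features, the tiny encoders) are illustrative assumptions.

```python
# Illustrative sketch only (not MALPOLON's actual API): a PyTorch Lightning
# module that fuses two modalities -- satellite image patches and tabular
# environmental covariates -- for multi-label species presence prediction.
import torch
import torch.nn as nn
import pytorch_lightning as pl


class MultiModalSDM(pl.LightningModule):
    def __init__(self, n_species: int = 100, n_env_features: int = 19):
        super().__init__()
        # CNN encoder for satellite image patches (3-channel input assumed)
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP encoder for environmental variables sampled at occurrence points
        self.env_encoder = nn.Sequential(
            nn.Linear(n_env_features, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings, score each species
        self.head = nn.Linear(32 + 32, n_species)

    def forward(self, image, env):
        z = torch.cat([self.image_encoder(image), self.env_encoder(env)], dim=1)
        return self.head(z)

    def training_step(self, batch, batch_idx):
        image, env, labels = batch  # labels: multi-hot presence/absence vector
        logits = self(image, env)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

The modularity described above corresponds to the seams in this sketch: each encoder, the fusion head, and the training step can be swapped independently without rewriting the rest of the pipeline.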

Other AI News 🎖️🎖️🎖️

🎯 Researchers from MIT and Peking University Introduce a Self-Correction Mechanism for Improving the Safety and Reliability of Large Language Models

♦️ Researchers from UC Berkeley Present UnSAM in Computer Vision: A New Paradigm for Segmentation with Minimal Data, Achieving State-of-the-Art Results Without Human Annotation

🧩 MIO: A New Multimodal Token-Based Foundation Model for End-to-End Autoregressive Understanding and Generation of Speech, Text, Images, and Videos

📢 MassiveDS: A 1.4 Trillion-Token Datastore Enabling Language Models to Achieve Superior Efficiency and Accuracy in Knowledge-Intensive NLP Applications

🥁 AMD Releases AMD-135M: AMD’s First Small Language Model Series Trained from Scratch on AMD Instinct™ MI250 Accelerators Utilizing 670B Tokens

🎙️ STGformer: A Spatiotemporal Graph Transformer Achieving Unmatched Computational Efficiency and Performance in Large-Scale Traffic Forecasting Applications

Trending Tweets 🐤🐤🐤

➡️ Is training large models specifically for reasoning enough? [Tweet]

➡️ Introducing Reverse UI: an animated UI library reverse-engineered from the most beautiful websites on the web. Available in React and Vanilla JavaScript. [Tweet]

➡️ New Paper 📢 Not All LLM Reasoners Are Created Equal: just because models have high scores on GSM8K doesn't mean they can solve two linked questions! [Tweet]

➡️ When Double Descent & Benign Overfitting became a thing, I was a master's student in statistics, and so confused. I couldn't reconcile what I had literally just learned about bias-variance & co. with modern ML [Tweet]

➡️ Diffusion models turn the data into a mixture of isotropic Gaussians, and so struggle to capture the underlying structure when trained on small datasets. In our new #ECCV2024 paper, we introduce RS-IMLE, a generative model that gets around this issue. [Tweet]

Interested in promoting your company, product, service, or event to over 1 million AI developers and researchers? Let's collaborate!