Marktechpost Newsletter: Is OpenAI Following Anthropic in Critic LLMs? Meta LLM Compiler and Two AI Releases SUTRA Model

Want to get in front of 1.5 Million AI enthusiasts? Work with us here

Featured Research…

OpenAI Introduces CriticGPT: A New AI Model Based on GPT-4 to Catch Errors in ChatGPT's Code Output

OpenAI researchers have introduced CriticGPT, a GPT-4-based model that helps human trainers spot errors in ChatGPT's responses. CriticGPT's primary purpose is to produce thorough critiques that draw attention to mistakes, especially in code outputs. The model was created to overcome the inherent limitations of human review in reinforcement learning from human feedback (RLHF), offering a scalable supervision mechanism that improves the precision and dependability of AI systems.

CriticGPT has proven remarkably effective in enhancing the assessment procedure. In OpenAI's experiments, human reviewers who examined ChatGPT's code outputs with CriticGPT's assistance outperformed unassisted reviewers 60% of the time. This advancement highlights CriticGPT's ability to strengthen human-AI cooperation and produce more thorough and accurate evaluations of AI outputs.

In light of these results, efforts are underway to incorporate CriticGPT-like models into the RLHF labeling pipeline. Through this integration, AI trainers will have access to explicit AI assistance, facilitating the evaluation of advanced AI system outputs. This is an important development because it tackles a core issue of RLHF: as models grow more capable, human trainers find it increasingly hard to identify subtle errors in their outputs.
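
CriticGPT itself has not been released publicly, but the critic-in-the-loop pattern it implements is easy to picture. The sketch below emulates it with a general-purpose model behind OpenAI's public chat API; the model name, prompt wording, and helper function are illustrative assumptions, not OpenAI's actual RLHF tooling.

```python
# Minimal sketch of the critic-in-the-loop pattern described above.
# CriticGPT is internal to OpenAI, so we emulate the idea with a
# general-purpose model and a critique-oriented prompt; the model name
# and prompt are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_PROMPT = (
    "You are a code reviewer assisting an RLHF labeler. "
    "List concrete bugs or flaws in the assistant's answer, "
    "quoting the offending lines. If you find none, say so."
)

def critique(question: str, answer: str) -> str:
    """Ask a critic model to highlight errors in a candidate answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for a CriticGPT-like critic
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": f"Question:\n{question}\n\nAnswer:\n{answer}"},
        ],
    )
    return response.choices[0].message.content

# A human trainer would read this critique alongside the raw answer
# before assigning an RLHF preference label.
print(critique("Reverse a linked list in Python.", "def rev(l): return l[::-1]"))
```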

Editor's Picks…

Two AI Releases SUTRA: A Multilingual AI Model Improving Language Processing in Over 30 Languages for South Asian Markets

SUTRA's architecture comprises two mixture-of-experts transformers: a concept model and an encoder-decoder translation model. The concept model is trained to predict the next token, leveraging publicly available datasets primarily in data-rich languages such as English. The translation model, in turn, is trained on 100 million human- and machine-translated conversations across multiple languages, allowing it to map concepts to similar embeddings in all supported languages.

The integration of these models works as follows: the translation model's encoder generates an initial embedding from the input text, the concept model processes that embedding, and the translation model's decoder produces the final output. This approach lets SUTRA handle a diverse range of languages effectively, making it a robust tool for multilingual communication.
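
To make the described data flow concrete, here is a minimal PyTorch sketch of the encoder → concept model → decoder composition. Every class, dimension, and layer count below is a hypothetical stand-in; SUTRA's actual mixture-of-experts transformers are not published at this level of detail.

```python
# Illustrative sketch of SUTRA's described data flow; every module here
# is a hypothetical stand-in for a mixture-of-experts transformer stack.
import torch
import torch.nn as nn

DIM, VOCAB = 512, 32000  # illustrative sizes, not SUTRA's real config

class Stub(nn.Module):
    """Stand-in for one mixture-of-experts transformer stack."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.layers = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

class SutraSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = Stub()  # translation encoder: text -> language-neutral embedding
        self.concept = Stub()  # concept model: next-token reasoning over concepts
        self.decoder = Stub()  # translation decoder: concepts -> target language
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))  # 1. encode input in any language
        h = self.concept(h)                   # 2. reason in the shared concept space
        h = self.decoder(h)                   # 3. decode into the target language
        return self.lm_head(h)                # per-token vocabulary logits

logits = SutraSketch()(torch.randint(0, VOCAB, (1, 16)))
print(logits.shape)  # torch.Size([1, 16, 32000])
```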

ADVERTISEMENT

Meet Gretel Navigator: the first compound AI system built to create, edit, and augment tabular data. 🚀🚀🚀

Get inspired by popular Navigator use cases:

  • Empower frontier AI teams with high-quality datasets to train LLMs.

  • Safeguard sensitive, proprietary datasets when evaluating public ML models.

  • Teach LLMs new tasks or domains for new generative AI-powered applications.

  • Augment real-world data to build more performant intelligent applications.

  • Generate synthetic question-truth pairs to evaluate RAG models.

    [Sign Up for Free] Try Gretel Navigator, the first compound AI system built to create, edit, and augment tabular data.

Meta AI Introduces Meta LLM Compiler: A State-of-the-Art LLM that Builds upon Code Llama with Improved Performance for Code Optimization and Compiler Reasoning

Researchers at Meta AI have introduced the Meta Large Language Model Compiler (LLM Compiler), designed specifically for code optimization tasks. Built on Code Llama's foundation, the model is fine-tuned on an extensive corpus of 546 billion tokens of LLVM intermediate representation (IR) and assembly code. By leveraging this training, the Meta AI team aims to address the specific needs of compiler optimization, and the model is available under a bespoke commercial license to facilitate broad use by academic researchers and industry practitioners.

The LLM Compiler undergoes a robust pre-training process on 546 billion tokens of compiler-centric data, followed by instruction fine-tuning on 164 billion tokens for downstream tasks such as flag tuning and disassembly. The model is available in 7-billion- and 13-billion-parameter versions. This training enables the model to perform sophisticated code-size optimization and to accurately convert assembly code back into LLVM IR. The training stages cover understanding the input code, applying various optimization passes, and predicting the resulting optimized code and its size. This multi-stage pipeline ensures that the LLM Compiler handles complex optimization tasks efficiently.
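
For readers who want to experiment, the sketch below shows how such a model could be prompted with LLVM IR through Hugging Face transformers. The checkpoint ID is our assumption about the public release, and the prompt format is illustrative rather than Meta's documented template.

```python
# Minimal sketch of prompting the LLM Compiler with LLVM IR.
# The hub ID below is an assumption about the public release (it is
# gated behind Meta's license), and the prompt format is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask the model to emulate size optimization on a toy IR function.
ir = """define i32 @square(i32 %x) {
entry:
  %mul = mul nsw i32 %x, %x
  ret i32 %mul
}"""
prompt = f"Optimize the following LLVM IR for code size:\n{ir}\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```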

Jina AI Releases Jina Reranker v2: A Multilingual Model for RAG and Retrieval with Competitive Performance and Enhanced Efficiency

Jina AI has released Jina Reranker v2 (jina-reranker-v2-base-multilingual), an advanced transformer-based model fine-tuned for text reranking tasks. It is designed to significantly enhance the performance of information retrieval systems by accurately reranking documents according to their relevance to a given query. It operates as a cross-encoder, taking a query-document pair as input and outputting a relevance score for the document with respect to the query.

The Jina Reranker v2 model builds on the capabilities of its predecessor, jina-reranker-v1-base-en, and extends its functionality to multiple languages. This makes it particularly valuable in multilingual settings, where it can accurately handle and rerank documents across different languages. The model has demonstrated competitive performance across various benchmarks, including text retrieval, multilingual capability, function-calling-aware and text-to-SQL-aware reranking, and code retrieval tasks.
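
A minimal usage sketch follows, assuming the checkpoint exposes a `compute_score` helper when loaded with `trust_remote_code=True` (as Jina's model card describes); exact argument names may vary between releases.

```python
# Cross-encoder reranking with jina-reranker-v2-base-multilingual.
# Assumption: the checkpoint ships a compute_score helper via
# trust_remote_code=True, per Jina's model card.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "jinaai/jina-reranker-v2-base-multilingual",
    torch_dtype="auto",
    trust_remote_code=True,
)
model.eval()

query = "How do I install CUDA on Ubuntu?"
documents = [
    "CUDA installation guide for Ubuntu 22.04.",
    "Receta de paella valenciana tradicional.",      # multilingual distractor
    "apt-get install nvidia-cuda-toolkit walkthrough.",
]

# Each (query, document) pair is read jointly by the cross-encoder, so
# scores reflect fine-grained interaction, not just embedding distance.
pairs = [[query, doc] for doc in documents]
scores = model.compute_score(pairs, max_length=1024)

for doc, score in sorted(zip(documents, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```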

Marktechpost Mentions…