AI News: 🚀 What if ChatGPT were 3D? | Hugging Face Introduces StackLLaMA | Microsoft AI Open-Sources DeepSpeed Chat | Amazon Enters the Generative AI Race with Bedrock | OpenAI Releases the Code for Its Consistency Models
This newsletter brings AI research news that is more technical than most resources yet still digestible and applicable.
Hugging Face Introduces StackLLaMA: a 7B-parameter language model based on Meta’s LLaMA that has been trained to answer questions from Stack Exchange using RLHF with Hugging Face’s Transformer Reinforcement Learning (TRL) library. The researchers fine-tuned Meta’s original LLaMA model using three main stages: supervised fine-tuning (SFT), reward/preference modeling (RM), and reinforcement learning from human feedback (RLHF). The model can be accessed here, and the entire training pipeline is available as part of the TRL library.
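To make the reward-modeling (RM) stage concrete, here is a minimal, self-contained sketch of the pairwise preference loss typically used in that step (a Bradley–Terry–style objective). This is an illustrative toy, not Hugging Face’s TRL code; the function name and scores are hypothetical.

```python
import math

def reward_model_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
    Small when the reward model scores the human-preferred answer higher."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy scalar rewards for a Stack Exchange question with two candidate answers:
# the upvoted (chosen) answer should receive the higher reward.
loss_good = reward_model_loss(reward_chosen=2.0, reward_rejected=-1.0)  # correct ranking
loss_bad = reward_model_loss(reward_chosen=-1.0, reward_rejected=2.0)  # inverted ranking
print(f"correct ranking loss: {loss_good:.4f}")
print(f"inverted ranking loss: {loss_bad:.4f}")
```

Minimizing this loss over many (chosen, rejected) answer pairs trains the reward model that later scores the policy’s outputs during the RLHF stage.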
Google researchers present Zip-NeRF: a model that integrates progress from the formerly divergent areas of scale-aware anti-aliased NeRFs and fast grid-based NeRF training. By leveraging ideas about multisampling and prefiltering, the model achieves error rates 8%–76% lower than prior techniques while training 22× faster than mip-NeRF 360 (the previous state of the art on the authors’ benchmarks).
Calibrated Chaos: Variance Between Runs of Neural Network Training is Harmless and Inevitable. Neural network training is nondeterministic: repeated runs each produce a unique network, often with significantly varying test-set performance. Researchers from Hive AI demonstrate that this variation has a simple statistical structure and is both harmless and inevitable. For context, in standard CIFAR-10 training there exist rare “lucky seeds” attaining over +0.5% higher test-set accuracy than the average (10% fewer errors); ImageNet training is similar, with +0.4%. Differences of this size are considered significant in computer vision.
Microsoft AI Open-Sources DeepSpeed Chat: an end-to-end RLHF pipeline to train ChatGPT-like models. The researchers have included a complete end-to-end training pipeline in DeepSpeed-Chat, modeled after InstructGPT, to make the training process as streamlined as possible. The AI community can now access DeepSpeed-Chat thanks to its open-source release. On the DeepSpeed GitHub repository, the researchers invite users to report issues, submit PRs, and join discussions.
OpenAI released a Jupyter notebook that demos a Q&A workflow using the ChatGPT API as a base (similar to the agent/LangChain workflow). The notebook has good test cases for an AI Q&A system, including a fun prompt-injection one. It demonstrates a two-step Search-Ask method for enabling GPT to answer questions using a library of reference text: (1) Search: search your library of text for relevant text sections. (2) Ask: insert the retrieved text sections into a message to GPT and ask it the question.
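The two-step Search-Ask pattern can be sketched as follows. Note the hedges: the notebook uses embedding-based similarity search, while this toy uses simple keyword overlap so it runs offline, and the function names and library contents are hypothetical.

```python
def search(library, question, top_k=2):
    """Step 1 (Search): rank reference texts by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(library,
                    key=lambda text: -len(q_words & set(text.lower().split())))
    return ranked[:top_k]

def build_ask_prompt(question, sections):
    """Step 2 (Ask): insert the retrieved sections into the message sent to GPT."""
    context = "\n\n".join(f"Reference:\n{s}" for s in sections)
    return (f"Use the references below to answer the question.\n\n"
            f"{context}\n\nQuestion: {question}")

library = [
    "StackLLaMA is a 7B LLaMA model fine-tuned on Stack Exchange with RLHF.",
    "Amazon Bedrock offers pre-trained foundation models on AWS.",
    "Zip-NeRF combines anti-aliased NeRFs with fast grid-based training.",
]
question = "What is StackLLaMA?"
prompt = build_ask_prompt(question, search(library, question))
print(prompt)
```

The resulting `prompt` string is what would be sent as the user message to the ChatGPT API; grounding the model in retrieved references is also what makes the notebook’s prompt-injection test cases interesting.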
Amazon enters the generative AI race with Bedrock: Amazon has entered the field of generative AI, but with a different approach. Instead of solely developing AI models, Amazon is enlisting third-party hosts to offer models on AWS. The latest offering from AWS is Amazon Bedrock, which allows the creation of generative AI-based applications by utilizing pre-trained models from startups like AI21 Labs, Anthropic, and Stability AI. This service is presently in a "limited preview" phase and also provides access to in-house trained foundation models called Titan FMs.
What if ChatGPT were 3D? Structure GPT-4’s responses in 3D with Sensecape to better understand large amounts of text. New human-computer interaction (HCI) research makes AI easier to use: UCSD researchers introduce Graphologue and Sensecape. At a micro level, Graphologue transforms GPT-4-generated text into interactive node-link diagrams in real time, facilitating rapid comprehension and exploration of information. At a macro level, Sensecape enables users to spatially organize information obtained from GPT-4, offering a flexible way to organize and make sense of large amounts of information. These projects build on the researchers’ previous work aimed at enabling intuitive and flexible human-AI collaboration.
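To give a flavor of the node-link idea, here is a hypothetical sketch (not the UCSD systems’ code) that converts an indented LLM outline into a nodes-and-edges structure of the kind a diagram renderer could draw. The outline text and parsing convention (two spaces per level) are assumptions for illustration.

```python
def outline_to_graph(outline):
    """Each bullet line becomes a node; indentation depth links it to a parent."""
    nodes, links, stack = [], [], []  # stack holds (depth, node_index)
    for line in outline.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // 2
        idx = len(nodes)
        nodes.append(line.strip("- ").strip())
        # Pop back to this line's parent level.
        while stack and stack[-1][0] >= depth:
            stack.pop()
        if stack:
            links.append((stack[-1][1], idx))  # (parent, child) edge
        stack.append((depth, idx))
    return nodes, links

response = """- Generative AI
  - Text models
  - Image models
    - Diffusion"""
nodes, links = outline_to_graph(response)
print(nodes)
print(links)
```

A real system like Graphologue extracts richer relationships from free-form prose, but the output shape is similar: a list of concepts plus parent-child edges that a front end can lay out as an interactive diagram.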
OpenAI has released the code for its consistency models, a new technique for generating images with AI in a single shot. With this method, AI can generate images in real time without the many denoising steps that diffusion models require. Furthermore, it can perform zero-shot image editing, altering the color or style of existing images without additional data or training. The model achieves state-of-the-art performance on multiple image datasets, surpassing other few-step models in image quality. Real-time editing, NeRF rendering, and real-time game rendering are among the possible use cases.
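The core idea is a consistency function that maps a noisy sample at any noise level t straight back to clean data in one step, subject to the boundary condition f(x, ε) = x at the smallest noise level. Below is a toy sketch of the standard skip-connection parameterization that enforces this; the stand-in network `F`, the constants, and the inputs are hypothetical, not OpenAI’s released code.

```python
import math

EPS, SIGMA_DATA = 0.002, 0.5  # smallest noise level and data scale (assumed)

def c_skip(t):
    # Equals 1 at t == EPS, so the input passes through untouched there.
    return SIGMA_DATA**2 / ((t - EPS)**2 + SIGMA_DATA**2)

def c_out(t):
    # Equals 0 at t == EPS, silencing the network at the boundary.
    return SIGMA_DATA * (t - EPS) / math.sqrt(SIGMA_DATA**2 + t**2)

def F(x, t):
    return math.tanh(x + t)  # stand-in for a trained neural network

def consistency_fn(x, t):
    """One-step map from a noisy sample at level t back toward clean data."""
    return c_skip(t) * x + c_out(t) * F(x, t)

x = 0.7
print(consistency_fn(x, EPS))   # boundary condition: returns x itself
print(consistency_fn(x, 80.0))  # single-step generation from high noise
```

Because the boundary condition is built into `c_skip`/`c_out` rather than learned, training only needs to make outputs at adjacent noise levels agree with each other, which is what enables sampling in one (or a few) steps instead of a long denoising chain.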
AI Tools Club: find hundreds of cool artificial intelligence (AI) tools. Our expert team reviews and provides insights into some of the most cutting-edge AI tools available.
Did you know Marktechpost has a community of 1.5 million+ AI professionals and engineers? For partnership and advertisement inquiries, please feel free to contact us through this form.