Cohere has released its newest foundation model, Rerank 3, purpose-built to enhance enterprise search and Retrieval Augmented Generation (RAG) systems. The model is compatible with any database or search index and can also be plugged into any legacy application with native search capabilities. With a single line of code, Rerank 3 can boost search performance or reduce the cost of running RAG applications, with negligible impact on latency.
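A minimal sketch of what that one-line integration might look like in an existing retrieval pipeline. The commented call shape assumes Cohere's Python SDK and the `rerank-english-v3.0` model name; to keep the example self-contained and runnable, a toy lexical-overlap scorer stands in for the actual API call.

```python
# The real integration would be roughly one line (assuming Cohere's
# Python SDK and the Rerank 3 model identifier):
#
#   results = co.rerank(model="rerank-english-v3.0", query=query,
#                       documents=docs, top_n=3)
#
# Below, a toy word-overlap scorer stands in for the API so the
# sketch runs without credentials.

def toy_rerank(query: str, documents: list[str], top_n: int = 3) -> list[str]:
    """Order candidate documents by word overlap with the query
    (a stand-in for the model's relevance scores)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_n]

# Documents as they might come back from a first-stage search index.
docs = [
    "Quarterly revenue grew 12% year over year.",
    "The office cafeteria menu changes weekly.",
    "Revenue growth was driven by enterprise search products.",
]

# Rerank the candidates before passing them to a RAG prompt.
top = toy_rerank("enterprise revenue growth", docs, top_n=2)
```

The point of the pattern is that the reranker slots in between an existing retriever and the generation step, so no changes to the index or the application are needed.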
Meta is expanding its "Made with AI" labeling to cover a broader range of AI-generated content starting in May, in response to the evolving landscape of manipulated media. Labels will be applied through several means, including user disclosure and automatic detection of AI-generated content. The policy update follows recommendations from Meta's independent Oversight Board.
Mistral has released a new 8x22B model via magnet link. Early community benchmarks suggest it performs admirably as a base model, scoring 77 on MMLU (a benchmark commonly associated with knowledge and reasoning).
This study introduces a weak-to-strong eliciting framework to improve surround refinement in Multi-Camera 3D Object Detection (MC3D-Det), a field enhanced by bird's-eye view technologies.
PoLoPCAC is a method for lossless point cloud attribute compression that marries high efficiency with strong adaptability across varying scales and densities of point clouds.
The Motif Channel Attention Stereo Matching Network (MoCha-Stereo) is a novel approach that preserves geometric structures often lost in traditional stereo matching methods.
A recent study investigated how different concepts are understood by various layers within large language models. It found that simpler tasks are handled by earlier layers while more complex ones require deeper processing.
The generative AI industry faces real challenges, yet the hype around it persists. Many people have found practical use cases for the technology and are actively deploying it, while research continues to push the field forward at a fast pace.
This mixture-of-experts model was trained on public datasets with a modest compute budget, yet it matches the performance of Meta's Llama 2 7B, a model that was far more expensive to train.