AI at Meta

Research Services

Menlo Park, California · 1,076,682 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates

  • AI at Meta

    We’re releasing SAM 3.1: a drop-in update to SAM 3 that significantly improves video processing efficiency without sacrificing accuracy. By implementing object multiplexing, SAM 3.1 doubles the processing speed for videos with a medium number of objects, increasing throughput from 16 to 32 frames per second on a single H100 GPU. We’re sharing this update with the community to help make high-performance applications feasible on smaller, more accessible hardware.
    🔗 Model Checkpoint: https://go.meta.me/8dd321
    🔗 Codebase: https://go.meta.me/b0a9fb

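    The speedup described above comes from reusing work across tracked objects instead of re-running the full pipeline once per object. A minimal PyTorch sketch of that general "object multiplexing" idea, with every name hypothetical and no relation to the released SAM 3.1 code: encode each frame once, then decode all object queries against the shared embedding in a single batched pass.

        import torch
        import torch.nn as nn

        class TinyTracker(nn.Module):
            """Toy model: one shared frame encoder, one per-object mask decoder."""
            def __init__(self, dim=256):
                super().__init__()
                self.encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # frame -> feature map
                self.decoder = nn.Conv2d(2 * dim, 1, kernel_size=1)          # (features, query) -> mask logits

            def embed_frame(self, frame):                # frame: (3, H, W)
                return self.encoder(frame.unsqueeze(0))  # (1, dim, h, w)

            def decode(self, feats, queries):            # queries: (N, dim)
                n = queries.shape[0]
                f = feats.expand(n, -1, -1, -1)          # share one frame embedding across objects
                q = queries[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
                return self.decoder(torch.cat([f, q], dim=1))  # (N, 1, h, w), one mask per object

        model = TinyTracker()
        frame = torch.rand(3, 512, 512)
        object_queries = torch.randn(8, 256)             # 8 tracked objects

        # Naive: one encode + decode call per object.
        masks_loop = [model.decode(model.embed_frame(frame), q.unsqueeze(0)) for q in object_queries]

        # Multiplexed: encode the frame once, decode every object in one batched call.
        feats = model.embed_frame(frame)
        masks_batched = model.decode(feats, object_queries)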
  • AI at Meta

    Today we're introducing TRIBE v2, a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity. It enables zero-shot predictions for new subjects, languages, and tasks, consistently outperforming standard modeling approaches. We’re releasing the model, codebase, paper, and an interactive demo to help researchers advance neuroscience, apply brain insights to build better AI, and use computational simulation to speed up breakthroughs in neurological disease diagnosis and treatment. Try the demo and learn more here: https://go.meta.me/tribe2
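    Models in this space are often framed as encoding models: map features of what a person saw or heard onto the measured voxel responses, then test how well the mapping generalizes. A deliberately simple, generic sketch of that framing with scikit-learn and synthetic data; it is only an illustration of the problem setup, not the TRIBE v2 architecture.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: stimulus features (e.g. embeddings of the sight/sound presented)
        # and fMRI responses (one value per voxel per time point).
        n_timepoints, n_features, n_voxels = 2000, 128, 500
        stimulus = rng.normal(size=(n_timepoints, n_features))
        weights = rng.normal(size=(n_features, n_voxels))
        fmri = stimulus @ weights + rng.normal(scale=5.0, size=(n_timepoints, n_voxels))

        X_train, X_test, y_train, y_test = train_test_split(stimulus, fmri, test_size=0.2, random_state=0)

        # Linear encoding model: predict every voxel's response from the stimulus features.
        encoder = Ridge(alpha=10.0).fit(X_train, y_train)
        pred = encoder.predict(X_test)

        # Per-voxel correlation between predicted and measured responses, a common evaluation metric.
        corr = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
        print(f"median voxel correlation: {np.median(corr):.2f}")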

  • AI at Meta

    We’re announcing Canopy Height Maps v2 (CHMv2), an open source model for high-resolution global forest canopy mapping, developed in partnership with the World Resources Institute. CHMv2 leverages our DINOv3 Sat-L vision model, specifically optimized for satellite imagery, to deliver substantial improvements in accuracy, detail, and global consistency. CHMv2 is already supporting public sector efforts in the United States, Europe, and beyond. By making these advances open source, we aim to accelerate research and inform carbon offsetting, reforestation, and land management decisions globally.
    🔗 Learn more: https://go.meta.me/a09fc0
    🔗 Read the paper: https://go.meta.me/9a9e42
    🔗 Download the model: https://go.meta.me/2edd52
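    In broad strokes, models of this kind pair a pretrained vision backbone with a lightweight head that regresses a canopy height per location. A minimal PyTorch sketch of that pattern; the backbone is a placeholder and nothing here is the released CHMv2 code.

        import torch
        import torch.nn as nn

        class CanopyHeightHead(nn.Module):
            """Regress per-patch canopy height (metres) from frozen backbone patch features."""
            def __init__(self, feature_dim=1024):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(feature_dim, 256),
                    nn.GELU(),
                    nn.Linear(256, 1),
                )

            def forward(self, patch_features):                        # (batch, num_patches, feature_dim)
                return self.head(patch_features).clamp(min=0.0)       # heights cannot be negative

        # Hypothetical: a real pipeline would extract patch_features with a satellite-tuned
        # backbone (e.g. a DINOv3 Sat variant); random tensors stand in for them here.
        batch, num_patches, feature_dim = 2, 1024, 1024
        patch_features = torch.randn(batch, num_patches, feature_dim)
        labels = torch.rand(batch, num_patches, 1) * 40.0             # stand-in lidar-derived heights

        head = CanopyHeightHead(feature_dim)
        loss = nn.functional.l1_loss(head(patch_features), labels)
        loss.backward()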

  • AI at Meta

    Custom silicon is critical to scaling next-gen AI. We’re detailing the evolution of the Meta Training and Inference Accelerator (MTIA), our homegrown silicon family designed to power the next era of AI experiences. Traditional chip cycles span years, but model architectures change in months. To close this gap, we’ve accelerated MTIA development to release four generations in just two years. See our roadmap and tech specs here: https://go.meta.me/38842b

  • AI at Meta

    Meta 🤝 AMD
    Today we’re announcing a multi-year agreement with AMD to integrate their latest Instinct GPUs into our global infrastructure. With approximately 6GW of planned data center capacity dedicated to this deployment, we’re scaling our compute capacity to accelerate the development of cutting-edge AI models and deliver personal superintelligence to billions around the world.
    Learn more: https://go.meta.me/220f12

  • AI at Meta

    Our team is heading to India this week for the AI Impact Summit & Expo 🇮🇳
    Stop by the Meta booth (Exhibition Hall 3, Booth No. 3.7) to meet our team and experience:
    📚 Demos of research, including Omnilingual Automatic Speech Recognition (ASR) and SeamlessExpressive
    ⚡ Lightning talks from experts on how AI is unlocking real-world benefits across language, accessibility and health
    👓 Hands-on demos with our latest AI glasses, including the Oakley Meta Vanguard
    We look forward to seeing you there!

  • AI at Meta

    Our open source DINO model is enhancing reforestation efforts around the world. DINOv2 was trained on 18 million satellite images to create a global map of tree canopy height, allowing the detection of single trees at a global scale. Now, Forest Research in the UK is using this high-resolution canopy height model to reduce costs and improve the accuracy of their environmental monitoring. ➡️ Learn more: https://go.meta.me/4d79fe

  • AI at Meta

    We’re open-sourcing Perception Encoder Audiovisual (PE-AV), the technical engine that helps drive SAM Audio’s audio separation. Built on our Perception Encoder model from earlier this year, PE-AV integrates audio with visual perception, achieving state-of-the-art results across a wide range of audio and video benchmarks. Its native multimodal support can assist people in everyday tasks, including sound detection and richer audio-visual scene understanding.
    🔗 Read the paper: https://go.meta.me/e541b6
    🔗 Download the code: https://go.meta.me/7fbef0

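    At a high level, an audiovisual encoder of this kind maps audio clips and video frames into one shared embedding space so that matching pairs score higher than mismatched ones. A purely illustrative PyTorch sketch of that contrastive setup; none of it is the PE-AV implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ToyAudioVisualEncoder(nn.Module):
            """Project audio and visual features into a shared embedding space."""
            def __init__(self, audio_dim=128, visual_dim=512, embed_dim=256):
                super().__init__()
                self.audio_proj = nn.Linear(audio_dim, embed_dim)
                self.visual_proj = nn.Linear(visual_dim, embed_dim)

            def forward(self, audio_feats, visual_feats):
                a = F.normalize(self.audio_proj(audio_feats), dim=-1)
                v = F.normalize(self.visual_proj(visual_feats), dim=-1)
                return a, v

        model = ToyAudioVisualEncoder()
        audio_feats = torch.randn(16, 128)    # stand-in per-clip audio features
        visual_feats = torch.randn(16, 512)   # stand-in per-clip visual features

        a, v = model(audio_feats, visual_feats)
        logits = a @ v.t() / 0.07             # similarity of every audio clip to every video clip
        targets = torch.arange(16)            # clip i's audio matches clip i's video
        loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
        loss.backward()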
  • AI at Meta reposted this

    🚀 Announcing Meta Seal: Invisible Watermarking for Every Modality 🚀 https://lnkd.in/dWeb7CCs
    TL;DR: We are open-sourcing Meta Seal, a production-ready invisible watermarking suite for image, video, audio, and text. SOTA performance, MIT license, full weights & code.
    We are thrilled to unveil the open-source release of Meta Seal: a comprehensive, production-ready framework for invisible watermarking. Already widely adopted, Meta Seal empowers researchers and developers to build better provenance and attribution tools that can help distinguish between human and AI-generated content across all major modalities: images, video, audio, and text.
    🌟 THE SUITE 🌟
    📷 1️⃣ PIXEL Seal: Flagship image and video watermarking. Uses novel adversarial-only training to achieve SOTA robustness and imperceptibility. 🔗 Code: https://lnkd.in/duzXr7ig
    📷 2️⃣ CHUNKY Seal: High-capacity image watermarking that increases the hidden message payload to 1024 bits (a 4x improvement). 🔗 Code: https://lnkd.in/duzXr7ig
    3️⃣ DIST Seal: Unified latent-space watermarking that enables a 20x inference speedup plus in-model distillation, well suited to securing open models at scale. 🔗 Code: https://lnkd.in/dCkeXbMx
    🔊 4️⃣ AUDIO Seal: The first model for localized audio watermarking at the sample level, with a specialized streaming architecture. 🔗 Code: https://lnkd.in/e_YaRkRi
    📝 5️⃣ TEXT Seal: A complete toolkit for post-hoc watermarking and for detecting benchmark contamination via "radioactivity." 🔗 Code: https://lnkd.in/dnF9V9US
    🙏 Meta Seal is the culmination of a multi-year effort to bridge the gap between fundamental research and scalable, production-ready systems. It has been a privilege to see this come to life through the hard work of this exceptional team: Tom Sander, Tomáš Souček, Tuan Tran, Valeriu Lacatusu, Pierre Fernandez, Alexandre MOURACHKO, Hongyan C., Sylvestre Rebuffi
    #Watermarking #GenerativeAI #OpenSource #Deepfakes #AISecurity
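    The workflow these tools share is embed-then-detect: write an imperceptible message into the content, then recover it later with a paired detector. A deliberately crude illustration of that loop in NumPy; the toy least-significant-bit scheme below is not how any Seal model works (the real watermarks are learned and survive edits), it only shows the interface pattern.

        import numpy as np

        def embed_message(image, payload):
            """Toy watermark: hide one payload bit per pixel in the least significant bit."""
            flat = image.flatten().copy()
            bits = np.resize(payload, flat.size)                 # tile the payload over the image
            return ((flat & ~np.uint8(1)) | bits.astype(np.uint8)).reshape(image.shape)

        def detect_message(image, payload_len):
            """Recover the payload by majority vote over its repeated copies."""
            bits = image.flatten() & 1
            usable = bits[: (bits.size // payload_len) * payload_len]
            return (usable.reshape(-1, payload_len).mean(axis=0) > 0.5).astype(np.uint8)

        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
        payload = rng.integers(0, 2, size=64, dtype=np.uint8)    # e.g. a 64-bit identifier

        marked = embed_message(cover, payload)
        assert np.max(np.abs(marked.astype(int) - cover.astype(int))) <= 1  # change is tiny per pixel
        assert np.array_equal(detect_message(marked, 64), payload)          # payload recovered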

  • AI at Meta

    🔉 Introducing SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts. SAM Audio represents a new era in audio separation technology, outperforming previous models across a wide range of benchmarks and tasks. We’re sharing SAM Audio with the community, along with a perception encoder model, benchmarks and research papers, to empower others to explore new forms of expression and build applications that were previously out of reach. 🔗 Learn more: https://go.meta.me/65b5a3
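    One way to picture prompted separation: predict a time-frequency mask conditioned on a prompt embedding (derived from text, a visual region, or a time span) and apply it to the mixture. A toy PyTorch sketch of that idea only; it is hypothetical and not the SAM Audio architecture.

        import torch
        import torch.nn as nn

        class ToyPromptedSeparator(nn.Module):
            """Predict a spectrogram mask for the sound described by a prompt embedding."""
            def __init__(self, n_freq=257, prompt_dim=64, hidden=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_freq + prompt_dim, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, n_freq),
                    nn.Sigmoid(),                                # mask values in [0, 1]
                )

            def forward(self, mixture_spec, prompt):
                # mixture_spec: (batch, time, n_freq) magnitude spectrogram
                # prompt:       (batch, prompt_dim) embedding of a text / visual / span prompt
                t = mixture_spec.shape[1]
                p = prompt.unsqueeze(1).expand(-1, t, -1)        # broadcast the prompt over time
                mask = self.net(torch.cat([mixture_spec, p], dim=-1))
                return mask * mixture_spec                       # masked spectrogram of the target sound

        model = ToyPromptedSeparator()
        mixture = torch.rand(1, 200, 257)                        # stand-in mixture spectrogram
        prompt = torch.randn(1, 64)                              # stand-in prompt embedding, e.g. for "dog barking"
        separated = model(mixture, prompt)                       # same shape as the mixture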
