Hugging Face

Software Development

The AI community building the future.

About us

Website: https://huggingface.co
Industry: Software Development
Company size: 51-200 employees
Type: Privately Held
Founded: 2016
Specialties: machine learning, natural language processing, and deep learning

Updates

  • Hugging Face reposted this

    Lucille Bateman

    Special Projects | Hugging Face 🤗

    🚀 Open source is coming to #Slush! I’m thrilled to share that I’ve been working with an incredible group of people to bring The Power of Open Source: Building Giants in the Open to Slush this year! Join us on Nov 20 (2:30–4:30 PM) inside the Slush venue in Helsinki for a high-energy session of talks, demos, and networking exploring how openness is transforming the way startups build, fund, and scale. We’re bringing together amazing voices from Hugging Face, Black Forest Labs, Lovable, NVIDIA, OpenAI, Snowflake, Supabase, Andreessen Horowitz, General Catalyst, Accel… and more to come. Registration is available directly on the Slush platform. Hope to see you there! https://lnkd.in/eWvYPDu8

  • Hugging Face reposted this

    Lewis Tunstall

    Post-training LLMs at 🤗

    We've just published the Smol Training Playbook: a distillation of hard-earned knowledge to share exactly what it takes to train SOTA LLMs ⚡️ Featuring our protagonist SmolLM3, we cover:
    🧭 Strategy on whether to train your own LLM and burn all your VC money
    🪨 Pretraining, aka turning a mountain of text into a fancy autocompleter
    🗿 How to sculpt base models with post-training alchemy
    🛠️ The underlying infra and how to debug your way out of NCCL purgatory
    Happy Halloween 🎃! Link to the book in the first comment 👇

  • Hugging Face reposted this

    Henry Ndubuaku

    Co-Founder @ Cactus (YC S25)

    Cactus (YC S25), Nothing, and Hugging Face are teaming up on a hackathon to bring mobile AI agents to phones. Come spend a weekend with us, build, and win fantastic prizes:
    1. Sponsored trip to San Francisco
    2. Lunch with our Y Combinator Group Partner
    3. Guaranteed interviews at HuggingFace, Nothing, and Cactus
    4. Dinner with the founders
    5. HuggingFace Reachy robots
    6. Nothing phones
    7. More
    Learn more: https://luma.com/jrec73nt
    Location: London & Online

  • Hugging Face reposted this

    Ben Burtenshaw

    Community Education in AI @ Hugging Face

    We open-sourced on-policy distillation from Thinking Machines Lab in TRL so that everyone can fine-tune on their own domain, with any model, and recover general performance! This is a massive unlock for post-training LLMs because, in most cases, when improving on a domain like coding or internal documents, you'll see a decline in general chat/IFEval performance. I'll run through a TL;DR, but there's also a super detailed blog post and a recipe for your own use case.
    - We open-sourced on-policy distillation in TRL as GKDTrainer and GOLDTrainer for knowledge distillation. You'll need the latter when tokenizers don't align.
    - We fine-tuned Qwen3-4B on a competitive coding task with the Codeforces dataset. Its coding performance improved by 10%, but its instruction following dropped by 5%. In practice, this makes the model harder to use.
    - Then we used the GKDTrainer with the tuned model as the student and the original Qwen3-4B as the teacher. IFEval rose back by 5% and coding stayed at its higher value.
    - The student model 'relearnt' its instruction-following abilities without dropping its coding ones.
    This process is useful if you want to improve an LLM on your own domain without it becoming hard to use in practice.

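    The recovery step described above maps onto TRL's trainer API. Below is a minimal sketch of that step using GKDTrainer; the checkpoint names, dataset, and hyperparameter values are illustrative placeholders, not the exact recipe from the blog post.

    ```python
    # Minimal sketch: recovering general ability with on-policy distillation
    # via TRL's GKDTrainer. Checkpoint and dataset names below are
    # placeholders, not the actual artifacts from the blog post.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import GKDConfig, GKDTrainer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")

    # Student: the domain-tuned checkpoint whose instruction following
    # regressed (hypothetical path standing in for the Codeforces-tuned model).
    student = AutoModelForCausalLM.from_pretrained("my-org/qwen3-4b-code-sft")
    # Teacher: the original general-purpose model.
    teacher = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")

    # Any conversational dataset with a "messages" column works here;
    # this name is made up for the example.
    train_dataset = load_dataset("my-org/general-chat-prompts", split="train")

    args = GKDConfig(
        output_dir="qwen3-4b-gkd",
        lmbda=1.0,  # fraction of batches generated on-policy by the student
        beta=0.5,   # interpolation coefficient of the generalized JSD loss
    )

    trainer = GKDTrainer(
        model=student,
        teacher_model=teacher,
        args=args,
        processing_class=tokenizer,
        train_dataset=train_dataset,
    )
    trainer.train()
    ```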
  • Hugging Face reposted this

    Carlos Miguel Patiño

    Post-Training @HuggingFace | MSc in AI Student

    On-policy distillation is a promising way to train small models, but it’s usually limited to teacher–student pairs sharing the same tokenizer. With our GOLD method, you can now distill across different model families and even outperform GRPO! https://lnkd.in/eyqb6XMN We extended papers in the distillation literature to show that on-policy distillation works well for a math task even when distilling between families, for example a Qwen teacher into a Llama student. We also replicate the "Distillation for personalization" results from Thinking Machines Lab by improving the code performance of a model with SFT and then recovering its IFEval scores with distillation. Our implementation is open source and already available in TRL, so you can go and test your knowledge distillation ideas. Share the results you achieve with distillation in the Community tab of the blog post 🤗

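    Since GOLD targets mismatched tokenizers, the main change from the GKD sketch above is that student and teacher come from different families. The post confirms GOLDTrainer ships in TRL, but the GOLDConfig name and constructor shape below are assumed to mirror GKDTrainer, so treat this as a sketch to check against the TRL docs; model and dataset names are placeholders.

    ```python
    # Hypothetical sketch: cross-family on-policy distillation with GOLD.
    # Assumption: GOLDTrainer/GOLDConfig mirror the GKDTrainer constructor;
    # verify against the TRL documentation. Names below are placeholders.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import GOLDConfig, GOLDTrainer

    # Different families, hence different tokenizers.
    student_id = "meta-llama/Llama-3.2-3B-Instruct"
    student_tok = AutoTokenizer.from_pretrained(student_id)
    student = AutoModelForCausalLM.from_pretrained(student_id)
    teacher = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")

    # Placeholder prompt dataset for the math task.
    train_dataset = load_dataset("my-org/math-prompts", split="train")

    trainer = GOLDTrainer(
        model=student,
        teacher_model=teacher,
        args=GOLDConfig(output_dir="llama-3b-gold"),
        processing_class=student_tok,  # student tokenizer; GOLD bridges the mismatch
        train_dataset=train_dataset,
    )
    trainer.train()
    ```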
  • Hugging Face reposted this

    Welcome 𝐅𝐈𝐁𝐎!! The world’s first JSON-native, open-source text-to-image model! After months (and countless late nights) of work, I’m beyond excited to share something truly different with you: FIBO, our open-source text-to-image model, built from the ground up for 𝒄𝒐𝒏𝒕𝒓𝒐𝒍, 𝒑𝒓𝒆𝒄𝒊𝒔𝒊𝒐𝒏, and 𝒓𝒆𝒑𝒓𝒐𝒅𝒖𝒄𝒊𝒃𝒊𝒍𝒊𝒕𝒚, not just imagination. FIBO beats all open-source competitors in the PRISM benchmark evaluation and sets a new standard for professional image-generation workflows (attached).
    🧠 Trained on 100M+ structured JSON captions (each over 1,000 words), FIBO understands the why behind every visual element, from lighting and composition to depth of field and camera parameters. That means:
    🔹 No more “prompt drift.”
    🔹 You can tweak one attribute (say, camera angle) without breaking the scene.
    🔹 You can 𝒈𝒆𝒏𝒆𝒓𝒂𝒕𝒆 → 𝒓𝒆𝒇𝒊𝒏𝒆 → 𝒊𝒏𝒔𝒑𝒊𝒓𝒆, all with consistent, controllable outputs.
    👉 Try it here: platform.bria.ai/labs/fibo
    👉 Hugging Face model card: huggingface.co/briaai/FIBO
    👉 Also available on fal & Replicate with their own cool experiences!
    This one means a lot to me personally, because FIBO isn’t just another model drop: it’s a statement about what responsible and professional GenAI should look like. Huge thanks and congrats to our amazing Bria AI team! This huge milestone belongs to each and every person across the team!! #GenAI #AIresearch #opensource #texttoimage #diffusers #BriaAI #FIBO

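    To make “JSON-native” concrete, here is a purely hypothetical example of the kind of structured caption the post describes; every field name is invented for illustration and is not taken from FIBO’s actual schema.

    ```python
    # Hypothetical structured caption in the spirit of FIBO's JSON-native
    # prompting. Field names are invented, not FIBO's real schema.
    caption = {
        "subject": "a ceramic teapot on a weathered oak table",
        "lighting": {"type": "softbox", "direction": "left", "intensity": "low"},
        "camera": {"angle": "eye-level", "focal_length_mm": 50, "depth_of_field": "shallow"},
        "composition": "rule of thirds, subject left-weighted",
    }

    # The post's claim: tweak one attribute without breaking the scene.
    caption["camera"]["angle"] = "low-angle"  # everything else stays fixed
    ```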
  • Hugging Face reposted this

    Giada Pistilli

    Principal Ethicist at Hugging Face | PhD in Philosophy at Sorbonne Université

    🎂 The Machine Learning & Society team at Hugging Face turned 3! What started with the BigScience project has evolved into a thriving space for open, interdisciplinary research on how AI shapes (and is shaped by) society. Over these three years, we’ve built more than 60 research artifacts exploring sustainability, agency, and governance, while staying true to open science principles: transparency, replicability, and collaboration. We’ve also launched a new website gathering our work, methods, and insights in one place; go check it out!


Funding

Hugging Face: 8 total rounds
Last round: Series unknown