Step inside “Hotel Room #2: Communal Dreams,” an immersive installation at the MIT Museum created by artist Carsten Höller, Media Lab alum Adam Haar Horowitz, and Seth Riskin, of the MIT Museum Studio and Compton Gallery. Part of the Museum’s exhibition “Lighten Up! On Biology and Time,” the installation is designed to bring sleepers together in a common dream, cocooning them in a shared experience modulated by sound, motion, and light. The work inspires questions: How do we dream new worlds into being? How might dreams function as a form of communal imagination, creating possibilities for new realities? “Whose shared dream are we living in right now,” Haar asks, “and what would we have to do to share a different dream? It starts with the really simple practice of sharing dreams. It starts with the simple practice of making your interior less invisible to others.”
MIT Media Lab
Higher Education
Cambridge, Massachusetts 201,181 followers
News and ideas from the MIT Media Lab
About us
The Media Lab is an interdisciplinary creative playground rooted squarely in academic rigor, comprising dozens of research groups, initiatives, and centers working collaboratively on hundreds of projects. We focus not only on creating and commercializing transformational future technologies but also on their potential to impact society for good. Accessibility: https://accessibility.mit.edu/
- Website
- http://www.media.mit.edu/
- Industry
- Higher Education
- Company size
- 201-500 employees
- Headquarters
- Cambridge, Massachusetts
- Type
- Educational
- Founded
- 1985
Locations
- Primary
75 Amherst St
Cambridge, Massachusetts 02142, US
Updates
MIT Media Lab reposted this
We are seeking current Massachusetts Institute of Technology undergraduate and graduate students to apply for our Student Advisory Committee in support of the inaugural MIT Future Fest. The MIT Future Fest is led by the MIT Museum with curatorial support from PACT, in collaboration with MIT Technology Review and MIT MAD / Morningside Academy for Design. This collaborative effort is designed to engage MIT's academic and creative communities, with student voices playing a central role in shaping and presenting at the event. Learn more and apply here: https://lnkd.in/eUVCJ8d5 Stay in the know for updates on the MIT Future Fest here: https://mitfuturefest.org/ The application period closes on March 2, 2026.
MIT Media Lab reposted this
*IT IS TIME* Join Jaleesa Trapp, PhD, Alexis Hope, and me as we host the virtual finale of the #CodedBiasWorldTour 🎉🌍 📅 February 27 | 6:30pm ET 💻 Online & FREE 🔗Sign up here: https://lnkd.in/etVnMS_W Celebrate the global impact of Coded Bias, hear exclusive behind-the-scenes stories, and help us honor our inaugural winner of the Global AI Justice Award! Thank you to our partners for making this possible: Accelerator Fellowship Programme of the Institute for Ethics in AI | Institute for Ethics in AI | Qhala | Shikoh Gitau | Ushahidi | Angela Oduor Lungati | Baraza Media Lab | MIT Media Lab | Rhodes House | Nina D. | Luana Génot | Rwanda Centre for the Fourth Industrial Revolution | Crystal Rugege | RightsCon | Creative Learning Community in Rio | Diaspora.Black | and so many more!
MIT Media Lab reposted this
“The technology should be used to liberate humans to have more human moments with each other.” Watch or listen to my Lifelong Kindergarten podcast conversation with Sal Khan of Khan Academy at bit.ly/llkpodcast
Congratulations to the 2026 LEGO Papert Fellows, Ayat Abodayeh, Eitan Wolf, and Ila Krishna Kumar! The LEGO Papert Fellowships, endowed by The LEGO Foundation, are intended to honor the legacy and extend the work of Seymour Papert, one of the founding faculty members of the Media Lab and a pioneer in the development and study of new technologies to support playful, creative learning. Learn more about the Papert Fellows' work: https://lnkd.in/dHWiKN65
New research from the MIT Center for Constructive Communication (MIT CCC) finds that leading chatbots may provide less-accurate, less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. They also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language. Lead author Elinor Poole-Dayan, a Media Lab alum who is now a technical associate at the MIT Sloan School of Management, says, “LLMs have been marketed as tools that will foster more equitable access to information and revolutionize personalized learning. But our findings suggest they may actually exacerbate existing inequities by systematically providing misinformation or refusing to answer queries to certain users. The people who may rely on these tools the most could receive subpar, false, or even harmful information.” Media Lab Professor Deb Roy, who leads the MIT CCC and was a co-author of the paper, adds, “The value of large language models is evident in their extraordinary uptake by individuals and the massive investment flowing into the technology. This study is a reminder of how important it is to continually assess systematic biases that can quietly slip into these systems, creating unfair harms for certain groups without any of us being fully aware.” The paper describing the work, “LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users,” was presented at the AAAI Conference on Artificial Intelligence in January. Authors: Elinor Poole-Dayan, Deb Roy, and Jad Kabbara
The Media Lab’s February newsletter is coming soon! Catch up with the January newsletter now, and subscribe to receive the new edition as soon as it’s posted.
The January edition of the Media Lab's LinkedIn newsletter is available now! In this issue: Advancing wildlife sensing technology to aid conservation efforts; exploring the evolution of eyes; learning with a new podcast from the Lifelong Kindergarten group; and more!
At 6:30pm ET on February 27, to wrap up the #CodedBiasWorldTour, The Algorithmic Justice League is hosting a *virtual* discussion with AJL Founder Dr. Joy Buolamwini, Senior Education Advisor Jaleesa Trapp, PhD, and AJL Advisor Dr. Alexis Hope, all of whom are Media Lab alumni. Coded Bias, an award-winning documentary that premiered at Sundance in 2020, sheds light on critical flaws in AI systems that threaten democracy and our basic rights. It tells the story of Dr. Joy Buolamwini, who uncovered biases in these systems while at MIT and subsequently founded the Algorithmic Justice League to combat AI-related harm. The film follows her research into gender and skin tone bias and her efforts to bring these issues before governments and major tech companies, and features many AI trailblazers who are working to raise awareness about the dangers of unchecked AI systems. The event is free, but registration is required: https://lnkd.in/gBrHEXgz
The shimmer of an opal and the sheen of a butterfly’s wing arise from microscopic structures that reflect light to produce radiant hues. Known as “structural color,” this effect has been recreated in MorphoChrome, a new optical system developed by a research team from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The system combines software with a handheld device that acts as a “brush” for painting with red-green-blue (RGB) laser light, while a holographic photopolymer film (the kind used for passports and debit cards) serves as the canvas. Paris Myers, a Media Lab alum who is now a PhD student in CSAIL, says, “We wanted to tap into the innate intelligence of nature. In the past, you couldn’t easily synthesize structural color yourself, but using pigments or dyes gave you full creative expression. With our system, you have full creative agency over this new material space, predictably programming iridescent designs in real-time.”