MediaLaws is sharing the call for papers on “Disinformation after Generative AI and Synthetic Data” for the journal Information Polity, published by SAGE. Contributions should aim to unpack the methodological, historical, and regulatory context of GenAI and disinformation. Interdisciplinary contributions are welcome. For more info on topics and the timeline, click here: https://lnkd.in/dR8C7BYq
Interested in #HybridAI? Here’s a great discussion from our recent #GRAPHIA & LUMEN panel: Hybrid AI and the Future of Semantic Research & Innovation (moderated by Larry Swanson, with speakers Sy Holsinger, Suzanne Dumouchel, Zvonimir Petkovic & Michela Magas) 🎥 https://lnkd.in/dA6VkDSU The panel explores how Hybrid AI — combining data-driven machine learning with rule-based reasoning and knowledge graphs — can help make research systems smarter, more connected, and more trustworthy. At GRAPHIA, we’re especially interested in how these ideas can support the way scholarly information is shared, understood, and used. #KnowledgeGraphs #OpenResearch #SemanticWeb
I really enjoyed facilitating this panel. This event was focused on semantic tech in the humanities and social sciences, but as I listen to it again I see that the panelists' insights can help pretty much anyone who's implementing AI systems, in particular conversational solutions (the bread and butter of Infobip, who hosted the event) and, of course, ontologies and knowledge graphs. Among the themes and topics we discussed:
• the challenges of staying on top of AI tech advances
• how transparency and open communication instill trust
• how co-creation can help align diverse stakeholders
• the benefits of "enjoying the complexity"
• how "fuzzification" can improve collaboration
• the importance of quality data in semantic systems
Huge thanks to Michela Magas, Suzanne Dumouchel, Sy Holsinger, and Zvonimir Petkovic for the great conversation, to the GRAPHIA Project and LUMEN for organizing, and to Infobip for providing such a nice venue. #knowledgeGraphs #ontology #AI
It was a pleasure to contribute to this very interesting panel, especially with Larry Swanson moderating! It was an occasion to weigh governance needs on one side against co-creation on the other. What we said in particular is that governance and co-creation are deeply interconnected because both involve collective decision-making and shared responsibility. Good governance requires the participation and inclusion of diverse stakeholders to ensure transparency, accountability, and legitimacy, principles that are also central to co-creation. Conversely, co-creation benefits from clear governance structures that define roles, manage power dynamics, and coordinate collaborative efforts effectively. In essence, governance provides the framework that makes co-creation organized and sustainable, while co-creation enriches governance by bringing in new perspectives, innovation, and trust through participatory engagement. These principles are deeply rooted in the #GRAPHIA and #LUMEN projects.
I assume you are fed up with me always talking about futures methods. BUT :) We published this paper with Dr. Topol to demonstrate how important it is to regulate generative AI. In the paper, we already discussed possible future GPT versions that didn't exist at the time. When versions that analyze not only text but also sound, images, and video come to life, regulators will have to be ready to regulate them as soon as possible. The paper: https://lnkd.in/ee4skA7D
Being popular isn’t the same as being cited. New independent findings echo Yext Research: AI engines source from the long tail, not the leaderboard. Structure still beats scale, and visibility starts with the data you own. - Citations research from Yext: https://yex.tt/47utxcO - Research from Ars Technica: https://yex.tt/4nxAgJ7
Exciting news — Potato and Wiley have partnered to advance AI-generated methods and optimization. Together we’re combining Wiley’s scientific and publishing expertise with Potato’s AI innovation to accelerate research, improve reproducibility, and empower scientists with smarter, faster tools. Big things ahead! ⚡ #AI #Innovation #Research #Partnership #WileyScienceSolutions
Scientific experiments are only as good as the methods you use to run them. We've partnered with Wiley to give Potato access to >25,000 validated, peer-reviewed methods from Current Protocols. Tater, our AI scientist, uses those protocols to plan end-to-end experiments. Already in the hands of alpha testers. Make sure to check out Potato+ Wiley on the website!! We've got options. https://lnkd.in/e77T3Pie
This report by Wiley presents one of the most comprehensive and representative collections of perspectives on the use of AI in research (that I have seen). It does a really fantastic job of moving beyond sentiment and surface trends to provide practical insights for researchers on current AI use cases and their impact on research processes. TL;DR: Check out the executive summary, especially the Wiley AI Framework and recommendations for the research community. For those who want the full report, you can read it here: https://lnkd.in/gKvnJGxD
How many layers of AI can you add to a system and still make progress? We looked at the discovery process for Norwegian research papers from 2020 to now, put the content into a great search engine (Vespa.ai), and set up AI for all the things! Watch the video for a quick walk-through of the prototype (query rewrite, query expansion, query translation, item scoring, item summary, query answer, reranking with classic LLMs, reranking with cross-encoders, BM25, vector search, as well as engagement features like 3D clustering and a panel debate with AI). https://lnkd.in/dFSqdpff https://lnkd.in/dbbNCxcZ
Academic Research Search using Many Layers of AI
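To make the "many layers" idea concrete, here is a toy sketch of one layer stack from the post: a lexical (BM25-style) score fused with a vector-similarity score, the kind of first-phase hybrid ranking a cross-encoder would then rerank. Everything here is illustrative, not the actual Vespa.ai configuration: `embed()` is a bag-of-characters stand-in for a real embedding model, and the BM25 is a minimal whitespace-token version.

```python
import math

DOCS = [
    "norwegian research on marine biology",
    "machine learning for research paper discovery",
    "norwegian papers on machine learning",
]

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    # Minimal BM25 over whitespace tokens (no stemming, no stopwords).
    words = doc.split()
    avgdl = sum(len(d.split()) for d in corpus) / len(corpus)
    score = 0.0
    for term in query.split():
        tf = words.count(term)
        df = sum(1 for d in corpus if term in d.split())
        idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(words) / avgdl))
    return score

def embed(text):
    # Stand-in for a real embedding model: a bag-of-characters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, corpus, alpha=0.5):
    # First-phase ranking: fuse lexical and vector scores. A second phase
    # (e.g. a cross-encoder) would rerank the top-k of this list.
    qv = embed(query)
    scored = [(alpha * bm25_score(query, d, corpus)
               + (1 - alpha) * cosine(qv, embed(d)), d) for d in corpus]
    return [d for s, d in sorted(scored, reverse=True)]

print(hybrid_search("norwegian machine learning", DOCS))
```

The `alpha` knob is the usual tuning point in such fusions: lexical scoring anchors exact-term matches, while the vector side catches paraphrases and translations.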
What happens when AI tries to describe this image using only ONE set of concepts?

Picture this: An image with two motorcyclists wearing helmets. Many traditional Concept Bottleneck Models see:
✓ helmet
✓ wheels
✓ person
✓ vehicle

Seems good, right? But now answer: "How many bikers are in this image?" 🤔 The model can't tell you. Why? Because it compressed the entire image into a single concept vector. This is the fundamental limitation we tackle in our NeurIPS 2025 paper.

The Problem with Image-Level Concepts: When CBMs encode an entire image into one global concept representation, they lose critical information:
❌ Can't distinguish between "one person with a helmet" vs "two people with helmets"
❌ Can't answer "which object has wheels?"
❌ Can't reason about relationships between objects
❌ Struggle with counting, spatial reasoning, and multi-object scenes

It's like describing a complex scene using only a simple, single sentence: you lose the structure.

Our Solution: Object-Centric Concept Bottlenecks (OCB). Instead of one holistic encoding, we:
1️⃣ Detect objects first (using pretrained models like Mask-RCNN or SAM)
2️⃣ Extract concepts for EACH object + the full image
3️⃣ Aggregate intelligently to preserve both object-level and scene-level information

By moving from flat image-level encodings to structured object-level representations:
✅ Handles complex tasks: Multi-label classification, logical reasoning
✅ Maintains interpretability: You can trace decisions back to specific objects
✅ Preserves counts: Finally knows the difference between "a bike" and "three bikes"
✅ Outperforms baselines: Especially on tasks requiring object-centric reasoning

We also release a benchmark, COCOLogic, designed to break image-level approaches: classes like "Animal Meets Traffic" (requires detecting both livestock AND vehicles) or "Exactly two types of pets present." These tasks are designed to expose the limitations of holistic encodings, and our object-centric approach excels at them.
The Bigger Picture: Real-world visual reasoning is inherently compositional. We don't think about scenes as one blob of concepts—we think about objects, their properties, and their relationships. If we want interpretable AI that reasons like humans do, we need to move beyond flat, image-level representations. 📄 Paper: https://lnkd.in/d_Ybwq5h Huge thanks to David Steinmann, Antonia Wüst, and Kristian Kersting for this collaboration! 🙌 #NeurIPS2025 #AI #Interpretability #MachineLearning #ComputerVision #ConceptBottlenecks #XAI #NeuroSymbolic #NeuroExplicit
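The detect → per-object concepts → aggregate pipeline described in the post could be sketched roughly as follows. This is a toy illustration under loud assumptions, not the paper's implementation: `detect_objects` stands in for a pretrained detector (Mask-RCNN, SAM), `extract_concepts` stands in for a learned concept predictor, and sum-pooling is just one possible aggregation that happens to make the counting point visible.

```python
def detect_objects(image):
    # Stand-in for a pretrained detector: returns a list of object regions.
    # Here the "image" is a dict with pre-annotated toy regions.
    return image["objects"]

def extract_concepts(region, vocabulary):
    # Stand-in for a concept predictor: a binary activation per concept.
    return [1.0 if c in region["labels"] else 0.0 for c in vocabulary]

def ocb_encode(image, vocabulary):
    # 1) detect objects, 2) concepts per object AND for the whole image,
    # 3) aggregate into one structured representation.
    object_concepts = [extract_concepts(o, vocabulary) for o in detect_objects(image)]
    global_concepts = extract_concepts(image["whole"], vocabulary)
    # Sum-pooling over objects preserves counts ("two helmets" != "one helmet"),
    # which a single image-level concept vector discards.
    if object_concepts:
        pooled = [sum(col) for col in zip(*object_concepts)]
    else:
        pooled = [0.0] * len(vocabulary)
    return pooled + global_concepts

vocab = ["helmet", "wheel", "person"]
scene = {
    "whole": {"labels": {"helmet", "wheel", "person"}},
    "objects": [
        {"labels": {"person", "helmet"}},  # biker 1
        {"labels": {"person", "helmet"}},  # biker 2
    ],
}
enc = ocb_encode(scene, vocab)
print(enc)  # pooled object concepts followed by global concepts
```

With two helmeted bikers in the toy scene, the pooled half of the encoding carries the value 2.0 for "helmet" and "person", so a downstream classifier can answer "how many bikers?", whereas the global half alone only says the concepts are present.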
Exciting news! Our paper “Object-Centric Concept Bottlenecks” has been accepted at NeurIPS 2025! In this work, we combine concept-bottleneck models with object-centric representations. By enriching the concept space of CBMs, we enable more complex visual reasoning, taking a step toward more interpretable and object-aware vision systems. We’re also releasing the COCOLogic dataset, a benchmark derived from the MSCOCO dataset that evaluates a model’s ability to perform structured visual reasoning in a single-label classification setting. Unlike standard object recognition tasks, COCOLogic requires classification based on logical combinations of object categories, including conjunctions, disjunctions, negations, and counting constraints. Check it out here: https://lnkd.in/d_Ybwq5h Big thanks to my co-authors Wolfgang Stammer, Antonia W. and Kristian Kersting! #NeurIPS2025 #AI #Interpretability #MachineLearning #ComputerVision