Electronic Markets offers fast-track opportunities for the following three minitracks at the Hawaii International Conference on System Sciences (HICSS 60):
1) AI-enabled Digital Transformation for SMEs (EI). Co-Chairs: Yao Shi and Judith Gebauer
2) AI Ecosystems: Agents, Assistants, and Platforms (IN). Co-Chair: Rainer Schmidt
3) Managing Platforms and Ecosystems (OS). Co-Chair: Vladimir Sobota
Selected papers will be invited to submit an extended version to Electronic Markets, providing an opportunity for a possible journal publication. More details about these fast-track opportunities can be found here: https://lnkd.in/dmrvcKkX
Electronic Markets - The International Journal on Networked Business’ Post
More Relevant Posts
-
Semantics, LLMs and Ontologies: We are going through a period of intellectual turbulence. Under the banner of innovation, we are casually blending statistics (LLMs), structure (ontologies), and meaning (semantics) — three fundamentally distinct layers. By Dr. Nicolas Figay
-
Global Information Technology Industry Intelligence Report: Week Ending 27 March 2026
The final week of March 2026 has witnessed a fundamental restructuring of the global information technology landscape as it transitions from the hype-driven "generative era" into a disciplined "agentic epoch." This week has been defined by the convergence of trillion-parameter frontier models, the deployment of inference-optimised hardware architectures, and a landmark shift in the regulatory oversight of algorithmic management, particularly within the Australian jurisdiction. As enterprise software valuations bifurcate sharply along lines of AI integration, the industry is grappling with the dual pressures of massive capital expenditure and the urgent need to demonstrate tangible return on investment. This report provides an exhaustive analysis of the technological breakthroughs, geopolitical shifts, and corporate movements that shaped the IT sector during this pivotal week....
-
Roadmap: AI in SHM for Built Environments
Read here: https://lnkd.in/etE3zryZ
Proud to share our new roadmap addressing a critical gap: although artificial intelligence methods are widely researched in Structural Health Monitoring (SHM), very few solutions are actually deployed in real-world, safety-critical infrastructure. This paper moves beyond algorithm development and focuses on the system-level integration of AI in SHM. It addresses key challenges such as transparency, interpretability, security, certifiability, and decision-making readiness. The roadmap outlines a practical pathway towards field deployment, considering software architecture, data availability, and hardware integration. Developed through a global collaboration, the work aims to support the transition towards scalable and trustworthy AI-enabled infrastructure monitoring systems. Sincere thanks to Professor Simon Laflamme, Professor Erik Blasch, and Professor Filippo Ubertini for their outstanding leadership. Co-authored with good colleagues: Eleni Chatzi, Giuseppe Carlo Marano, Ertugrul Taciroglu, Javad Mohammadi, Ivan Izonin
#ArtificialIntelligence #StructuralHealthMonitoring #Infrastructure #Resilience
-
Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers https://buff.ly/qOF7NRb
-
🚀 Thrilled to share our paper was published at IEEE AIoT 2025 (out now): https://lnkd.in/geePDP9g! "Architecting Deterministic Agentic Workflows for Reliable Data Understanding Using LLMs" Here's the core problem: LLMs are powerful but unpredictable. Run the same prompt twice — get two different answers. That's fine for a chatbot. It's a dealbreaker for mission-critical data pipelines. We built a framework to fix that! The key insight: treat LLM tasks like a state machine. Break work into fine-grained steps with precise inputs, outputs, and validation gates. Add fixed seeds, scoped prompts, and multi-pass verification — and suddenly your AI agent behaves consistently. We tested it on a real data warehouse problem — extracting lineage and metric definitions from code, metadata, and docs — and got high fidelity across repeated runs. The bigger idea: AI systems that align with human cognitive workflows are more trustworthy and easier to audit. Determinism isn't the enemy of intelligence — it's what makes intelligence deployable. Grateful to my amazing co-authors Parag Paul, Eugene C. and Haya Sridharan for bringing this to life! #llm #agenticai #ai #dataengineering #determinism #machinelearning #airesearch #chicory
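The pattern the post describes can be sketched roughly as follows. This is a minimal illustration of the general idea, not the authors' actual framework: `Step`, `call_llm`, and the validation lambdas are hypothetical stand-ins, and `call_llm` is a deterministic stub where a real system would pass a fixed seed and zero temperature to an actual model.

```python
# Illustrative sketch: an LLM pipeline modelled as a state machine of
# fine-grained steps, each with a scoped prompt and a validation gate,
# plus a multi-pass consistency check before the workflow advances.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    prompt: str                       # narrowly scoped prompt for this step
    validate: Callable[[str], bool]   # gate: reject malformed outputs

def call_llm(prompt: str, seed: int = 0) -> str:
    """Deterministic stub; a real call would fix the seed and temperature."""
    return f"result:{prompt}:{seed}"

def run_workflow(steps: list[Step], passes: int = 2) -> dict[str, str]:
    """Run steps in order; each step is executed `passes` times and must
    produce identical, valid output before its result enters the state."""
    state: dict[str, str] = {}
    for step in steps:
        outputs = {call_llm(step.prompt) for _ in range(passes)}
        if len(outputs) != 1:
            raise RuntimeError(f"{step.name}: outputs diverged across passes")
        result = outputs.pop()
        if not step.validate(result):
            raise ValueError(f"{step.name}: failed its validation gate")
        state[step.name] = result
    return state

# Hypothetical steps echoing the paper's use case (lineage and metric
# extraction); the prompts and checks are invented for this sketch.
steps = [
    Step("extract_lineage", "List upstream tables for metric X",
         lambda s: s.startswith("result:")),
    Step("define_metric", "Define metric X from code and docs",
         lambda s: len(s) > 0),
]
```

Because every step's output is gated and cross-checked across passes, a divergent or malformed answer halts the pipeline instead of silently corrupting downstream state.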
-
I am pleased to share our Call for Abstracts for IFORS 2026 – the 24th Conference of the International Federation of Operational Research Societies, to be held 12–17 July 2026 in Vienna, Austria. https://ifors2026.at/home/ Together with Meliha Sermin Paksoy, I am organising the session “Building Responsible AI Governance for the Deep Technology Future” in the Data-driven Operations Research stream. This session welcomes contributions on governance frameworks, transparency, accountability, legal and regulatory dimensions, human oversight, auditability, compliance, and trustworthy Artificial Intelligence for deep technology applications. We warmly invite researchers, practitioners, and interdisciplinary experts working at the intersection of Artificial Intelligence, governance, law, operational research, and deep technology to submit their abstracts. To submit to this session, please use the session code: 66b1b9b7 through the conference abstract submission system. We look forward to inspiring discussions at IFORS 2026 in Vienna. #IFORS2026 #ResponsibleAI #AIGovernance #TrustworthyAI #OperationalResearch #DeepTech #ExplainableAI #DataDrivenOR #AIRegulation #Vienna
-
Over the past few weeks, I've noticed a fascinating and important consensus emerging in the AI governance community. A growing number of architects and thinkers are all pointing to the same fundamental conclusion: governance focused on post-execution audits is too late. We're seeing this articulated in several powerful ways: - Ricardo Muro on "Authority as a Runtime Condition" - George-Adrian Caboc on the "Decision Admissibility Architecture" - Jason Liao on the "Sentence-Execution Governance Boundary" - Roger Aeschbacher on the need for a "Signal" rule in these systems All of these ideas converge on a single, critical principle: a preventative, architectural gate must exist before an AI's proposed decision becomes an irreversible action. This is a conversation we find incredibly exciting, as it aligns with the core principles we've been building on for some time. When we were developing our own frameworks in 2025 and early 2026, we found that this "admissibility boundary" was the only way to effectively manage risk in high-stakes environments. For those who are following this emerging architectural pattern, you might find our work from that period to be a useful reference point. We operationalized these ideas into a few key components: - EFA Executive Overview (new Mar 2026): https://lnkd.in/e8YPzGCV - The Ring of Fire (Drift, Dec 2025): https://lnkd.in/dMCauHUc - Sovereign Substrate Admissibility (The Gate, Mar 2026): https://lnkd.in/eYhqnp9d - EFA Development History (The Constitution, Feb 2026): https://lnkd.in/ebnWmBZS Kudos to everyone pushing this vital conversation forward. #AIGovernance #ConvergentEvolution #SystemsDesign #ExecutionBoundary #EFA #SSA #Foresight
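The shared principle can be sketched in a few lines. This is my own illustration of a pre-execution admissibility gate, not any of the cited frameworks; `budget_check`, `scope_check`, and the action format are invented for the sketch.

```python
# Minimal sketch of an admissibility gate: every proposed action must pass
# all checks BEFORE execution. A post-execution audit could only record
# that an inadmissible action had already become irreversible.
from typing import Callable, Optional

Check = Callable[[dict], Optional[str]]   # returns a rejection reason, or None

def budget_check(action: dict) -> Optional[str]:
    return "over budget" if action.get("cost", 0) > 1000 else None

def scope_check(action: dict) -> Optional[str]:
    return "out of scope" if action.get("target") != "staging" else None

def admit(action: dict, checks: list[Check]) -> tuple[bool, list[str]]:
    """Collect every rejection reason; admit only if there are none."""
    reasons = [r for c in checks if (r := c(action)) is not None]
    return (not reasons, reasons)

def execute(action: dict, checks: list[Check]) -> str:
    ok, reasons = admit(action, checks)
    if not ok:
        return "blocked: " + "; ".join(reasons)   # never reaches the world
    return "executed"
```

The design choice is the one the post argues for: the gate sits architecturally between the AI's proposal and the effectful call, so inadmissible decisions are stopped rather than audited after the fact.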
-
The Digital Omnibus aims to streamline the #EU’s digital landscape by simplifying existing laws and regulations. With this in mind, we support the European Alliance for Research Excellence (EARE)’s recent Position Paper responding to the Simplification Omnibus package, to which #LIBER contributed during its development. 🔹 Rather than erecting barriers to research, harmonised and simplified digital and data regulations support research libraries in advancing research activities and knowledge exchange. Access the full Position Paper below:
📢 As negotiations on the EU’s Digital Simplification Omnibus package continue, EARE reiterates the importance of ensuring that simplification efforts genuinely strengthen Europe’s research and innovation ecosystem. EU policymakers must support efforts to promote open access and re‑use of data for research and innovation, reduce administrative burdens, and increase legal clarity for researchers, innovators, and start‑ups. 🔍 Key priorities that EU policymakers must consider include: ✅ Preserving the role of text and data mining (TDM) exceptions as a core enabler of AI training, scientific discovery, and Europe’s digital competitiveness. ✅ Clarifying research exceptions under the AI Act so that they cover the full research and development lifecycle, as well as modern public-private research collaboration. ✅ Ensuring that simplification of the EU data framework enhances, rather than restricts, access to research data and its re‑use. ✅ Aligning the definition of scientific research across EU legislation with today’s research realities, including public‑private collaboration. ✅ Safeguarding non‑discriminatory conditions for data re‑use and avoiding licensing practices that fragment the open data landscape. As discussions progress, EARE calls on EU institutions to work closely with the research and innovation community to address remaining uncertainties around data access, re‑use, and research‑driven AI development. 📣 Simplification should empower researchers and innovators, not introduce new barriers. 📄 Read EARE’s full position on the Digital Simplification Omnibus package here ➡️ https://lnkd.in/eKysKg93 #DigitalOmnibus #OpenData #Research #Innovation #AIAct #TDM #DataPolicy #EUpolicy
-
Even though his research laid the groundwork for L.L.M.s, Dr. LeCun argued that they were not the final answer to A.I. development. The problem with current systems, he said, is that they do not plan ahead. Trained solely on digital data, they do not have a way of understanding difficulties in the real world. “L.L.M.s are not a path to superintelligence or even human-level intelligence. I have said that from the beginning,” he said. “The entire industry has been L.L.M.-pilled.” - Yann LeCun https://lnkd.in/dD8A6NZi
-
How do we build a governance infrastructure for AI that actually works, keeps pace with the technology, produces meaningful safety outcomes, and scales beyond what governments can do alone? That’s the question Fathom brought to a full-day workshop at the International Association for Safe and Ethical Artificial Intelligence (IASEAI) conference in Paris. The day opened with a keynote from Dr. Gillian K. Hadfield, followed by panels featuring Dean Ball, Gemma Galdon Clavell, PhD, Dinah Rabe, Dr. Dylan Hadfield-Menell, Dr. Sebastian Hallensleben 🇪🇺, and Christine Graham. The afternoon focused on a hands-on tabletop exercise exploring how Independent Verification Organizations (IVOs) could function in practice across real-world governance scenarios. A few conclusions from the day:
➡️ The IVO framework changes incentive structures in productive ways
➡️ Authority without enforcement is theater
➡️ Definitions are load-bearing
➡️ The science of evaluation is more ready than commonly assumed
➡️ Accountability infrastructure turns failure into success
➡️ The best path forward is action
Fathom is continuing this work through a series of IVO tabletop exercises across industries and use cases. If you’re interested in running one for your sector or jurisdiction, reach out.
Danial Amin: second one relevant to us!