OWASP AI Exchange

Computer and Network Security

owaspai.org: the go-to resource for AI Security, feeding straight into international standards. Open source. 200 pages.

About us

The OWASP AI Exchange at owaspai.org is a collaborative working document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives. These include the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10, the OWASP LLM Top 10, and OpenCRE - which we want to use to provide the AI Exchange content through the security chatbot OpenCRE-Chat. Our mission is to be the authoritative source for consensus, foster alignment, and drive collaboration among initiatives - NOT to set a standard. In doing so, it provides a safe, open, and independent place for everyone to find and share insights.

Website
https://owaspai.org/
Industry
Computer and Network Security
Company size
51-200 employees
Type
Nonprofit

Updates

  • OWASP AI Exchange reposted this

    View profile for Rob van der Veer

    AI Pioneer (33+ Years) | Chief AI Officer at SIG | Leading International Collaboration on AI Standards (AI Act Security, ISO/IEC 5338 & 27090) | Founder, OWASP Flagship project AI Exchange | Co-Founder, OpenCRE

    Introducing: the A-word. AI is becoming a bit of a fixation. Maybe we should sometimes avoid the word and call it ‘the A-word’ instead - just to remind ourselves not to obsess. Last Thursday, I opened the BSides Amsterdam conference with an AI talk (surprise), and after that there were zero presentations on AI! And honestly, I think that is great. Yes, we need to deal with AI to use it well and to manage its risks - but it has also become a distraction from the thousand other things that matter in our work and lives. It even pushes us to treat AI as a goal in itself - and that is not what we need:

    📈 We should not just focus on how to apply AI, but on solving real business problems.
    🌟 We need to stop idolising AI and thinking it will solve everything. Only bet on the AI horse if you’re well-informed.
    😥 We should not build separate processes and frameworks just for AI, but integrate it into what already works.
    🔐 We had better not focus security only on exotic AI attacks. Many real risks are simple, such as prompt security.
    🗣️ Not every story, solution, or opinion needs “AI” in it to be valuable. It has become a bit too much.

    Of course, there are moments when we must talk about ‘the A-word’. I do it all the time - while trying to keep in mind the notions above. Try it. And if this resonates, please spread it to your connections for awareness. Let's stay grounded. #ai

  • OWASP AI Exchange reposted this

    View profile for Niklas Bunzel

    Research Scientist in Machine Learning | Fraunhofer SIT | ATHENE | TU-Darmstadt | OWASP AI Exchange

    Last week, I had the privilege of attending IEEE TrustCom 2025, where I presented two papers from our ATHENE Center project RoMa (Robustness in Machine Learning).

    The first paper dives into the evolving threat landscape of evasion attacks in continual learning systems - a critical area as AI systems increasingly adapt and grow over time. After each continual learning step (e.g., adding a new class), the effectiveness of evasion attacks can shift, typically becoming more effective (or staying as effective) - so they do transfer across CL steps. Adversarial training, while resource-intensive and limited to scenarios where you control both model and data, isn't a foolproof defense. (A minimal sketch of this kind of measurement follows below.)

    In the second paper, we explored the transferability of evasion attacks and how to assess the risk of susceptibility - a foundation for the risk assessment framework Disesdi Susanna Cox 🕷️ and I further developed in our recent arXiv paper.

    Beyond the research, I was proud to promote the OWASP AI Exchange and our mission: making AI secure worldwide. As AI systems become more dynamic, so must our defenses. Let's build trustworthy AI together! #SecureAI #ContinualLearning #AdversarialAttacks #RiskEstimation #OWASPAIExchange
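    A minimal sketch of that measurement, assuming PyTorch and using FGSM as a stand-in evasion attack (the papers' actual attacks and continual-learning setup differ); 'cl_steps' and 'train_step' are hypothetical placeholders:

        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, x, y, eps=0.03):
            # Craft adversarial examples with the fast gradient sign method.
            x = x.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x), y).backward()
            return (x + eps * x.grad.sign()).clamp(0, 1).detach()

        def attack_success_rate(model, x_adv, y):
            # Fraction of adversarial examples that flip the model's prediction.
            with torch.no_grad():
                return (model(x_adv).argmax(dim=1) != y).float().mean().item()

        def track_transfer(model, cl_steps, train_step, eval_x, eval_y):
            # Craft attacks against the current checkpoint, then re-measure
            # their success after each continual-learning step (e.g. a new class).
            x_adv = fgsm_attack(model, eval_x, eval_y)
            rates = []
            for step_data in cl_steps:
                train_step(model, step_data)  # hypothetical CL update
                rates.append(attack_success_rate(model, x_adv, eval_y))
            return rates  # flat or rising rates mean the attacks transfer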

  • Our own Jolly is presenting at IWWOF on AI safety and security.

    AI is changing the game for small and medium-sized enterprises (SMEs). It brings new ways to grow, create, and connect. But here's the truth: digital transformation without safety can put everything at risk. That's why IWWOF would like to invite you to this event: "AI Safety Isn't Optional in Today's Cybersecurity Landscape for SMEs."

    Together, we'll explore:
    🔹 Why AI safety needs to be part of your cybersecurity culture
    🔹 How everyday AI tools can expose sensitive data (and what to do about it)
    🔹 Practical steps to make AI safety part of your cybersecurity culture

    Whether you're already using AI or just getting started, this session will help you feel confident, secure, and ready to lead your business into the future.

    ✨ Event info:
    📅 Date: 13.11.2025
    ⏰ Time: 16:30 to 17:30
    📍 Venue: Business Turku, Tykistökatu 4 B, ElectroCity (street level), 20520 Turku
    🌍 Mode: Hybrid
    🔗 Link for online participation: https://lnkd.in/ghyZpRfk
    👉 Register for on-site participation: https://lnkd.in/gWhSf3KQ

    Let's build a safer, smarter future together! #AI #AISafety #Cybersecurity #SMEs #DigitalTransformation #IWWOF

  • Great work by Michael Novack: using NotebookLM to put together a video explaining evasion attacks and the clever innovations that AI Exchange star members Niklas Bunzel and Disesdi Susanna Cox 🕷️ have just published.

    View profile for Michael Novack

    Technologist | Advocate for approachable tech

    Thanks Disesdi Susanna Cox 🕷️ for your paper on quantifying the attack space for AI systems. It's a really interesting approach compared to just throwing various language attacks at a system and hoping for the best. I made a video with NotebookLM that helped me understand it, so I thought it might help others. I did read the paper afterwards to validate the video's accuracy. Original paper: https://lnkd.in/eXWd7hew

  • Our very own Iryna Schwindt shares her insights on the recent 'Women in Cybersecurity Podcast'. She has been, and still is, an important asset to our team.

    Today we are releasing a new episode with Iryna Schwindt - Lead Secure-by-Design Manager at Vodafone Group, where she embeds security controls and guardrails across digital channels and AI-driven products serving millions of users. With a career spanning roles in cybersecurity engineering, risk management, and cloud security across Azure, AWS, and GCP, Iryna has become a leading voice on AI risk management, AI red teaming, and responsible AI implementation. During our conversation, she shares her inspiring journey into cybersecurity - from her early research on cryptography and embedded systems to leading Vodafone's secure AI initiatives - and provides actionable advice for women entering or advancing in this dynamic field. Full episode: https://lnkd.in/gFf4KFi4 #WCP #womenincybersecurity #security #ai #career

  • Our (OWASP AI Exchange) sponsor, AI Security Academy, is bringing the heat to BlackHat KSA with a hands-on AI Security Challenge that every practitioner needs to experience!

    𝐓𝐡𝐢𝐬 𝐢𝐬 𝐰𝐡𝐞𝐫𝐞 𝐭𝐡𝐞𝐨𝐫𝐲 𝐦𝐞𝐞𝐭𝐬 𝐩𝐫𝐚𝐜𝐭𝐢𝐜𝐞:
    • Deepfake detection & exploitation
    • Shadow AI discovery & mitigation
    • LLM vulnerabilities & prompt injection
    • Real-world attack scenarios

    𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀: The AI security skills gap is REAL. Events like this are critical for building the next generation of defenders who can secure AI systems at scale.

    Dec 2-4 | BlackHat KSA | AI Security Academy booth
    Win prizes, gain skills, protect AI systems. This is how we build a safer AI ecosystem together! See you there!

    #AISecurity #BlackHatKSA #OWASP #AIExchange #Hackathon #HandsOnLearning #SecureAI

    View organization page for AI Security Academy

    𝐀𝐫𝐞 𝐲𝐨𝐮 𝐫𝐞𝐚𝐝𝐲 𝐭𝐨 𝐡𝐚𝐜𝐤 𝐀𝐈 𝐀𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧/𝐒𝐲𝐬𝐭𝐞𝐦? Compete in AISA's AI Security Challenge live at BlackHat KSA. Solve Deepfake, Shadow AI, and LLM exploits. 𝑾𝒊𝒏 𝒑𝒓𝒊𝒛𝒆𝒔 𝒂𝒏𝒅 𝒃𝒓𝒂𝒈𝒈𝒊𝒏𝒈 𝒓𝒊𝒈𝒉𝒕𝒔. ⚡ Coming Dec 2-4, at the AI Security Academy booth - next to the BlackHat booth and two booths away from OffSec. #AIsecurity #Hackathon #BlackHatKSA #OWASP

    • No alternative text description for this image
  • The AI Exchange blows the whistle on prompt security.

    View profile for Rob van der Veer

    AI Pioneer (33+ Years) | Chief AI Officer at SIG | Leading International Collaboration on AI Standards (AI Act Security, ISO/IEC 5338 & 27090) | Founder, OWASP Flagship project AI Exchange | Co-Founder, OpenCRE

    There's an elephant in the room of AI: the security of your prompt. We need to talk about it. When you send input to a cloud AI - ChatGPT, or your own app using a vendor's model - where does your data go? Is it stored? Is it used for training? Who can access it?

    Strangely, this topic gets little attention. It's not in the LLM Top 10 - understandable, because you only get ten items and there's nothing cool or really AI-specific about prompt security. Some prefer to avoid the discussion, afraid it might slow down AI adoption. Whenever I raise this in talks, people figuratively cover their ears and go 'la-la-la'. Yet this is the number one question we get from clients at Software Improvement Group and a key concern in the threat model of the OWASP AI Exchange.

    Here's what the Exchange advises you to check when your model is hosted by a vendor (90% of the cases):

    1️⃣ Where does the model run? Is the model running in the vendor's processes or in your own virtual private cloud? Some vendors say you get a 'private instance', but that may refer to the API, not the model. If the model runs on a cluster operated by your vendor, your data leaves your environment in clear text. Vendors will minimize storage and transfer, but they may log and monitor.

    2️⃣ What are the data retention rules? Has a court required the vendor to retain logs for litigation? This happened to OpenAI in the US for a period of time.

    3️⃣ What exactly is logged and monitored? Read the small print. Is logging enabled, and if so, what is logged? And what is monitored - by operators or by algorithms? And in the case of monitoring algorithms: how is that infrastructure protected? Some vendors allow you to opt out of logging, but only with specific licenses.

    4️⃣ Is your input used for training? This is a common fear, but in the vast majority of cases the input is not used. If vendors did this secretly, it would get out, because there are ways to tell.

    If you can't accept the risk for certain data, then hosting your own (smaller) model is the safest option. Typically it won't be as good - and there's the catch-22. I'll include a link with more details in the comments. (A sketch of these checks as a simple review checklist follows below.)

    Remember, having unencrypted data in the vendor's cluster is not unique to AI. It's the same for other multi-tenant SaaS services, such as commercial hosted office suites. If attackers compromise the vendor's infrastructure or its administrators, they may access your data. When weighing this risk, compare it fairly: the vendor may still protect that environment better than you can protect your own.

    Sometimes it's best to face the risks and make informed decisions, rather than ignore them until something goes terribly wrong. Don't shoot the messenger! 🙂 #ai #aisecurity
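    As a minimal illustration (not an official AI Exchange artifact), the four checks above could be captured as a per-vendor review checklist; the class and field names here are made up for the example:

        from dataclasses import dataclass

        @dataclass
        class VendorPromptSecurityReview:
            # One record per AI vendor; each flag mirrors a check from the post.
            vendor: str
            model_runs_in_own_vpc: bool          # 1. where does the model run?
            retention_policy_reviewed: bool      # 2. retention rules, incl. legal holds
            logging_and_monitoring_known: bool   # 3. what exactly is logged/monitored?
            input_excluded_from_training: bool   # 4. is your input used for training?
            notes: str = ""

            def open_questions(self):
                # Return the checks that still need an answer from the vendor.
                labels = {
                    "model_runs_in_own_vpc": "Confirm where the model actually runs",
                    "retention_policy_reviewed": "Review retention rules and legal holds",
                    "logging_and_monitoring_known": "Read the small print on logging and monitoring",
                    "input_excluded_from_training": "Get a training-use exclusion in writing",
                }
                return [msg for name, msg in labels.items() if not getattr(self, name)]

        review = VendorPromptSecurityReview(
            vendor="ExampleAI", model_runs_in_own_vpc=False,
            retention_policy_reviewed=True, logging_and_monitoring_known=False,
            input_excluded_from_training=True,
        )
        print(review.open_questions())  # -> the two checks still open for 'ExampleAI'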
