CDI Code of Ethics

I felt compelled to post a quick summary of the ACDIS & AHIMA CDI Code of Ethics after hearing some of my CDI colleagues describe the pressure placed on them to review records, issue more queries, capture more CC/MCCs, and generate more revenue. This pressure emanates from CFOs who have been convinced by prominent CDI consulting companies pushing their CDI consulting services and CDI software. Typical claims made by these firms include "we can get you X million dollars every year for the next three years, just sign this contract for X million dollars," or, buy our software and "you will achieve 5:1 ROI starting on Day 1: an average of $2.5M in annual net new revenue per 10K patient discharges."

CFOs are being misled to believe that CDI is in the business of bringing in more revenue through more queries. More queries without improvement in the quality of physician documentation generate more denials, period. "Durable, sustainable net patient revenue is a byproduct of solid physician documentation that reflects the clinical truth." -Cesar M. Limjoco, MD

CDI has, in many respects, lost its way, caught up in coding CCs/MCCs instead of collaborating with physicians to enable and facilitate better documentation. In the process, the profession as a whole is deviating from the ACDIS CDI Code of Ethics.

📘 ACDIS Code of Ethics — Core Principles & References

🧭 1. Integrity in Documentation: Commitment to accurate, complete, and honest documentation that genuinely reflects clinical care. Prohibition against altering or suppressing information to manipulate outcomes.

👩‍⚕️ 2. Ethical Query Practice: Queries must be neutral, non-leading, and based on clinical indicators. Forbidden: leading queries, introducing unsupported diagnoses, or querying when no clinical basis exists.

💰 3. Avoiding Financial Manipulation: No participation in practices that inappropriately increase payment, distort data, or misrepresent medical necessity.

🎓 4. Ongoing Education & Expertise: Maintain and enhance professional knowledge through continuing education, including coding standards and ethical guidelines.

👥 5. Team Collaboration: Work collegially with providers, coders, and quality teams. Commit to interdisciplinary education to support accurate documentation and compliant CDI processes.

🔐 6. Confidentiality & Compliance: Strictly protect patient confidentiality and access only necessary health information. Report any unethical, non-compliant, or unlawful behavior through appropriate channels.

⚖️ 7. Professional Conduct & Reporting: Uphold the highest standards of integrity, honesty, and professionalism. Take action against unethical behavior, even among peers.

#CDI #codeofethics #moneytalkscodeofethicswalks #misnomer #hoodwinkedbyconsultingcompanies
CDI Code of Ethics: A Call for Integrity and Collaboration
More Relevant Posts
-
𝐌𝐨𝐬𝐭 𝐥𝐞𝐚𝐝𝐞𝐫𝐬 𝐭𝐡𝐢𝐧𝐤 𝐞𝐭𝐡𝐢𝐜𝐬 𝐢𝐬 𝐚𝐛𝐨𝐮𝐭 𝐛𝐢𝐠 𝐬𝐜𝐚𝐧𝐝𝐚𝐥𝐬. In reality, the damage often stems from small shortcuts that become ingrained habits. Set redlines early, or you will pay later. 𝐌𝐚𝐤𝐞 𝐧𝐨𝐧-𝐧𝐞𝐠𝐨𝐭𝐢𝐚𝐛𝐥𝐞𝐬 𝐞𝐱𝐩𝐥𝐢𝐜𝐢𝐭, 𝐯𝐢𝐬𝐢𝐛𝐥𝐞 𝐚𝐧𝐝 𝐞𝐧𝐟𝐨𝐫𝐜𝐞𝐝.

𝐀 𝐬𝐢𝐦𝐩𝐥𝐞 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧 𝐟𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤: 𝐂𝐋𝐄𝐀𝐑
→ 𝐂𝐥𝐚𝐫𝐢𝐟𝐲 𝐢𝐧𝐭𝐞𝐧𝐭: purpose and who is affected.
→ 𝐋𝐞𝐠𝐚𝐥 𝐚𝐧𝐝 𝐩𝐨𝐥𝐢𝐜𝐲 𝐜𝐡𝐞𝐜𝐤: laws and internal standards.
→ 𝐄𝐱𝐩𝐨𝐬𝐮𝐫𝐞 𝐚𝐧𝐝 𝐡𝐚𝐫𝐦: customer impact and reputational risk.
→ 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: named owner and approval path.
→ 𝐑𝐞𝐜𝐨𝐫𝐝: log the decision and set a review date.

𝐓𝐡𝐞 𝟓 𝐞𝐭𝐡𝐢𝐜𝐚𝐥 𝐫𝐞𝐝𝐥𝐢𝐧𝐞𝐬:
1️⃣ 𝐃𝐚𝐭𝐚 𝐦𝐢𝐬𝐮𝐬𝐞: Using data beyond the stated purpose or without consent. Controls: data use declaration, DPIA, DLP alerts, data owner signoff.
2️⃣ 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐬𝐡𝐨𝐫𝐭𝐜𝐮𝐭𝐬: Skipping consent, retention or minimisation to move faster. Controls: privacy checklist, consent logs, retention schedule, privacy officer signoff.
3️⃣ 𝐔𝐧𝐬𝐚𝐟𝐞 𝐫𝐨𝐥𝐥𝐨𝐮𝐭: Shipping without guardrails or rollback, weak testing, and no post-release monitoring. Controls: go-live checklist, kill switch, rollback plan, red team test, on-call plan.
4️⃣ 𝐅𝐚𝐥𝐬𝐢𝐟𝐢𝐞𝐝 𝐦𝐞𝐭𝐫𝐢𝐜𝐬: Vanity numbers, cherry picking, and hiding adverse results. Controls: metric dictionary, independent QA, raw logs preserved, audit trail.
5️⃣ 𝐕𝐞𝐧𝐝𝐨𝐫 𝐧𝐨𝐧-𝐜𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞: Missing certifications or controls, hidden sub-processors, and weak data residency. Controls: right to audit, SOC 2 and ISO 27001 evidence, contract clauses, penalties, and exit plan.

𝐒𝐞𝐞 𝐢𝐭 → 𝐂𝐚𝐥𝐥 𝐢𝐭 → 𝐄𝐬𝐜𝐚𝐥𝐚𝐭𝐞 𝐢𝐭
▶️ 𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭: Use an Ethics Decision Log with scenario, redline type, evidence link, affected customers, legal or policy refs, risk rating, owner, decision, follow-up date.
▶️ 𝐄𝐬𝐜𝐚𝐥𝐚𝐭𝐞: Level 1: product owner and risk partner within 24 hours. Level 2: privacy officer, CISO, data ethics group, if customer impact or legal risk. Level 3: SteerCo or Board for material harm or media exposure.
▶️ 𝐒𝐭𝐨𝐩 𝐫𝐮𝐥𝐞: if any redline is breached, pause release and invoke rollback.
𝐒𝐢𝐦𝐩𝐥𝐞 𝐭𝐞𝐬𝐭: Would you be comfortable if this decision were on the front page with your name attached? 𝐈𝐟 𝐧𝐨𝐭, 𝐢𝐭 𝐢𝐬 𝐚 𝐫𝐞𝐝 𝐥𝐢𝐧𝐞. 𝐓𝐫𝐮𝐬𝐭 𝐢𝐬 𝐲𝐨𝐮𝐫 𝐫𝐞𝐚𝐥 𝐥𝐢𝐜𝐞𝐧𝐜𝐞 𝐭𝐨 𝐨𝐩𝐞𝐫𝐚𝐭𝐞. Leadership is not just what you achieve; it is what you refuse to compromise. P.S. Which redline do you see most under pressure in real-world delivery and what control helps you hold the line? #Leadership #Ethics #Riskmanagement #Governance #Compliance #Trust
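The Ethics Decision Log fields listed above can be sketched as a small record type. This is a minimal illustration only; the class and method names are my assumptions, not part of any standard or of the original framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EthicsDecisionLogEntry:
    # Fields mirror the log suggested in the post: scenario, redline type,
    # evidence link, affected customers, legal/policy refs, risk rating,
    # owner, decision, follow-up date.
    scenario: str
    redline_type: str          # e.g. "data misuse", "privacy shortcut"
    evidence_link: str
    affected_customers: int
    policy_refs: list          # legal or internal-policy references
    risk_rating: str           # e.g. "low" / "medium" / "high"
    owner: str                 # named owner, per the accountability step
    decision: str
    follow_up: date            # review date, per the "Record" step

    def breaches_redline(self) -> bool:
        # Stop rule from the post: a high-risk breach pauses release.
        return self.risk_rating == "high"
```

A team might append one entry per escalation and filter on `breaches_redline()` to drive the pause-and-rollback decision.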
-
Proud of achieving a distinction for "AI Ethics, Compliance and Governance" at Saïd Business School, University of Oxford. For the written assignment, we were required to apply and evaluate fairness metrics and identify prohibited AI for a recruitment program - an activity close to my heart and aligned with my DEI expertise.

"Hi Felicity, Thanks for your submission. This is an excellent and well-structured response that demonstrates strong legal understanding and thoughtful ethical reasoning throughout. You’ve clearly grounded your analysis in the AI Act, and your engagement with fairness metrics and social media data use is particularly sharp.

Section 1: Risk Classification and Compliance Mandates
You correctly classify ValueCheck as a high-risk AI system under Article 6(2) and Annex III(4)(a), with a strong explanation of its role in recruitment and potential impact on candidate rights. Your analysis of the compliance mandate is detailed. You walk through the key obligations under Articles 9 to 15 clearly, including risk management, data governance, transparency, and human oversight. You also go beyond the basics by including provider obligations (Article 16 onward), documentation retention, post-market monitoring, and the CE conformity process. The only slight improvement would be to more directly connect specific compliance duties to potential risks in recruitment, such as data bias or opacity in automated evaluations.

Section 2: Fairness Metric Analysis
You provide a clear and accurate explanation of demographic parity and explain its limitations effectively. Your critique shows a strong understanding of how forced parity can mask true alignment or disadvantage qualified candidates. Your proposed alternatives, equal opportunity and equalised odds, are entirely appropriate. You explain both metrics well, distinguishing their focus on true positives and false positives. The recommendation to apply these metrics to a wider set of protected attributes (beyond gender) demonstrates excellent awareness of inclusion and bias mitigation principles.

Section 3: Social Media Data Usage
Your legal and ethical evaluation of the three options is well judged. Option 1 is rightly flagged as a prohibited form of social scoring under Article 5(1)(c). Option 2 is correctly identified as doubly non-compliant. It involves biometric data and inferred beliefs (Article 5(1)(g)) and also reinforces prohibited social profiling. You show a strong grasp of how inferred associations can breach AI ethics and data protection principles. In Option 3, you recognise that it includes human oversight and value alignment rationale, but also correctly flag that it constitutes emotion inference, which is prohibited in employment contexts under Article 5(1)(f). This nuanced analysis demonstrates excellent judgement.

Conclusion and Suggestions
This is a highly articulate, well-evidenced submission with clear legal grounding and thoughtful ethical engagement. Very well done."
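The fairness metrics discussed above can be made concrete with a few lines of code. This is a minimal sketch using plain Python (no fairness library assumed); the function names are my own, and the metrics follow their standard textbook definitions: demographic parity compares selection rates across groups, while equal opportunity compares true-positive rates among actually qualified candidates:

```python
def selection_rate(y_pred, group, g):
    """Share of positive predictions within group g."""
    picks = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(picks) / len(picks)

def demographic_parity_diff(y_pred, group):
    """Largest gap in selection rate between any two groups (0 = parity)."""
    rates = [selection_rate(y_pred, group, g) for g in set(group)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_true, y_pred, group):
    """Largest gap in true-positive rate (recall) between groups.

    Unlike demographic parity, this conditions on actual qualification
    (y_true == 1), so it does not force equal outcomes onto groups with
    unequal base rates -- the limitation the assignment critique raises.
    """
    tprs = []
    for g in set(group):
        preds = [p for t, p, grp in zip(y_true, y_pred, group)
                 if grp == g and t == 1]
        tprs.append(sum(preds) / len(preds))
    return max(tprs) - min(tprs)
```

Equalised odds extends the same idea by additionally comparing false-positive rates across groups.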
-
"Let ethics be your architecture, not your add-on."

AI isn't just about innovation and efficiency; it's about building systems that are fair, transparent, and responsible from the ground up. As business leaders venturing into AI, it's crucial to prioritize ethics as the foundation of your AI strategy rather than treating it as an afterthought.

Why should ethics be at the core of your AI initiatives?

First, consider trust. Customers and clients are becoming increasingly aware of how their data is used. They expect businesses to handle their information responsibly. When ethics are integrated into the architecture of your AI systems, you build trust and demonstrate accountability, which can differentiate your brand in a competitive market.

Second, ethical AI helps mitigate risks. AI models are only as good as the data they're trained on. If biases exist in your data, those biases can be perpetuated and even amplified by AI systems. By embedding ethics into your AI strategy, you can actively work to identify and address these biases, reducing the risk of unintentional harm and ensuring fair outcomes.

Third, regulation is catching up with technology. Governments worldwide are implementing stricter regulations regarding AI's use, especially concerning privacy and discrimination. By establishing an ethical framework from the outset, your business can stay ahead of legal requirements, avoiding potential fines and reputational damage.

How can you make ethics the foundation of your AI strategy?
1. Foster a culture of ethical awareness: Encourage open discussions about ethics within your team and provide training on identifying ethical concerns in AI projects.
2. Develop clear guidelines: Establish policies that articulate your ethical stance on AI, covering data privacy, fairness, and accountability.
3. Engage diverse perspectives: Include diverse voices in your AI development process to better identify potential biases and create more inclusive solutions.
4. Monitor and iterate: Implement continuous monitoring of AI systems to identify ethical breaches and adjust strategies as needed.

By making ethics the bedrock of your AI approach, you not only create a more robust and trustworthy system but also contribute to a more equitable digital future.

P.S. How do you currently address ethical considerations in your AI projects?

Made it this far to read? Awesome! But if the post didn’t help or teach you anything, check out the video below. It may be a bit unrelated, but you might still learn something, or at the very least, be entertained (hopefully).

*******
Want to learn how Artificial Intelligence (AI) can improve your business operations? DM me or follow me here: https://lnkd.in/gKHyq6gN
🔄 Repost this post
-
🧠 The AI Misuse Report You Weren’t Supposed to Read

Case X revealed what expensive “AI safety reports” don’t want you to see: the ethics, the system, and the silence behind them. **Module 1.2 Released**

Recent social-media monitoring has revealed several cases of AI misuse and ethical deviation. This program, based on a verified real-world case study (Case X), aims to promote long-term education and professional dialogue on AI governance and technical integrity.

🧭 Background & Purpose
The era of one-off, high-priced AI ethics reports is ending. As technology and misuse scenarios evolve rapidly, only a multi-stage, open, and systematic education framework can offer sustainable defense and decision-support capacity. We therefore launch the Ethical Assistant Education Initiative, whose core goals are to:
- Build an open-learning framework for AI governance and ethics.
- Provide non-commercial, professionally structured, and reusable training resources.
- Foster shared accountability and knowledge exchange between enterprises and developers.

📊 3-Stage Strategic Framework
Our education roadmap aligns risk tiers with professional roles, combining real-case analysis and ethical engineering practice.

S1 (this month): AI Governance & Brand Integrity
Audience: Chief Risk Officers (CROs), brand-safety teams.
Theme (Technical Volume): Structural Anatomy of Risk Coupling
Focus: Using Case X to deliver a framework for risk recognition and ethical safeguards, helping organizations assess how AI affects brand trust and public reputation over time.

S2: Model Ethics & Technical Integrity
Audience: LLM developers, training and product-safety engineers.
Theme (Engineering Volume): Reconstructing Trust and Risk in AI Systems
Focus: Analyzing model compliance, hallucination patterns, and prompt-engineering risks to build a scientifically grounded alignment and defense perspective.

S3: Cognitive Safety & Organizational Risk Management
Audience: HR leaders, training managers, DEI officers.
Theme (Organizational Volume): Cognitive Safety in the AI Era
Focus: Integrating social-psychological insights with AI governance to help organizations mitigate employee radicalization and digital-dependency risks.

💡 Join Stage 1 Training
1️⃣ The first module of the Technical Volume launches 27/10, with weekly updates.
2️⃣ Share your most urgent topics and challenges in the comments; your feedback will help refine future modules and maximize real-world value.

Follow and share this open-knowledge initiative so AI governance and ethical literacy can become a shared industry language.

Notes: Released under CC BY-NC-ND License for educational, non-commercial use. Re-use must cite the Ethical Assistant Education Initiative 2025 and Case X. Works omitting source credit will lose conceptual and ethical consistency. © User G / Ethical Assistant Education Initiative 2025 · All rights reserved.

#AItraining #AIethic #AIsafety #HR #aigovernance #responsibleai #AIEducation #AIlegal #artificialintelligence #AI
-
Dear AI Auditors,

AI Ethics and Accountability in Auditing

AI systems are making decisions once reserved for humans, from approving loans to screening job candidates to diagnosing patients. But as AI becomes more powerful, it also becomes more dangerous when left unchecked. Ethics and accountability must be treated as audit-critical concepts. An AI system that lacks ethical oversight can cause reputational, legal, and societal harm.

📌 Define the Ethical Baseline: Auditors must first understand what “ethical AI” means in the organization’s context. Review whether governance frameworks incorporate principles of fairness, transparency, accountability, and human oversight. Check for policies aligned with global standards like the OECD AI Principles, ISO 42001, the NIST AI Risk Management Framework, or the EU AI Act.

📌 Assess Governance and Oversight: AI governance must extend beyond technical performance. Confirm that an AI Ethics Committee or similar body exists to review high-risk use cases. Determine if ethical risks are assessed before model deployment and periodically re-evaluated during operation.

📌 Transparency and Explainability: Accountability requires clarity. Verify that AI decisions can be explained to impacted stakeholders, whether customers, regulators, or employees. Ensure documentation clearly describes how inputs drive outcomes, especially in regulated industries like finance or healthcare.

📌 Bias and Fairness Auditing: Audit fairness metrics and test results. Does the organization regularly check for bias in datasets and model outputs? Confirm whether teams measure disparate impact and take corrective action when bias is found.

📌 Human-in-the-Loop Controls: Even in advanced AI systems, humans should retain decision authority in critical areas. Auditors should test whether automated recommendations are reviewed by qualified personnel before final decisions are made.

📌 Accountability and Responsibility: Every AI system should have a named owner. Auditors must confirm that accountability for model outcomes is assigned, documented, and communicated, with escalation paths in place for errors or issues.

📌 Monitoring and Incident Handling: AI ethics is not static. Review whether ethical incidents (e.g., discrimination complaints, misclassifications, or unintended outcomes) are tracked, investigated, and reported. Ensure lessons learned feed back into model improvements.

📌 Evidence for the Audit File: Collect AI governance policies, bias testing reports, explainability documentation, committee meeting minutes, and ethical incident logs. These artifacts demonstrate that the organization treats ethics as a control domain, not an afterthought.

AI ethics auditing ensures that technology serves humanity, not the other way around. In an age where algorithms influence real lives, auditors are the guardians of digital conscience.

#AIEthics #AIAudit #Governance #ResponsibleAI #RiskManagement #AIAccountability #AITrust #EthicalAI #CyberVerge
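One concrete form of the disparate-impact check mentioned above is the selection-rate ratio. A minimal sketch, assuming binary predictions and a group label per record; the 0.8 threshold is the conventional "four-fifths" rule of thumb from US employment-selection guidelines, not something the audit checklist itself prescribes:

```python
def disparate_impact_ratio(y_pred, group):
    """Ratio of the lowest group selection rate to the highest.

    Values below ~0.8 are commonly treated as evidence of adverse
    impact (the "four-fifths" rule). Assumes every group present in
    `group` has at least one record.
    """
    rates = {}
    for g in set(group):
        picks = [p for p, grp in zip(y_pred, group) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return min(rates.values()) / max(rates.values())
```

An audit file might record this ratio per protected attribute alongside the corrective actions taken when it falls below the threshold.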
-
AdvaMed has updated its Code of Ethics to include additional guidance on Digital Health Technologies. AdvaMed’s Code of Ethics provides timely, effective guidance that supports ethical business conduct across medtech, based on the cornerstone values of innovation, education, integrity, respect, responsibility, and transparency. As Digital Health Technologies play an increasingly central role in patient care, clinical decision-making and health data management, AdvaMed’s Board of Directors unanimously approved revisions to expand guidance on responsible data practice. The updated code emphasizes ethical and transparent data use to improve patient access and outcomes, protect patient privacy, and align with best practices. “With the fast-paced development of digital health technologies transforming health care, this Code update recognizes the industry’s longstanding commitment to delivering critical technologies to patients safely and effectively,” said Peter J Arduini, President and CEO of GE HealthCare and Chairman of the AdvaMed Board of Directors. “With these updates to the Code, AdvaMed reaffirms our dedication to safeguarding patient data so that patients can confidently benefit from innovative digital health technologies that are improving patient outcomes worldwide,” said Scott Whitaker, President and CEO of AdvaMed. By following these principles, companies can navigate the evolving intersection of technology and health care while strengthening trust with patients, providers, and regulators. The updated AdvaMed Code takes effect on November 1, 2025. Learn more about AdvaMed’s Code of Ethics update here: https://lnkd.in/ehMPTG6b
-
AdvaMed’s 2025 Code of Ethics Update: Assuring Trust and Sovereignty in the AI/Digital Health Era

Effective November 1, 2025, AdvaMed has expanded its Code of Ethics on Interactions with Health Care Professionals to include new guidance on Digital Health Technologies (DHTs) — a timely step as medtech converges with data science, AI, and software-as-a-medical-device.

What’s new and why it matters:
• Broader digital scope — Explicit recognition that data-driven devices, algorithms, and platforms are now central to medtech innovation.
• Ethical data use — Clear expectations for transparency, privacy, explainability, and responsible stewardship of health data.
• Continuous governance — Stronger guidance on auditability, bias monitoring, and ongoing oversight rather than one-time compliance.
• Interoperability & equity — Encourages secure data sharing that improves outcomes while protecting consent and avoiding disparities.

As AI and digital systems reshape clinical care, ethical data sovereignty must anchor this transformation — ensuring that patients retain genuine ownership, consent, and agency over how their information is used, shared, and monetized. Embedding sovereignty principles safeguards privacy, legitimacy, and long-term trust.

AdvaMed’s emphasis on auditability, governance, and accountability provides a robust baseline. But meaningful leadership requires going further: designing for sovereignty — not just regulation — so that trust becomes intrinsic, not optional. As medtech converges with software, companies that elevate data ethics will earn enduring credibility with clinicians, regulators, investors, and—above all—patients.

Learn more: AdvaMed Code of Ethics 2025 – DHT Section Update https://lnkd.in/eruM-rpG
-
Thanks to the medtech industry’s and AdvaMed’s longstanding commitment to leading with integrity, I’m pleased to share that our updated Code of Ethics will take effect Nov. 1, 2025. It reaffirms industry guidance on handling data responsibly, ethically, and transparently, so patients continue to have access to the latest life-improving data-driven medtech.
-
𝗪𝗵𝗮𝘁 𝗶𝗳 𝗲𝘁𝗵𝗶𝗰𝘀 𝗶𝘀 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝗰𝗼𝗺𝗽𝗲𝘁𝗶𝘁𝗶𝘃𝗲 𝗲𝗱𝗴𝗲 𝗶𝗻 𝗔𝗜?

Across governments, boardrooms, and even pulpits, the message is converging: AI must serve human dignity, not replace it. California’s new law just raised the floor.

𝗪𝗵𝗮𝘁 𝗖𝗔’𝘀 𝗦𝗕𝟱𝟯 𝗺𝗲𝗮𝗻𝘀, 𝗶𝗻 𝗽𝗹𝗮𝗶𝗻 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲
On Sept 29, California enacted SB53. Developers of large frontier AI models are required to publish safety protocols aligned to recognized standards and to log and report major incidents, with large fines for violations. Revenue and compute thresholds define scope, but the expectations are a smart baseline for all. Leaders should expect SB53-style transparency even without a legal mandate.

𝗛𝗼𝘄 𝗶𝘁 𝗺𝗮𝗽𝘀 𝘁𝗼 𝗴𝗹𝗼𝗯𝗮𝗹 𝗴𝘂𝗶𝗱𝗲𝗽𝗼𝘀𝘁𝘀
SB53’s emphasis on disclosure, risk testing, and incident learning fits global AI frameworks (NIST, ISO 42001/23894, EU AI Act, OECD, UNESCO). The common ground is clear: transparency, accountability, human oversight, safety, security, fairness (including bias checks), privacy, and continuous improvement. Companies that align with global anchors, showing how AI can act safely, will move faster, face fewer surprises, and stand out. Are your teams ready to show their work on AI safety and incidents?

𝗙𝗮𝗶𝘁𝗵 𝗿𝗲𝗰𝗼𝘃𝗲𝗿𝘀 𝘁𝗵𝗲 𝗪𝗛𝗬
Multi-faith voices remind us ethics is not just compliance, it is conscience. Fears of an “impending robotocracy” and AI substituting human intelligence must be offset with transparency, privacy, accountability and spiritual connection. Faith calls us to design for people first. Ask: does this tool respect people, keep humans in charge, and help communities flourish? Then prove it. How can AI cultivate virtue, not just efficiency?

𝗡𝗲𝘄: 𝗡𝗼𝘁𝗿𝗲 𝗗𝗮𝗺𝗲’𝘀 𝗗𝗘𝗟𝗧𝗔 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸
At the Notre Dame Summit on AI, Faith, and Human Flourishing (Sept 22–25), leaders launched DELTA: 𝗗𝗶𝗴𝗻𝗶𝘁𝘆, 𝗘𝗺𝗯𝗼𝗱𝗶𝗺𝗲𝗻𝘁, 𝗟𝗼𝘃𝗲, 𝗧𝗿𝗮𝗻𝘀𝗰𝗲𝗻𝗱𝗲𝗻𝗰𝗲, 𝗔𝗴𝗲𝗻𝗰𝘆. It is a practical lens any institution can use alongside NIST and ISO to judge whether an AI use helps people flourish.

𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗽𝗹𝗮𝗻 𝗳𝗼𝗿 𝗲𝘅𝗲𝗰𝘀 𝗮𝗻𝗱 𝗶𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹𝘀 (𝘁𝗵𝗶𝘀 𝗾𝘂𝗮𝗿𝘁𝗲𝗿)
• 𝗣𝗼𝗹𝗶𝗰𝘆: publish a clear AI policy, tool inventory, and data rules
• 𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁: run an AI review cadence with incident reports and tests
• 𝗖𝗵𝗲𝗰𝗸𝘀: test your AI for risks; keep a one-pager of what it does, how to use it safely, and when to hand it to a human
• 𝗠𝗲𝘁𝗿𝗶𝗰𝘀: track time saved, quality lift, bias reduced, virtue, and incidents
• 𝗗𝗶𝗴𝗻𝗶𝘁𝘆 𝗰𝗵𝗲𝗰𝗸: use DELTA questions where people are directly affected

𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲
Ethics is not an add-on. It is how you earn trust, scale outcomes, and sleep at night. SB53 sets a floor. Your mission and values set the ceiling.

Comment 𝗣𝗢𝗟𝗜𝗖𝗬 for a starter AI policy prompt that aligns to SB53 and NIST/ISO.

#AI #Ethics #Governance #Trust #HumanCenteredAI #HumanAICollaboration #AILiteracy
-
AI ethics isn’t just about compliance frameworks — it’s about process. The way work is designed — how data is handled, how decisions are reviewed, how tasks flow between people and AI — determines whether your systems are trustworthy. Compliance sets the floor. But your processes? They set the ceiling. Curious — do your workflows today make AI easier to trust, or harder?
-
As a coder working in the outpatient space to improve provider documentation, shutting coders out of the CDI space because “coders aren’t clinical” is a HUGE mistake. I am certified as a clinical documentation expert, a risk adjustment coder, AND a professional coder by my licensing body, and I hold a degree in Healthcare Management. This means I am versed in quality, HCC coding, E/M documentation, and the ever-changing policies surrounding compliance and coding/documentation. It’s my job to navigate all of that for my network providers and to teach them the easiest ways to tweak their workflows so they remain compliant, support what they bill, and still keep their clinics running, not to pester them with never-ending queries. Give them real-time, meaningful feedback that makes a difference, with helpful tools that actually matter.