Critics argue that U.S. firms like Anthropic and OpenAI defend their own sprawling data collection under the banner of fair use while pushing for aggressive enforcement against foreign competitors. However, the very models (like Claude) that the Chinese companies are "distilling" were built using data that Anthropic did not have the rights to use in the first place. While the argument that American AI companies are simply "doing the same thing" serves as a powerful rhetorical counter-punch, it ultimately fails to provide a logical justification for the actions of others. This line of reasoning is a classic example of the tu quoque fallacy—Latin for "you too"—which attempts to discredit a claim by accusing the speaker of hypocrisy rather than addressing the actual merits of their argument. https://lnkd.in/gzeqqh6k
US Firms' Data Collection Practices Under Fire Amidst China AI Rivalry
-
-
This is an important question — not just legally, but structurally. When restrictions start to look punitive, it often means we’re reacting to capability before we’ve defined what valid use actually looks like. That’s the gap showing up across AI right now. Systems are becoming powerful enough to trigger intervention… but we still don’t have a clear way to determine when their outputs should be allowed to act under real conditions. Until that’s defined, governance will continue to look inconsistent — not because it’s wrong, but because the boundary itself isn’t fully established.
-
Fascinating and flawed. Bernie and Claude agree that 1) regulating AI is essential, but seem to think 2) it's not possible because of the lobbying power of the AI companies, yet 3) a moratorium on data centers is the way to go (as if the companies wouldn't lobby that away too).
-
🚨 The state of AI idolatry: people are diluting the meaning of 'consciousness' to fit the technical limits of what an LLM can do. My article below:
-
From a former senior advisor for AI policy to the second Trump administration: “No matter what world we build, the limitations imposed in the law on what we know today as “the government’s” use of AI will be of paramount importance. We really do want to ensure that mass surveillance and autonomous weapons/systems of control cannot be used to curtail our liberties—at least we want to try. So despite not being the focus of this piece, I applaud the AI labs for caring about these redlines. Over the coming years and decades, I expect that our liberty will be in greater peril than many of us comprehend.”
Former CDC Director & North Carolina Secretary of HHS | Health System Transformer | Driving Scalable, Sustainable Health Impact | National Advisor @Manatt Health
Wow - this is a searing and devastating piece by Dean Ball - former Senior Policy Advisor for AI in the Trump 2 White House and primary drafter of America’s AI Action Plan. A must read - dissecting the implications of the United States Department of War actions against Anthropic. https://lnkd.in/eTW6iUzZ
-
The Shadow Lexicon: Inside the Hidden Language of Modern AI
Beneath the fluent text of modern chatbots lies a profound mystery: machines don't actually understand our words. Instead, they have quietly forged a secret mathematical language of their own. https://lnkd.in/dnjNtVBS
-
The line between "lawful" and "ethical" AI is blurring at the highest levels, creating a new battleground. We distilled 13 hours of intelligence from top experts, including The New York Times and Google DeepMind, so you don't have to. Understand the Pentagon's AI gambit and its implications for deployment reality. Discover the big shift: https://is.gd/2Xkg2k #AIethics #TechPolicy #BusinessIntelligence #FutureOfAI
-
Interesting, and I thought this bit was the most important: We need a political, not a technical, solution. Corporate courage from companies like Anthropic is commendable and sets a good precedent, but it won't solve the long-term problem. Within a year or two, open-source AI models will catch up, allowing the government to bypass corporate redlines entirely. The only way to prevent an AI-powered authoritarian state is to establish strict political laws and democratic norms against the state using AI for censorship and mass surveillance. https://lnkd.in/e_nE4P_D
The most important question nobody's asking about AI
dwarkesh.com
-
The battle to be name-dropped in AI summaries might be making the internet more human. Now that the best way to win an AI’s trust is to have a long history of being cited by actual people, it is no longer about keywords; it is about word-of-mouth. Essentially, we must strive to make our websites as quotable to bots as William Shakespeare, Mark Twain, and Oscar Wilde are to humans. Ironically, the best way to please the engines is to be as fleshy and bloody as possible. Link in comments.