From the course: Building AI Literacy and Fluency with Microsoft

Deepfakes and copyright in AI

Imagine you're scrolling through your mobile phone and you come across a video of a famous person saying something controversial. You're taken aback, but then you remember it could be a deepfake. A deepfake is a fraudulent piece of content, typically audio or video, that has been manipulated or created using artificial intelligence. Deepfakes use advanced AI techniques to replace a real person's voice, image, or both with eerily similar artificial likenesses. This technology has made it increasingly difficult to discern whether something you see or hear on the internet is real. Deepfakes are frequently used to spread disinformation and appear in scams, election manipulation, social engineering attacks, and other kinds of fraud.

Recognizing the urgent need to combat deepfakes, the tech sector, including AI model creators and consumer service providers like Microsoft, is taking action. These efforts have evolved over time, moving from anti-counterfeiting features to more advanced digital protection technologies. Here's what the tech sector is doing.

1. Building a safe environment. Safety measures are being put in place to make sure everything runs smoothly and safely. This includes constant checkups, blocking bad behavior, and quick action against those who misuse the system.

2. Content credentials. To fight fake videos, images, and audio, special marks or icons are being added to AI-created content. This helps determine where the content came from and its history.

3. Keeping services safe. Efforts are being made to spot and remove harmful and misleading content from online platforms. This helps keep the online space safe and respectful for everyone.

4. Working together.
In the same way that collaboration is key to achieving shared goals, when the tech industry, organizations dedicated to societal welfare, and government entities come together, they can collectively create a more secure online environment. This unified approach can lead to innovative solutions and stronger safeguards for everyone in the digital space.

5. Updating laws for new challenges. As new challenges arise, new laws and initiatives are being developed to protect people from misuse.

6. Educating the public. It's important for everyone to be well informed. Initiatives are underway to help people discern authentic content from fake content, including new tools and educational programs for the public.

These strategies aim to increase transparency and help society become more resilient against deepfakes. While deepfakes pose ethical and security concerns because of their potential misuse in spreading misinformation and impersonation, AI technology is also making strides in detecting deepfakes with high accuracy.

This leads us to another significant topic: the rise and awareness of AI-generated content. Copyright is a legal concept that grants creators of original works exclusive rights to their use and distribution, ensuring they receive recognition and financial benefit from their creations. But what happens when the creator is an AI service or tool? Because copyright is an important issue for AI-generated content, initiatives such as the Microsoft Copilot Copyright Commitment were created to extend existing intellectual property protections to commercial Copilot services. This commitment addresses potential IP infringement liability that could arise from using the output of Microsoft's Copilots and the Azure OpenAI Service. To further build trust in AI-generated content, Microsoft has developed content credentials.
This feature uses cryptographic methods to add an invisible digital watermark to all AI-generated images in Bing, including the time and date the image was originally created. This helps determine where content comes from and gives those who create and share content better tools to decide what to trust. Taking this step is key to using AI technology safely and responsibly. It's important to stay informed: not everything you see or hear is the truth, so when it comes to AI-generated content, know your rights and the measures in place to protect them.
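To make the idea of content credentials concrete, here is a minimal sketch of how a provenance manifest could work: hash the content, record a creation timestamp, and sign both so that tampering is detectable. This is an illustration only, not Microsoft's actual implementation (real systems such as C2PA content credentials use certificate-based signatures and embed the manifest in the file itself); the key, field names, and generator name below are all assumptions for the example.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key -- in a real system this would be an
# asymmetric private key held by the content-credential issuer.
SIGNING_KEY = b"demo-secret-key"

def make_credential(content: bytes) -> dict:
    """Build a simple provenance manifest: a hash of the content,
    a creation timestamp, and a signature over both."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
        "generator": "example-ai-image-service",  # assumed field/value
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the content matches the manifest."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
    hash_ok = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

image = b"...image bytes..."
cred = make_credential(image)
assert verify_credential(image, cred)          # untampered content verifies
assert not verify_credential(b"edited", cred)  # altered content fails
```

The point of the design is that anyone holding the content and its manifest can check both where it claims to come from and whether it has been modified since the credential was issued, which is exactly the trust question deepfakes raise.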
