Machine-altered media and deception: Establishing principles for the responsible and legitimate use of manipulated digital content
In the digital age, deepfakes have become a significant concern for national security, and the U.S. military is taking steps to address this emerging threat.
In 2020, researchers at MIT released a deepfake speech by President Richard Nixon, demonstrating the potential of manipulated media to deceive online audiences. The demonstration served as a wake-up call, highlighting the need for the military to address deepfakes in its policies and regulations.
The U.S. military's interest in deepfakes is well documented, and the Department of Defense (DoD) is actively exploring the technology. Any potential advantages, however, must be weighed carefully against the harm deepfakes can cause.
The National Defense Authorization Act (NDAA) for fiscal year 2024 addresses deepfake technology specifically in cybersecurity and military contexts. The NDAA formalizes the National Institute of Standards and Technology (NIST) AI Risk Management Framework for handling AI risks, and it includes provisions for a national deepfake detection program, the criminalization of malicious deepfake use, and election security measures.
The DoD is also implementing AI adoption and control policies as part of the White House's America’s AI Action Plan. This involves talent development, prioritizing AI workflows, advancing interpretability and robustness of AI systems, and supporting technology development through agencies like DARPA.
The White House and DoD are also considering legal frameworks to combat deepfake-enabled influence operations, including formal forensic guidelines and standards for handling synthetic media in legal and military adjudications. Laws like the Take It Down Act criminalize the non-consensual creation and dissemination of intimate deepfake imagery but focus primarily on civilian protections.
Foreign-made deepfakes pose a significant threat to democracy and will likely complicate future U.S. elections. For instance, Russian hackers established a pro-Kremlin website in 2022 to flood the information environment with deceptive news and opinions, using AI-generated profile photographs. Similarly, pro-China actors circulated a series of videos in February 2023 under the banner of the so-called Wolf News network, purporting to show American news anchors praising China's contributions to geopolitical stability.
The U.S. Special Operations Command (USSOCOM) is the joint proponent and coordinating authority for internet-based information operations, and its research, development, and testing efforts include the pursuit of next-generation influence capabilities. It remains unclear, however, whether U.S. government agencies are developing deepfake technology for future operational use, or whether senior leaders fully grasp the legal and policy issues associated with using generative AI to influence foreign audiences.
Deepfakes could significantly affect the conduct of military operations, as Eric Talbot Jensen and Summer Crockett explained in their 2020 article for the Lieber Institute for Law and Warfare. At the same time, deepfakes could offer real utility to military operations, as Major John C. Tramazzo, an active-duty Army judge advocate and military professor at the U.S. Naval War College's Stockton Center for International Law, has discussed.
The U.S. House Intelligence Committee held a hearing in June 2019 on the dangers posed by deepfakes and other generative AI applications. Actor Tom Hanks has expressed openness to future filmmakers using deepfake video and audio of his image, likeness, and voice after his death. And in June 2023, the social media platform Douyin suspended the account of a purported Russian soldier after his followers realized the persona was a deepfake controlled by a Chinese user.
In conclusion, the U.S. military policy environment is evolving with a combination of legislative criminalization of harmful deepfake uses, development of detection and mitigation programs, and formal adoption of AI regulatory standards aimed at reducing risks in influence and deception operations. The approach integrates legal, technological, and workforce readiness measures to address the emerging threat deepfakes pose to military operations and national security.
- The concern over deepfakes extends beyond national security, as they can potentially influence various aspects of life, from industry and finance to fashion and beauty, food and drink, and personal growth.
- In the realm of finance, cybersecurity becomes paramount because deepfakes can be used to authorize fraudulent transactions, for example by impersonating executives on video or voice calls, leading to serious financial losses.
- The lifestyle industry must be vigilant against deepfakes in advertising, ensuring authenticity and honesty to maintain trust with consumers.
- Deepfakes could impact the fashion and beauty industry by allowing for the creation of virtual models, which raises questions about diversity and representation.
- The food and drink sector might see an increase in the use of deepfakes to deceive customers about the authenticity of food products or the origins of ingredients.
- Investors need to be aware of deepfaked financial communications, such as fabricated executive statements or doctored reports, since manipulated information can lead to major losses in the stock market.
- The home and garden industry may face challenges related to deepfakes in product reviews, leading to the sale of substandard goods.
- Business owners must prioritize responsible practices in data and cloud computing to minimize the risk of falling victim to deepfake attacks.
- The use of deepfakes in e-commerce can lead to deceptive practices, such as concealing product flaws or using fake testimonials.
- The technology sector is investing heavily in deepfake detection and mitigation technologies to protect users and businesses alike.
- Artificial intelligence has the potential to revolutionize sectors like education, self-development, and personal growth by offering personalized learning experiences and virtual mentors.
- In the realm of entertainment, deepfakes can lead to ethical debates, such as whether it is acceptable to use deepfake technology to resurrect deceased performers.
- Deepfakes have the potential to significantly impact politics, as shown by their use in manipulating public opinion in various elections.
- The casino and gambling industry must address the potential abuse of deepfakes in gaming, for example to spoof identity and age verification, which could enable cheating and unfair practices.
- In sports, deepfaked footage or statements could be used to misrepresent performance, creating unfair advantages for teams and athletes.
- Sports betting platforms must take measures to prevent deepfakes from spreading false information that skews odds or perceived game results, as such manipulation could lead to significant financial losses.
- Deepfakes could pose a threat to social media platforms, creating a consensus crisis in which it becomes difficult to distinguish truth from fiction.
- The movie and TV industry may face challenges in ensuring authenticity in documentaries and historical dramas when using deepfake technology.
- Casinos in Las Vegas will need to adapt to a changing gambling landscape as online casinos and AI-generated gaming content reshape the market.
- Future gambling trends will likely include a focus on responsible gambling practices, as well as the development of AI-powered games that offer more immersive and personalized user experiences.
- Celebrities and public figures should be aware of the risks associated with deepfakes, as they can become targets for online harassment and manipulation.
- Sports teams and athletes might need to invest in deepfake detection technologies to protect their data and reputation against potential attacks.
- The rise of deepfakes has sparked debates within the legal community about the ethics of using such technology for personal gain or deception.
- Researchers and technologists should work collaboratively to create and refine deepfake detection algorithms, helping to combat the misuse of synthetic media; a minimal sketch of such a detector appears after this list.
- The widespread use of deepfakes highlights the need for an ongoing dialogue about responsible practices across industries and contexts.
- Sports leagues and organizations should implement ethical guidelines for using deepfake technology in sports analysis and projections.
- In the realm of mixed martial arts, deepfakes could potentially be used to spread false rumors or exaggerate a fighter's abilities, influencing betting odds and public perceptions.
- The weather and forecasting industry might encounter challenges from deepfakes that fabricate forecasts or footage of extreme weather events, potentially leading to serious public-safety and environmental consequences.
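
To make the detection work mentioned above concrete, the snippet below is a minimal sketch of how a binary real-versus-synthetic image classifier might be trained. It assumes PyTorch and torchvision are available; the `data/real` and `data/fake` directory layout, the ResNet-18 backbone, and all hyperparameters are illustrative assumptions, not the implementation of any particular government or industry program.

```python
# Minimal sketch: train a binary "real vs. synthetic" image classifier.
# Assumes a hypothetical dataset laid out as data/real/*.jpg and data/fake/*.jpg;
# the paths, backbone, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing so the pretrained backbone sees familiar inputs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder maps each subdirectory (e.g. "fake", "real") to a class label.
dataset = datasets.ImageFolder("data", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet-18 and replace its final layer with a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training pass over the data; a usable detector would need far
# more data, validation against unseen generators, and periodic retraining.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Detectors trained this way tend to generalize poorly to generators they have not seen, which is one reason the policy measures described above emphasize ongoing research, evaluation standards, and continual retraining rather than reliance on any single model.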