Is Our Defense Sector as Safe as We Need It To Be?
What’s this about: The U.S. National Security Commission on Artificial Intelligence has released its Final Report, which presents conclusions, observations, and recommendations across 16 highly detailed chapters. One of the most striking points in the report is its admission that the U.S. government is not prepared to defend the nation in the artificial intelligence (AI) era.
The U.S. National Security Commission on Artificial Intelligence (NSCAI) has recently released its 16-chapter Final Report to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” One of the major topics covered in the report is the use of artificial intelligence as a weapon against the nation.
Weaponized AI is one of the greatest challenges the U.S. defense sector faces, and it is one that the country is not yet sufficiently prepared to deal with. Because of this, the defense sector must begin to funnel resources into the area, or else it risks falling behind other leading nations like China and Russia.
Identifying New and Existing Threats
Artificial intelligence is not only creating entirely new classes of threats, but it is also completely transforming and advancing existing ones. These new threats can be especially harmful to society if the right protections are not in place.
There are two existing national threats that AI could worsen:
Digital dependence: Because society is becoming increasingly dependent on digital devices, and because there are massive amounts of available data in circulation, AI increases the risk of cyber intrusion virtually everywhere. Intrusions can be especially damaging to companies, universities, private organizations, the public sector, and even citizens’ homes. Many of the vulnerable sensors and data sources sit in Internet of Things (IoT) devices, vehicles, social media platforms, and many other places.
Alternative attacks: Direct military confrontation is becoming less common among adversaries as cyber attacks, espionage, and financial tools grow in importance. These types of attacks, which can include spreading misinformation on social media platforms, do not necessarily need AI to work, but the technology can drastically improve their efficiency and impact. They also threaten many aspects of U.S. life, such as the economy, infrastructure, and social stability. For example, the head of U.S. Cyber Command testified in March 2021 that the organization had conducted over two dozen operations aimed at confronting foreign threats ahead of the 2020 U.S. elections.
Weaponized AI Systems
AI is already being weaponized in some way by nearly every capable nation, and this will only accelerate as the technology advances. These systems enable many different kinds of attacks, none of which require any human-to-human contact.
Here are some more AI threats currently being developed and weaponized:
Misinformation campaigns: One of the major threats exacerbated by AI is the spread of misinformation. AI will take information campaigns to the next level, as it is capable of sending millions of individualized messages by processing data about a person’s digital life. AI is also used to create deepfakes, synthetic human imagery generated by machine learning. Deepfakes are becoming extremely hard to distinguish from footage of real individuals, and they open up a Pandora’s box of potential misinformation.
Data harvesting: Bad actors can combine the massive amount of commercially available data with illicitly acquired data to advance their campaigns. This data can be used to track, manipulate, and coerce individuals in both the public and private sectors, and with near-daily data breaches, the risk of identity theft and other criminal activity, such as the theft of photos, email addresses, and contact lists, continues to increase. Back in 2018, Facebook became engulfed in its biggest crisis to date following the fallout surrounding the data firm Cambridge Analytica. Documents obtained by journalists showed that the firm had improperly harvested data extracted from the profiles of tens of millions of Facebook users, which it then used to develop and sell complex psychological profiles of American voters.
Cyber attacks: AI will continue to advance cyber attacks, for example by powering malware that mutates into many forms once inside a computer system to evade detection, known as polymorphic malware. That is just one form of attack among many that adversaries can use against the United States. In one of the most recent examples, the United States was subject to what Microsoft President Brad Smith called the “largest and most sophisticated attack the world has ever seen.” In late 2020, the world learned about a highly complex hacking campaign against SolarWinds, a major information technology firm. The compromise spread to SolarWinds’ clients, enabling foreign, state-sponsored actors to spy on thousands of companies and government offices, including the U.S. Treasury, Justice, and Commerce departments.
AI vs. AI: Artificial intelligence systems can be turned against other AI systems, a practice known as adversarial AI. There have already been documented attacks that use evasion, data poisoning, model replication, and the exploitation of software flaws to render other AI systems ineffective (a minimal evasion sketch follows this list). According to a recent survey of 28 organizations that included small- to medium-sized businesses, non-profits, and government organizations, only three responded that they have “the right tools in place to secure their ML systems.”
Biotechnology: Massive computing power combined with AI technologies can enable biotechnological creations that are damaging to society, such as engineered pathogens. These can be developed to target specific genetic profiles, and while this is a risk even in non-nefarious contexts such as research, the far greater danger comes from purposeful misuse. As the COVID-19 pandemic continues to impact all aspects of society, concern is rising over potential future bio attacks involving engineered pathogens.
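To make the evasion attacks mentioned in the AI vs. AI item concrete, here is a minimal sketch, assuming a toy setting rather than anything described in the report: a simple logistic-regression classifier is trained on synthetic data, and an FGSM-style (fast gradient sign method) perturbation flips its prediction on an input it previously classified correctly. The data, model, and perturbation budget are all hypothetical.

```python
# Hypothetical demo of an evasion attack; not drawn from the NSCAI report.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two Gaussian clusters, one per class.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Fit a simple logistic-regression model with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= 0.1 * X.T @ (p - y) / len(y)        # gradient step on the weights
    b -= 0.1 * np.mean(p - y)                # gradient step on the bias

def predict(x):
    """Return the predicted class (0 or 1) for a single input."""
    return int(x @ w + b > 0)

# FGSM-style evasion: nudge the input along the sign of the loss gradient,
# within a budget epsilon (deliberately large so the toy effect is visible).
x_clean = np.array([1.5, 1.5])                        # correctly classified as class 1
p_clean = 1.0 / (1.0 + np.exp(-(x_clean @ w + b)))
grad_x = (p_clean - 1.0) * w                          # d(cross-entropy)/dx for true label 1
epsilon = 2.5
x_adv = x_clean + epsilon * np.sign(grad_x)

print("clean prediction:      ", predict(x_clean))    # expected: 1
print("adversarial prediction:", predict(x_adv))      # expected: 0 (evasion succeeds)
```

The same basic recipe has been demonstrated against much larger image and malware classifiers, which is what makes the survey finding above so concerning.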
Proposed Solutions
The National Security Commission on Artificial Intelligence proposed many solutions to address this unpreparedness for future AI threats, starting with the recommendation to create a joint interagency task force and operations center. This would be a 24-hour task force equipped with advanced technologies that could help the U.S. counter issues like foreign-based misinformation campaigns.
Here is a look at some of the other proposed solutions:
Advanced research: Funding for the Defense Advanced Research Projects Agency (DARPA), specifically allocated to research programs on AI-enabled misinformation campaigns, would help detect such campaigns earlier.
Studying AI: A new task force could be established to study how AI and similar technologies are being used around the world. This would include developing standards for establishing authenticity.
Improving the security of AI systems: The U.S. government needs to improve the security of its own AI systems, including the commercial systems it relies on. Government databases should also be anonymized where possible to restrict access to personal data that adversaries could exploit (a pseudonymization sketch follows this list).
Developing AI-enabled defenses: National security agencies should train AI systems to detect and counter threats on their networks. Cyber defense systems that incorporate AI will be far more effective against large-scale attacks (see the anomaly-detection sketch after this list).
Establishing national AI assurance frameworks: Government agencies should develop adversarial machine learning (ML) threat frameworks that map potential attacks on AI systems and the corresponding defenses.
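On the anonymization point, here is a minimal sketch of what pseudonymizing a database record could look like, assuming a simple keyed-hash approach; the field names, key handling, and record are hypothetical and not drawn from the report.

```python
# Hypothetical pseudonymization sketch; field names and key handling are invented.
import hmac
import hashlib

# In practice the key would live in a secrets manager and be rotated, never hard-coded.
SECRET_KEY = b"example-only-rotate-me"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash, so records can
    still be joined across tables without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Analyst", "email": "jane@example.gov", "region": "northeast"}
safe_record = {
    "name": pseudonymize(record["name"]),     # identifiers are hashed
    "email": pseudonymize(record["email"]),
    "region": record["region"],               # non-identifying fields pass through
}
print(safe_record)
```

Keyed hashing is only one piece of anonymization, and techniques such as aggregation or differential privacy go further, but the core goal is the same: an adversary who obtains the data cannot recover the underlying identities.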
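And on AI-enabled defenses, here is a hedged sketch of the kind of network-facing detector this recommendation gestures at, assuming an unsupervised anomaly detector (scikit-learn's IsolationForest) run over synthetic traffic features; a real deployment would use curated telemetry and far richer features.

```python
# Hypothetical AI-enabled defense sketch: flag unusual network flows for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline flows: [bytes sent, packet count, distinct destination ports].
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),
    rng.normal(40, 10, 1_000),
    rng.poisson(3, 1_000),
])

# A handful of exfiltration-like flows: huge payloads, far more ports touched.
suspicious_flows = np.column_stack([
    rng.normal(90_000, 5_000, 10),
    rng.normal(600, 50, 10),
    rng.poisson(40, 10),
])

X = np.vstack([normal_flows, suspicious_flows])

# Train an unsupervised detector on the mixed traffic; `contamination` is the
# assumed fraction of anomalies the model should isolate.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)   # +1 = looks normal, -1 = flagged as anomalous

flagged = np.where(labels == -1)[0]
print(f"flagged {len(flagged)} of {len(X)} flows for analyst review")
```

The point of the sketch is the workflow, not the model: an AI layer triages enormous volumes of traffic so human defenders can focus on the small fraction that looks like an attack.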
All of this raises the question: Is our defense sector ready for the upcoming challenges of weaponized AI? As of right now, the answer to that question is no, and the nation risks quickly falling behind top adversaries.
Weaponized AI systems could be deployed on many fronts, whether through the data harvesting of citizens, cyber attacks, or bio-warfare, and they have the potential to seriously disrupt society. By identifying new and existing threats, establishing guidelines, proposing solutions, and increasing support in these areas, the defense sector can begin to close this gap.