US Voluntary Code of Conduct on Artificial Intelligence and Military Applications

At a meeting held at the White House on July 21, 2023, seven technology companies with significant artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic, and Meta, made voluntary commitments regarding AI regulation.

These eight commitments rest on the guiding principles of safety, security, and trust, and the code of conduct covers the areas and domains that artificial intelligence is likely to affect. The commitments are voluntary, non-binding, and unenforceable, but they could serve as the foundation for a future executive order on AI, which is crucial given the growing military use of AI.

Voluntary commitments on Artificial Intelligence

  1. Red-teaming (internal and external) of products before public release. Top priorities include biological, chemical, and radiological risks and the ways in which AI could lower barriers to entry for weapons design and development. The effects of systems that can interact with and control physical systems must be evaluated, alongside societal risks such as bias and discrimination;
  2. Sharing information among companies and with governments. This will be challenging, since the industry's entire model is built on secrecy and competition;
  3. Investing in cybersecurity and safeguards to protect unreleased and proprietary model weights;
  4. Incentivizing third-party discovery and reporting of issues and vulnerabilities;
  5. Watermarking AI-generated content;
  6. Publicly reporting model and system capabilities, including discussion of societal risks;
  7. According priority to research on the societal risks posed by AI systems; and
  8. Developing and deploying frontier AI systems to help address society's greatest challenges.

The eight commitments from US Big Tech companies came just a few days after the UN Security Council (UNSC) met for the first time to discuss the danger that AI poses to international peace and security. At that meeting, the UN Secretary-General (UNSG) suggested the creation of a global AI watchdog, staffed with subject-matter experts, to advise governments and administrative bodies.

The UNSG further stated that the UN must develop, by 2026, a legally binding agreement prohibiting the use of AI in autonomous weapons of war. The UNSC discussion can be seen as shifting the focus from the short-term threat of AI-driven disinformation and propaganda, handled bilaterally between governments and Big Tech companies, to a broader global concern with advances in AI and the need for common standards that are transparent, respect the privacy of individuals whose data is "scraped" en masse, and ensure strong cybersecurity.

The Threat Posed by Artificial Intelligence

Since it is unclear how the technology will actually affect society in the long run, US lawmakers have for some time been trying to slow the exponential advances in the AI field. Some have even compared AI to the atom bomb and dubbed the current stage of AI development the "Oppenheimer moment," after the physicist J. Robert Oppenheimer. Reactions to this supposed danger have been divided.

Oppenheimer oversaw the Manhattan Project to its successful completion and the test of the first atomic bomb. That moment marked the start of the first nuclear age, which continues to this day. The Oppenheimer moment thus serves as a boundary between the traditional past, the new present, and a presumably unknowable future.

P(doom) is a term coined by some academics, activists, and even members of the Big Tech community, known as "AI Doomers," in an effort to put a number on the likelihood that humanity will suffer great harm or go extinct as a result of "runaway superintelligence." Others cite variations of the "Paperclip Maximiser," in which humans give an AI a specific task to optimize; the AI interprets the task as maximizing the number of paperclips in the universe; and the AI then proceeds to consume all the planet's resources producing nothing but paperclips.

This thought experiment illustrates the risks of two AI-related problems: the "orthogonality thesis," under which a highly intelligent AI could interpret human goals in its own way and pursue tasks that hold no value for humans; and "instrumental convergence," under which an AI takes control of all matter and energy on the planet while ensuring that no one can stop it or change its goals.

Aside from these purported existential threats, the new wave of generative AI, which can lower and in some cases eliminate barriers to entry for content creation in text, image, audio, and video formats, can harm societies in the short to medium term. Generative AI could usher in the age of the "superhuman": the lone wolf who can attack government institutions at will from behind a keyboard.

Generative AI can produce vast amounts of disinformation when used by state actors, non-state actors, and motivated individuals. Until now, most antagonistic actors and institutions have found this difficult because, among other things, it is hard to zero in on particular faultlines within a country, write in the local dialect, and produce sufficiently realistic video. Disinformation as a service (DaaS) is now affordable and available at the user's fingertips, making the production and large-scale dissemination of disinformation simple. The voluntary commitments made by US Big Tech companies are therefore only the start of a regulatory process; enforceable national regulations, built on legally binding safeguards agreed to by UN members, are still needed.

AI applications in the Military

The use of AI in the military has been gaining ground gradually, and both sides of the Russia-Ukraine conflict have deployed increasingly effective AI systems. The Palantir Artificial Intelligence Platform (AIP) is a new offering from Palantir, a company that specializes in AI-based data fusion and surveillance services. Operating in a chatbot mode, it uses large language models (LLMs) and algorithms to identify and analyze adversary targets and serve up recommendations for neutralizing them.

Palantir's website states that the system will be deployed only across classified systems and will use both classified and unclassified data to create operating pictures, but no further information on the subject is available in the public domain. The company also assures on its website that "industry-leading guardrails" will prevent unauthorized actions.

Palantir's absence from the White House statement is significant, because it is one of the very few companies whose products are intended for substantial military use.

On July 19, 2023, Richard Moore, the head of the United Kingdom’s (UK) MI6, announced that his team was using AI and big data analysis to find and stop the flow of weapons to Russia.

Russian engineers are testing the Marker, an unmanned ground vehicle (UGV) designed to seek out and target Leopard and Abrams tanks on the battlefield. Despite having been tested in a variety of environments, including forests, the Marker has not been used in the ongoing conflict against Ukraine.

Ukraine has equipped its drones with primitive AI that performs simple edge processing to identify platforms such as tanks and passes only the pertinent information (coordinates and the nature of the platform), amounting to kilobytes of data, to a vast shooter network. Misidentification remains a challenge: other objects can be mistaken for such platforms, and singling out and identifying adversaries is extremely difficult. The Ukrainians have also used facial recognition software to identify the bodies of killed Russian soldiers for propaganda purposes.
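To make concrete what such a lightweight edge-processing handoff might look like, the sketch below defines a hypothetical detection report (platform type, coordinates, confidence, timestamp) and serializes it into a compact payload of well under a kilobyte. The DetectionReport fields, the encode_report helper, and the JSON wire format are illustrative assumptions, not a description of any fielded Ukrainian system.

```python
# Hypothetical sketch of the minimal detection report an edge-processing drone
# might forward to a shooter network. Field names and wire format are
# illustrative assumptions, not a description of any fielded system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DetectionReport:
    platform: str      # class label from the onboard model, e.g. "tank"
    latitude: float    # WGS-84 coordinates of the detection
    longitude: float
    confidence: float  # model confidence in [0, 1]
    timestamp: str     # UTC time of detection, ISO 8601


def encode_report(report: DetectionReport) -> bytes:
    """Serialize the report as compact JSON (well under a kilobyte)."""
    return json.dumps(asdict(report), separators=(",", ":")).encode("utf-8")


if __name__ == "__main__":
    report = DetectionReport(
        platform="tank",
        latitude=48.5132,
        longitude=35.8570,
        confidence=0.91,
        timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
    )
    payload = encode_report(report)
    print(f"{len(payload)} bytes: {payload.decode()}")
```

The point of the sketch is the bandwidth argument made above: once classification happens on the drone itself, only a few hundred bytes of structured data need to cross the network rather than a raw video feed.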

It is not out of the question that the same technology could be applied to drone-based targeted killings. The obvious problem is systemic discrimination and bias that creeps into the AI model despite the best efforts of data scientists and may result in the unintentional killing of civilians. Similarly, spoofing the voice and text messages of senior commanders could result in formations receiving fictitious and lethal orders. By contrast, the UK-led Future Combat Air System (FCAS) Tempest program envisions a fully autonomous fighter with AI integrated into both design and development (D&D) and the identification and targeting phases of operations. At best, the human will be in the loop.

Conclusion

The application of AI in the military is a byproduct of the technological advances sweeping Silicon Valley. Slowing the development of AI will therefore require more than self-censorship; it will require regulation, to ensure that the modern battlefield, already crowded with lethal and precision-based weapons, is not made worse by these technologies.

 
