Posted by newadmin on 2025-02-03 09:05:00
India’s Defence Ministry recently conducted a pilot study on Lethal Autonomous Weapons Systems (LAWS), carried out in collaboration with the Manohar Parrikar Institute for Defence Studies and Analyses. The study highlights the transformative potential of these technologies while raising critical concerns about control, accountability, and ethical deployment.
Artificial Intelligence plays a crucial role in modern military strategies by enabling systems to operate without human intervention. The Defence Ministry considers AI essential for maintaining strategic autonomy, recognizing that technological superiority is a key factor in military advantage. However, India’s defence manufacturers are still in the early stages of integrating AI into military platforms. The complexity of autonomous systems presents significant challenges, and international export controls on AI components further complicate development. To overcome these hurdles, India must develop sovereign capabilities in critical technologies.
Globally, more than 50 countries are formulating AI strategies for defence, including major powers and allies such as Germany, Japan, and South Korea. The integration of AI into military operations is becoming a competitive necessity, driven by shifting geopolitical landscapes and growing competition for critical AI resources. In response, India has taken several strategic initiatives. The establishment of an AI task force in 2018 was followed by the creation of the Defence AI Council and the Defence AI Project Agency, bodies that identify priority areas for AI development in the defence sector. Currently, 75 key areas have been outlined, and the armed forces are collaborating with the Innovations for Defence Excellence (iDEX) initiative to accelerate AI integration.
India has also advocated for the responsible use of AI in military applications. At the United Nations, India has called for discussions on LAWS through a Group of Governmental Experts. Although it abstained from a 2024 UN General Assembly resolution on LAWS, India remains engaged in global conversations about responsible AI use in defence, ensuring its approach aligns with international humanitarian law.
To establish a clear framework for AI deployment, the Defence Ministry has adopted a set of principles for evaluating trustworthy AI. These principles—reliability, transparency, fairness, privacy, and safety—ensure that AI applications in defence adhere to ethical and humanitarian considerations. By taking a balanced approach, India aims to leverage AI’s military potential while upholding its commitment to responsible and lawful deployment.