No guarantee that AI-driven intel would ensure better security

It’s not so much the lack of data that stifles final decision-making but human failure in arriving at the correct decisions.


Roadblock: There are difficulties in the adoption of AI for strategic conclusions and decision-making. iStock



Vappala Balachandran

Former Special Secretary, Cabinet Secretariat

THERE is high hope that artificial intelligence (AI) will improve the entire gamut of intelligence collection and interpretation, leading to prompt and correct decision-making. The present generation might not remember that there was a similar buzz in 1998-99, when then CIA Director George Tenet set up ‘In-Q-Tel’, a hybrid model combining private-sector venture capital with government technology procurement. Its original name was ‘Peleus’; it was changed to ‘In-Q-Tel’ as a nod to ‘Q’ of the James Bond movies.

This was to meet the difficulties in organising and sorting ‘unstructured data’ that was overwhelming American intelligence agencies. Investigative journalist Seymour Hersh exposed this problem facing the National Security Agency (NSA) in his article ‘The Intelligence Gap’ published in The New Yorker on November 28, 1999. In-Q-Tel was also aimed at helping the CIA hunt for Osama bin Laden, for which the CIA had created a special division to collate and synthesise intelligence data.

However, no reports appeared in the public domain on how In-Q-Tel had aided this hunt before 9/11. On the other hand, Computer Network said on April 20, 2002, that “applications for In-Q-Tel funding have skyrocketed from about 700 during the operation’s first two-and-a-half years of existence to more than 1,000 in the last six months” after the September 11 terrorist attacks.

Against this background, we need to study a November 2023 Stanford University paper quoting intelligence expert Amy Zegart, formerly with the US National Security Council, as saying that AI could be “incredibly useful for augmenting the abilities of humans… from large amounts of data that humans can’t connect as readily”. For example, the man-hours spent tracking Chinese surface-to-air missiles by scrutinising hundreds of satellite images could be saved by an AI algorithm, freeing analysts to do deep thinking on Chinese intentions.

This is because various intelligence agencies are now facing what she describes as the ‘five mores’ (challenges): ‘more threats’ from actors irrespective of geography; ‘more data’ that is ‘drowning’ their analysts; ‘more speed’; ‘more decision-makers’; and ‘more competition’. These, she argues, leave the US vulnerable across the globe.

The third, fourth and fifth points need elaboration, as some of us are not used to the American style of opinion-forming for decision-making. On ‘more speed’, Zegart says that in 1962, during the Cuban missile crisis, then US President John F Kennedy had 13 days to deliberate on policy options after discovering Soviet missiles in Cuba. In 2001, then President George W Bush had to do that within 13 hours after 9/11. Today, the decision time could be 13 minutes or less.

The fourth ‘more’ is that decision-making power in the US is not concentrated in the White House alone. Decisions are altered by Congress, while the media and 302 million social media users swing opinion formation, unlike in other countries. The fifth is ‘more competition’: anyone with a cell phone can be an intelligence collector. Last year, France 24 reported that Mnemonic, a Berlin-based NGO documenting human rights abuses in Ukraine, had collected three million digital records since the Russian invasion.

Zegart also underlines the difficulties in adopting AI for strategic conclusions and decision-making. First, only a handful of large private corporations are capable of building ‘frontier models’; when these are pressed into governance, the question arises as to who will control their security. The second question: who will mitigate their risks? The third is ethical control. She wants American academics and others to ask ‘tough questions’ about human-centred AI in national security. Would we be able to do this in India? The fourth is a risk that bears directly on the final analytical capability of AI: “If you consider nuclear or financial catastrophe, how do we mitigate those risks? AI is very good at following the rules. Humans are really good at violating rules”.

To this, I would add another dimension on the role of AI in decision-making in national security situations. As someone who has studied several cases of the so-called ‘intelligence failure’, I have found that it is not so much the lack of data that stifles final decision-making but human failure in arriving at the correct decisions. How will AI remedy that?

A 1974 study by the Strategic Studies Institute of the Army War College, Pennsylvania, on the 1941 Pearl Harbour attack — which killed around 2,400 Americans and destroyed or damaged eight battleships, three cruisers and 188 aircraft — found that decision-makers had nine prior indicators that, if considered seriously, could have led to preventive measures. In the 1973 Yom Kippur War (first phase), the Agranat Commission found several advance indicators that were not considered by then Israeli Prime Minister Golda Meir. The New York Times (December 1, 2023) made the same observation about the October 7 Hamas attack.

On October 23, 1983, vehicle bombings killed 241 American and 58 French military personnel in Beirut (Lebanon). It was treated as an intelligence failure till 2001, when a 1983 alert by the US NSA emerged in the US District Court for the District of Columbia during a civil damages suit, linking Iran with the bombing and mentioning Ali Akbar Mohtashamipour, then Iranian Ambassador to Syria.

In the 1999 Kargil War, our Army think tank, the Centre for Land Warfare Studies, found that between June 1998 and May 1999, the Army, Intelligence Bureau and the Research and Analysis Wing had issued 43 alerts over Pakistani intentions. However, the National Security Council, which was set up on November 19, 1998, could meet only on June 8, 1999, a month after the incursion was formally noticed.

In the case of the 9/11 attacks, the US National Commission chastised American decision-makers for not taking cognisance of prior indicators. Similarly, 16 prior intelligence alerts before the 26/11 terror attacks did not spur the Maharashtra Government to ensure foolproof coastal patrolling.

In such circumstances, where is the guarantee that AI-driven intelligence products will result in better security management?

Views are personal
