Hackers Could Decrypt Your GSM Phone Calls

Researchers have discovered a flaw in the GSM standard used by AT&T and T-Mobile that would allow hackers to listen in.

Most mobile calls around the world are made over the Global System for Mobile Communications standard; in the US, GSM underpins any call made over AT&T or T-Mobile networks. But at the DefCon security conference in Las Vegas on Saturday, researchers from BlackBerry are presenting an attack that can intercept GSM calls as they’re transmitted over the air and then decrypt them to listen back to what was said. What’s more, this vulnerability has been around for decades.

Regular GSM calls aren’t fully end-to-end encrypted for maximum protection, but they are encrypted at many steps along their path, so random people can’t just tune into phone calls over the air like radio stations. The researchers found, though, that they can target the encryption algorithms used to protect calls and listen in on basically anything.

“GSM is a well-documented and analyzed standard, but it’s an aging standard and it’s had a pretty typical cybersecurity journey,” says Campbell Murray, the global head of delivery for BlackBerry Cybersecurity. “The weaknesses we found are in any GSM implementation up to 5G. Regardless of which GSM implementation you’re using there is a flaw historically created and engineered that you’re exposing.”

The problem is in the encryption key exchange that establishes a secure connection between a phone and a nearby cell tower every time you initiate a call. This exchange gives both your device and the tower the keys to unlock the data that is about to be encrypted. In analyzing this interaction, the researchers realized that the way the GSM documentation is written, there are flaws in the error control mechanisms governing how the keys are encoded. This makes the keys vulnerable to a cracking attack.


As a result, a hacker could set up equipment to intercept call connections in a given area, capture the key exchanges between phones and cellular base stations, digitally record the calls in their unintelligible, encrypted form, crack the keys, and then use them to decrypt the calls. The findings analyze two of GSM’s proprietary cryptographic algorithms that are widely used in call encryption—A5/1 and A5/3. The researchers found that they can crack the keys in most implementations of A5/1 within about an hour. For A5/3 the attack is theoretically possible, but it would take many years to actually crack the keys.
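For the curious, A5/1's published structure – three short linear-feedback shift registers advanced by majority clocking – is compact enough to sketch in a few dozen lines of Python. The sketch below is for study only and is not the researchers' attack code; it simply shows the cipher whose small 64-bit state made precomputation and cracking attacks practical. Register lengths, taps and clocking bits follow the public description of A5/1; the key and frame values in the usage line are arbitrary.

    # A5/1 keystream generator: three LFSRs (19, 22, 23 bits) with majority clocking.
    R_LEN   = (19, 22, 23)                                      # register sizes
    TAPS    = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))     # feedback taps
    CLK_BIT = (8, 10, 10)                                       # majority-clocking bits
    MSB     = (18, 21, 22)                                      # output bit per register

    def clock_reg(reg, length, taps, in_bit=0):
        """Shift one register left, feeding back the XOR of its taps and in_bit."""
        fb = in_bit
        for t in taps:
            fb ^= (reg >> t) & 1
        return ((reg << 1) | fb) & ((1 << length) - 1)

    class A51:
        def __init__(self, key64, frame22):
            self.R = [0, 0, 0]
            for i in range(64):                  # mix in the 64-bit session key
                self._clock_all((key64 >> i) & 1)
            for i in range(22):                  # then the 22-bit frame number
                self._clock_all((frame22 >> i) & 1)
            for _ in range(100):                 # 100 warm-up steps, output discarded
                self._step()

        def _clock_all(self, in_bit):
            """Regular clocking, used only during initialisation."""
            for j in range(3):
                self.R[j] = clock_reg(self.R[j], R_LEN[j], TAPS[j], in_bit)

        def _step(self):
            """Majority-clocked step: only registers agreeing with the majority move."""
            bits = [(self.R[j] >> CLK_BIT[j]) & 1 for j in range(3)]
            maj = (bits[0] & bits[1]) | (bits[0] & bits[2]) | (bits[1] & bits[2])
            for j in range(3):
                if bits[j] == maj:
                    self.R[j] = clock_reg(self.R[j], R_LEN[j], TAPS[j])
            return ((self.R[0] >> MSB[0]) ^ (self.R[1] >> MSB[1]) ^ (self.R[2] >> MSB[2])) & 1

        def keystream(self, nbits=228):
            """A5/1 produces 228 bits per GSM frame: 114 downlink, 114 uplink."""
            return [self._step() for _ in range(nbits)]

    # Usage: the keystream XORed with the ciphertext recovers the plaintext burst.
    bits = A51(key64=0x0123456789ABCDEF, frame22=0x2F1).keystream()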

“We spent a lot of time looking at the standards and reading the implementations and reverse engineering what the key exchange process looks like,” Murray says. “You can see how people believed that this was a good solution. It’s a really good example of how the intention is there to create security, but the security engineering process behind that implementation failed.”

The researchers emphasize that because GSM is such an old and thoroughly analyzed standard, there are already other known attacks against it that are easier to carry out in practice, like using malicious base stations, often called stingrays, to intercept calls or track a cell phone’s location. Additional research into the A5 family of ciphers over the years has turned up other flaws as well. And there are ways to configure the key exchange encryption that would make it more difficult for attackers to crack the keys. But Murray adds that the theoretical risk always remains.

Short of totally overhauling the GSM encryption scheme, which seems unlikely, the documentation for implementing A5/1 and A5/3 could be revised to make key interception and cracking attacks even more impractical. The researchers say that they are in the early phases of discussing the work with the standards body GSMA.

The trade association said in a statement to WIRED: “Details have not been submitted to the GSMA under our coordinated vulnerability programme. When the technical details are known to the GSMA’s Fraud and Security Group we will be better placed to consider the implications and the necessary mitigation actions.”

Though it may not be that surprising at this point that GSM has security issues, it’s still the cellular protocol used by the vast majority of the world. And as long as it’s around, real call privacy issues remain too.

Source | https://www.wired.com/story/gsm-decrypt-calls/?verso=true

RM67.6 million lost to cyber crimes in Q1 2019

LABUAN: Cyber crimes involving losses of RM67.6 million in 2,207 cases were reported in the first three months of this year, according to a senior officer of the Communications and Multimedia Ministry (KKMM) today.

Its deputy secretary-general (policy), Shakib Ahmad Shakir, said the ministry and agencies under it were concerned over the large amounts of money lost through such scams.

The three most common types of cyber crimes were cheating via telephone calls, which recorded 773 cases with RM26.8 million in losses; cheating in online purchases, with 811 cases totaling RM4.2 million; and the ‘African Scam’, with 371 cases totaling RM14.9 million.

E-financial fraud recorded 212 cases involving losses of RM21.5 million, he said when opening a Labuan-level briefing on awareness to combat cyber crimes and human trafficking, here.

He said the losses were reported in online scams, credit card frauds, identity thefts and data breaches.

“KKMM is determined to combat cyber crimes in view of the concerns raised on the rise in cyber crimes committed through various means.

“Cyber crimes are a serious threat to the people as these frauds can cause them to lose hundreds of thousands of ringgit of their hard-earned money,” he said.

The briefing is part of KKMM’s commitment to creating public awareness of cyber crimes through education, promotion and publicity campaigns.

Shakib said that according to the Commercial Crime Investigation Department, 13,058 cheating cases were reported in 2017 compared to 10,394 last year.

“I was told that telecommunication fraud is the most common form of (cyber) crime in Labuan with 16 complaints in 2017 and 19 complaints last year, a 35 per cent increase,” he said.

Shakib said the ministry would continue to cooperate with its strategic partners like the media, police, the Malaysian National News Agency (Bernama) and Information Department to combat the menace. – Bernama

Source | https://www.nst.com.my/news/crime-courts/2019/04/482208/rm676-million-lost-cyber-crimes-q1-2019

Israel Neutralizes Cyber Attack by Blowing Up A Building With Hackers

The Israel Defense Force (IDF) claims to have neutralized an “attempted” cyber attack by launching airstrikes on a building in the Gaza Strip from which it says the attack originated.

As shown in a video tweeted by IDF, the building in the Gaza Strip, which Israeli fighter drones have now destroyed, was reportedly the headquarters for Palestinian Hamas military intelligence, from where a cyber unit of hackers was allegedly trying to penetrate Israel’s cyberspace.

“We thwarted an attempted Hamas cyber offensive against Israeli targets. Following our successful cyber defensive operation, we targeted a building where the Hamas cyber operatives work. HamasCyberHQ.exe has been removed,” said the Israeli Defence Forces on Twitter.

However, the Israel Defense Force has not shared any information about the attempted cyber attack by the Hamas group, saying that doing so would reveal the country’s cyber capabilities.

According to Judah Ari Gross of Times of Israel, the commander of the IDF’s Cyber Division said, “We were a step ahead of them the whole time,” and “this was one of the first times where Israeli soldiers had to fend off a cyber attack while also fighting a physical battle.”

However, it’s not the first time a country has retaliated against a cyberattack with a physical attack. In 2015-16, the U.S. military reportedly killed two ISIS hackers—Siful Haque Sujan and Junaid Hussain of the Team Poison hacking group—using drone strikes in Syria.

The commander did not reveal the name of the target, but did say that the cyber attack by Hamas was aimed at “harming the way of life of Israeli citizens.”

The tension between Israel and Hamas has increased over the last year, with the latest conflict beginning on Friday after Hamas militants launched at least 600 rockets and mortars at Israel and shot two Israeli soldiers.

In retaliation for the violence by Hamas, the Israeli military has carried out its own strikes on what it claimed were hundreds of Hamas and Islamic Jihad targets in the coastal enclave.

So far, at least 27 Palestinians and 4 Israeli civilians have been killed, and more than 100 people have been injured.

The IDF said its airstrike targeted and killed Hamed Ahmed Abed Khudri, whom the Israeli military reportedly accused of funding the Hamas rocket attacks by transferring money from Iran to armed factions in Gaza.

“Transferring Iranian money to Hamas and the PIJ [Palestinian Islamic Jihad] doesn’t make you a businessman. It makes you a terrorist,” IDF wrote in a tweet that included an image of a Toyota car in flames.

In a new development, Israel has stopped its air strikes on the Palestinian territory and lifted all protective restrictions imposed near the Gaza area, after Palestinian officials offered Israel a conditional ceasefire agreement to end the violence.

Source | https://thehackernews.com/2019/05/israel-hamas-hacker-airstrikes.html?m=1

Happy Ramadan Kareem 2019

Peace be upon you

“May Ramadan bring you peace and prosperity, good health and wealth, and brighten your life forever”

Best Regards,

SNC INNOVATION FAMILY

MORE DEDICATED CYBER-SECURITY STAFF NEEDED IN HEALTHCARE INDUSTRY

  • Industry that deals with copious amounts of personal, exploitable data
  • Organisation-wide education and awareness are crucial

AS THE adoption of digital technology in the healthcare industry accelerates, there is an increasing need to protect another side of patients’ and healthcare organisations’ well-being – the security of their personal data.

This emphasis on protecting data and mitigating cyber-threats is reflected in the industry’s significant investment into cyber-security.

According to a recent survey by Palo Alto Networks, about 70% of healthcare organisations in Asia-Pacific say that 5% to 15% of their organisation’s IT budget is allocated to cyber-security.

However, despite substantial budgets, the healthcare industry appears to need to catch up with its peers in terms of cyber-security talent: only 78% of healthcare organisations have a team dedicated to IT security, the lowest proportion among the industries surveyed and well below the industry-wide average of 86%.

“As an industry that deals with copious amounts of personal, exploitable data, it can be disastrous if this data enters the wrong hands.

“Healthcare organisations need to ensure they are always updated on new security measures, and change their mindset from a reactive approach to a prevention-based approach instead, akin to how they remind patients that prevention is better than cure,” says Sean Duca, vice president and regional chief security officer for Asia-Pacific, Palo Alto Networks.

Risk factors

Aside from the monetary loss associated with data breaches and threats to the availability of connected devices that monitor patients’ lives, healthcare professionals are most worried about the loss of clients’ contact, financial or medical information – 30% cited the loss of these details as a key concern.

Fear of damaging the company’s reputation among clients comes next at 22%, followed by 17% citing company downtime while a breach is being fixed as a concern.

Cyber-security risks in healthcare organisations are also amplified with BYOD (Bring Your Own Device), with 78% of organisations allowing employees to access work-related information with their own personal devices such as their mobile phones and computers.

In addition to this, 69% of those surveyed say they are allowed to store and transfer their organisation’s confidential information through their personal devices.

While 83% claimed there are security policies in place, only 39% admit to reviewing these policies more than once a year – lower than the 51% of respondents from the finance industry, a sector also known to hold sensitive client data.

Call to get in shape for the future

As more healthcare organisations fall prey to cyber-attacks such as ransomware, a lapse in data security is a real threat to the industry. Organisation-wide education and awareness are therefore crucial to ensuring that the right preventive measures are implemented and enforced.

Fifty-four percent of respondents cited an inability to keep up with evolving solutions as a barrier to ensuring cyber-security in their organisations, while 63% pointed to ageing internet infrastructure as the likely main reason for cyber-threats, should they happen.

Here are some tips for healthcare organisations:

Ensure that medical devices are equipped with up-to-date firmware and security patches to address cyber-security risks. Medical devices are notoriously vulnerable to cyber-attacks because security is often an afterthought when the devices are designed and maintained by the manufacturer. Precautionary measures may include keeping an inventory of all medical devices, assessing the network architecture and determining a patch management plan for medical devices, as well as developing a plan to migrate medical devices to a dedicated network segment.

Apply a zero-trust networking architecture for hospital networks, making security ubiquitous throughout, not just at the perimeter. Healthcare organisations should look to segment devices and data based on their risk, inspecting network data as it flows between segments, and requiring authentication to the network and to any application for any user on the network.
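To make the segmentation idea concrete, here is a minimal sketch – device kinds, segment names and the rule format are all invented for the example – that buckets devices into risk-based segments and emits default-deny rules, with authentication required on the one whitelisted flow:

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        kind: str                        # e.g. "infusion_pump", "workstation"

    # Invented risk buckets for the example.
    SEGMENT_FOR_KIND = {
        "infusion_pump": "medical-devices",
        "workstation":   "staff",
        "byod_phone":    "guest-byod",
    }

    # Only explicitly whitelisted cross-segment flows are permitted.
    ALLOWED_FLOWS = {("staff", "medical-devices")}

    def rules_for(devices):
        """Emit default-deny rules between segments; allowed flows require auth."""
        segments = {SEGMENT_FOR_KIND[d.kind] for d in devices}
        rules = []
        for src in segments:
            for dst in segments:
                if src != dst:
                    action = "allow-with-auth" if (src, dst) in ALLOWED_FLOWS else "deny"
                    rules.append(f"{src} -> {dst}: {action}")
        return rules

    for rule in rules_for([Device("pump-7", "infusion_pump"),
                           Device("nurse-pc-2", "workstation"),
                           Device("visitor-phone", "byod_phone")]):
        print(rule)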

Practices such as BYOD and some employees’ ability to store and transfer confidential information through their personal devices put organisations at a higher risk of phishing attacks. To prevent this, healthcare providers should ensure that staff undergo regular end-user security training to reduce successful phishing attempts. Cyber-security best practices can be taught in a new-hire class for every employee.

As healthcare organisations migrate portions of their critical infrastructure and applications to the cloud, it becomes imperative for an advanced and integrated security architecture to be deployed to prevent cyber-attacks on three fronts: the network, the endpoint and the cloud. Traditional antivirus will not be effective in guarding against advanced malware such as ransomware, which continuously changes to avoid detection.

Source | https://www.digitalnewsasia.com/digital-economy/more-dedicated-cyber-security-staff-needed-healthcare-industry

Top 10 operational risks for 2019

The biggest op risks for 2019, as chosen by industry practitioners

We present our annual ranking of the biggest op risks for the year ahead, based on a survey of operational risk practitioners across the globe and in-depth interviews with a selection of industry personnel. The risks are listed in order of magnitude of threat, with this year’s largest risk being data compromise.

#1: Data compromise  

The threat of data loss through cyber attack, combined with an awareness among managers that defences are vulnerable, has made data compromise a perennial concern for op risk practitioners of all stripes. But the advent of strict new data protection regulation has intensified those fears, helping propel the category to the top of our annual survey for the first time.

Collecting multiple datasets and storing them in one place presents a single, tempting target for hackers. Companies have responded by compartmentalising data and storing it across several locations in an effort to reduce the potential loss from a single breach.

“You have to assume hackers will get through, and what do you do then? It can be just making sure you are storing data in several places, splitting your data so [hackers] getting into one file won’t get what they need,” says one senior risk practitioner.

The EU’s General Data Protection Regulation (GDPR), introduced in May 2018, aims to tighten consumer safeguards around data disclosure. No prosecution has yet used the full scope of penalties – the regulation allows a fine of up to 4% of global revenue – but companies are wary of a sizeable additional loss associated with, for example, a major data breach due to negligence.

Other areas of GDPR may have attracted less attention, but still pose significant potential sources of operational risk. Companies must provide customers with access to their own data, including the ability to correct or erase it in some cases; and they must report a data breach within 72 hours.

New regulations are also offering up enticing targets for hackers, whose targets are broadening beyond financial services firms to encompass intermediaries and even the official sector. For example, the EU’s Mifid II markets regime requires trading platforms and investment firms to collect personal information on the counterparties to every trade – not just a potential privacy issue, but a new and worrying point of entry for would-be hackers. As the data is passed from firm to platform and from platform to regulator, it becomes exposed to attack.

Some banks are taking advantage of the new market in cyber crime to adopt a more proactive defence strategy. Cyber criminals use the unindexed “dark” web to offer stolen data for sale. By monitoring this black market, institutions may gain advance warning of attacks, or even discover stolen data whose theft had gone unnoticed.

An active defence should also include penetration testing, both online and physical. Often the critical weakness in a cyber security plan sits, as IT managers put it, between chair and keyboard.

In a landmark case in October 2018, US authorities fined fund manager Voya Financial $1 million after a security breach allowed hackers to steal the personal details of thousands of customers. The hackers gained access by making repeated phone requests for password changes, pretending to be Voya subcontractors. Resetting the passwords was explicitly banned by Voya’s policies, but its employees did it nonetheless.

#2: IT disruption

Cyber attacks conjure images of masked figures gaining access to the IT network of a company or government and making away with millions, yet the reality is often more prosaic. Malware designed merely for nuisance value can cripple firms’ operations, while the origin of an attack is often not a rogue criminal but a state entity: the WannaCry and NotPetya ransomware events of 2017 were widely attributed to state-sponsored sources.

“Hackers are more organised and some countries have malicious, not criminal intent,” says an operational risk consultant. “They might not get anything out of it apart from bringing systems down and causing disruption.”

The past year has not seen as many high-profile disruptive cyber attacks as the previous one, which may go some way to explaining why IT disruption slips to second place in Risk.net’s 2019 survey.

However, risk experts still see cyber attacks as an ever-present menace.

Distributed denial of service (DDoS) is one of the most common forms of attack. DDoS data from two security specialists provides a conflicting picture: Kaspersky Lab reports a decline in overall attacks by 13% from 2017 to 2018. Corero says that among its customers, the number of events in 2018 was up 16% year-on-year.

Banks remain vulnerable, even the largest. In April 2018, it was revealed that a co-ordinated DDoS attack had disrupted services at seven major UK lenders, including Barclays, HSBC, Lloyds and RBS. The National Crime Agency and international partners responded by shutting down a website linked to the attacks that offered DDoS services for a small fee.

As banks shift more of their retail and commercial activity online, a growing fear is that a widespread cyber event could cripple an institution’s activity. Dwindling branch networks are reducing the “hard” infrastructure that lenders could previously rely on to maintain essential services.

“Banks may be taking channels offline as firms move away from the high street and close their branches,” says the head of operational risk at a bank. “So one route they have which offers them a certain type of resilience may not be there in a few years’ time and they may be wholly dependent on the digital side.”

#3: IT failure

Though usually overshadowed by its attention-grabbing cousin – the threat of a cyber attack – the risk of an internal IT failure is never far off risk managers’ minds. When such failures happen, their financial, reputational and regulatory consequences can easily rival the damage from high-profile data theft.

It is probably no coincidence that the danger of a self-imposed IT debacle is the third-largest operational risk in 2019’s survey: it follows a year in which a botched system migration cost UK bank TSB more than £300 million ($396 million) in related charges and an unknowable sum in lost customers.

And it’s a risk that is only likely to grow in importance, op risk managers acknowledge: “The more we interconnect, the more we have online banking and direct [digital] interaction between our clients and ourselves – the more IT structures can be disrupted,” says a senior op risk executive at a major European bank, summing up a view expressed by several risk managers.

The Basel Committee on Banking Supervision is co-ordinating various national and international efforts to improve cyber risk management. Last year it set up the Operational Resilience Working Group – its first goal has been “to identify the range of existing practice in cyber resilience, and assess gaps and possible policy measures to enhance banks’ broader operational resilience going forward”, the committee said in a November 2018 document.

On a national level, operational resilience – including against IT failures – is an area of focus for the Bank of England. The central bank defines it as “the ability of firms and the financial system as a whole to absorb and adapt to shocks”. In July, it published a joint discussion paper on operational resilience with the UK’s Prudential Regulation Authority and Financial Conduct Authority.

Speaking at the OpRisk Europe conference in June, the PRA’s deputy chief executive Lyndon Nelson said: “It is likely that the [BoE] will set a minimum level of service provision it expects for the delivery of key economic functions in the event of a severe but plausible operational disruption.”

#4: Organisational change  

Organisational change – sometimes called ‘strategic execution risk’ – refers to the grab bag of things that can go sideways in the midst of any transition: switching to a new system from an old one, new strategic objectives, adjustments to new management edifices, errors or just bad decisions, etc.

The catalyst can come from any number of directions – mergers or acquisitions, divisional reorganisations, a strategic change in business mix. Unfortunately for financial firms, none of these are mutually exclusive – most are largely unavoidable.

Banks and buy-side firms are subject to the currents of consumer taste and the need to keep pace with rivals. Often, firms might be prompted into action by a shift in the nature of the threats they face: witness cyber risk’s long journey from the domain of IT to the risk team.

New regulation may also force change, requiring a company to divert resources, redeploy personnel or create new departments entirely – as in the case of the Fundamental Review of the Trading Book, for instance.

Problems arising during technology upgrades or changes are perhaps the most often mentioned risks in this threat category. But geopolitical rumblings can add to the difficulties of changing a hierarchy or embarking on a new business strategy, says one risk professional. One senior op risk consultant says the atmosphere this produces can lead to dangerous operational mis-steps.

Brexit will soon probably provide many such examples. With a disorderly exit by the UK from the European Union this month almost a certainty, banks and brokers are setting up new entities on mainland Europe at a breakneck speed that almost guarantees problems – some as simple as staffing up and resource management.

“With political and economic risk increased, especially by Brexit, the time available to handle change is squeezed,” says the consultant. “That leads to potential errors in execution.”

#5: Theft and fraud

Despite slipping a place on this year’s list, theft and fraud is still many operational risk managers’ worst nightmare. The idea of a massive heist by enterprising hackers, mercenary employees or plain old bank robbers, possibly followed by fines and penalties, keeps the category near the top of the op risk survey year after year.

Inside jobs made up the top three of 2018’s biggest publicly reported op risk losses: Beijing-based Anbang Insurance lost a shattering $12 billion to embezzlement; in Ukraine, $5.5 billion vanished from PrivatBank in a ‘loan-recycling’ scheme; and in New Delhi, the Punjab National Bank lost $2.2 billion to wayward employees working with a fugitive diamond dealer.

These top losses were the result of old-fashioned crimes in the emerging world. At US and European banks though, it’s the cyber component of theft and fraud that looms large – despite the absence of even a single incident on the top 10 list.

“You can commit theft and fraud anonymously. You can go multicurrency, bitcoin,” comments a senior operational risk executive who says theft and fraud make up the biggest loss at the North American bank where he works. “You can be on the other side of the world, funds in hand, before anyone realises the money is missing.”

According to ORX News, the total of publicly reported losses attributable to cyber-related data breaches and instances of fraud and business disruption was $935 million worldwide in financial services last year. Over half those incidents involved fraud.

Cyber fraud comes generally in one of two sorts: one sows chaos, then grabs data en masse in the ensuing turmoil; the other zeros in on individuals to drain their accounts.

A large-scale attack could consist of millions of small transactions, like a $1 charge on a credit card, each likely unnoticed by the cardholder. In a targeted attack, thieves try to pry loose enough data from a customer’s social media persona to get access to their bank account. Other, more sophisticated schemes look for the weak points in authentication systems like biometrics. Some apps, for instance, can replicate a person’s voice patterns and fool voice ID systems.

“Equifax taught us that you need to move away from knowledge-based authentication to more activity-based identification,” says an op risk head at a second North American bank – for instance, asking people what their last two transactions were. In 2017, hackers stole data such as names, birthdates and Social Security numbers on nearly 148 million people from Equifax’s online systems.

#6: Outsourcing and third-party risk

Outsourcing key infrastructure or services to third parties is a tantalising prospect for many firms. The incentive is to harness the expertise of specialist providers, or to save costs. Or, ideally, a combination of the two.

The trade-off for many risk managers is a lingering concern about losing oversight of vital business functions. The prevalence of breaches via third parties and growing regulatory scrutiny of this area, not to mention the build-up of risk in certain systemically important platforms, are the focus of anxiety.

“If cloud platforms are correctly configured, they can enhance security, as well as creating efficiencies and reducing costs for customers,” says a UK cyber insurance executive. “However, if there was an incident that took down a cloud provider such as AWS or Azure, or a component part of the cloud infrastructure, this could cause an outage for thousands of individual companies.”

Regulators are zeroing in on outsourcing risk, too. The European Banking Authority (EBA) finalised outsourcing guidelines in February 2019, with a view to providing a single framework for financial firms’ contracts with third and fourth parties.

Financial institutions are also concerned about their reliance on crucial financial market infrastructure such as trading venues and clearing houses. Unlike IT or payroll systems, these are services that are difficult if not impossible to replicate in-house – as banks have tried to do with some troublesome vendor relationships.

Successful trading venues and clearing houses typically achieve a critical mass of liquidity that makes it very difficult for viable competitors to thrive. Without a credible threat to leave CCPs, banks lack the leverage to persuade the service providers to supply information on data or cyber security practices that might allow risk managers to properly assess threats.

#7: Regulatory risk

This year, the usual complement of regulation plus roiling new issues placed regulatory risk in seventh position on the list.

Chief among shifting regulatory expectations, anti-money laundering (AML) compliance has taken centre stage since the Danske Bank Estonian episode came to light in 2017. As much as €200 billion ($226.1 billion) in ‘non-resident’ money coursed through Danske’s modest Tallinn branch from 2007 to 2015.

Danske’s chief and chairman were ousted. The Danish financial regulator has imposed higher capital requirements, and the US Department of Justice has begun a criminal investigation. The EBA is looking into whether regulators in Denmark and Estonia were remiss. Estonia has ordered Danske to shut the branch.

“On AML, there are huge regulatory expectations there,” says one operational risk executive at an international bank. “We have a huge programme in the group to try and comply with their requirements.”

Elsewhere, changes to data protection legislation present their own matrix of requirements for banks spanning continents, beginning with the EU’s GDPR.

“There are so many privacy regulations that raise issues from a regulatory risk standpoint. It’s a patchwork of regulations at the state and federal levels,” says an operational risk executive at one North American bank.

Banks are also warily eyeing further regulatory intervention from the Basel Committee on operational resilience – a broad initiative that sets out regulators’ expectations on a number of business continuity topics, including a minimum response time to return to normal operations after a platform outage.

#8: Data management

A conversation with any op risk manager will land, sooner or later, on the issue of data management. It could be concerns about data quality, particularly of historical data stored on legacy systems, which carries with it problems such as format and reliability. Or it could be the risk of missteps when handling customer data – inappropriate checks on storage, use or permissioning – that now come with the added threat of eye-watering fines from regulators.

Taken together, it’s no surprise that data management has made it into the top 10 op risks as a discrete risk category for the first time this year. It is considered separately from the threat of data compromise, where data breaches share the common driver of a malicious external threat.

Much of the impetus behind firms’ drive to beef up standards around the storage and transfer of personal data stems from the tightening of regulatory supervision on data privacy and security around the world – most obviously GDPR. Firms operating within the EU or holding data on EU citizens – which puts just about every firm around the world in scope, to some degree – may be heavily fined for falling foul of the regime, for instance, by failing to explicitly gain consent from individuals to retain and use their data.

As data management and compliance headaches multiply, the financial sector is pushing to use machine learning to augment the modelling of everything from loan approvals to suspicious transactions. In a sense, the methods offer a fix that downplays human error. However, dealers have acknowledged that machine learning models’ predictive power leaves them open to potentially unethical biases, such as inadvertently discriminating against certain customer groups because the bank’s historical data shows a higher risk of non-payment among similar customers.

Poor data management has consequences for everyday compliance exercises, such as filling in mandatory quarterly risk control self-assessment forms to the satisfaction of regulators. Banks “are missing robust data management processes to ensure that data is reliable, complete and up to date, and that reports can be generated [in a timely manner]”, the head of op risk at one Asian bank tells Risk.net.

#9: Brexit

Brexit covers such a wide range of possible risk events that some participants in this year’s survey disputed whether it should be included as a standalone chapter at all; but a significant number argued strongly that it should, with its collective drivers likely engendering a common set of specific risks for banks and financial firms for years to come.

At the time of writing, the UK is a fortnight away from leaving the EU, although speculation about a delay ranging from two months to two years is growing. Nor is there any clarity on the state of the UK-EU relationship after the March 29 deadline. Anything from a long delay or a cancellation to an abrupt “no-deal” crash exit remains possible; this may have changed by lunchtime on the day this article is published.

Many financial firms whose business is affected by Brexit have given up waiting for lawmakers to finalise negotiations over the terms of the split and are pushing ahead with contingency plans. Banks and brokers are setting up new entities in mainland Europe, a process that is fraught with operational risk, particularly given the accelerated timescale for its completion.

Third-party risk from new supplier relationships; legal risk from repapering numerous financial contracts; people risk from hiring and training new personnel; these and other effects of the relocation will put additional strain on the operational resilience of companies.

Particularly in the case of a Brexit with no deal, industry practitioners fear a general increase in stress on almost every aspect of operations. One survey respondent points out: “If you have a hard Brexit, how resilient are your operation processes in terms of new requirements? If you think about it, overnight you go into new tariff regimes. So you have a portfolio with every operational risk you’ve ever seen.”

#10: Mis-selling

Mis-selling drops a few places on this year’s top 10 op risks, a reflection – or perhaps a shared hope among risk managers – that the era of mega-fines for crisis-era misdeeds among US and European banks might finally be over. They would do well to check their optimism, however: as the recent public inquiry into Australia’s financial sector that has excoriated the reputation of the nation’s banks shows, another mis-selling scandal is never far away.

Firms have shelled out a scarcely credible $607 billion in fines for conduct-related misdemeanours since 2010, the bulk of it related to fines and redress over mis-selling claims. The heaviest losses came in 2011 and 2012, when the bulk of the fines, from residential mortgage mis-selling to payment protection insurance (PPI) mis-selling, were concentrated.

The cumulative impact of fines and settlements has taken a huge toll on bank capital: as a recent Risk Quantum analysis shows, op risk now accounts for a third of risk-weighted assets (RWAs) among the largest US banks, while UK lenders still face hefty Pillar 2 capital top-ups from the Bank of England, largely as a result of legacy conduct issues.

Under the advanced measurement approach to measuring op risk capital which most US banks use, sizeable op risk losses can heavily skew a model’s outputs. But from a capital point of view, there are hopeful signs that with the severity and frequency of losses decreasing, RWAs are starting to see a gradual rolldown for most banks – though the US Federal Reserve has privately made clear it will not sign off any more changes to bank op risk models, leaving their methodologies frozen in time.

While Australia’s banks emerged relatively unscathed from the 2008 global financial crisis, they too are now feeling the sting of public ire following a series of mis-selling and conduct-related scandals, the first of which claimed the scalp of Commonwealth Bank of Australia chief executive Ian Narev last year, dealing a severe blow to the bank’s reputation.

The Royal Commission inquiry it helped spark had ramifications far beyond the bank. The fallout is still being felt, with National Australia Bank announcing on February 7 that its chief executive Andrew Thorburn and chairman Ken Henry would both step down.

Source | https://www.risk.net/risk-management/6470126/top-10-op-risks-2019

Three films about corporate cybersecurity and cyberwar

While stories about breaches and cyberattacks have only become commonplace in the news relatively recently, Hollywood has had an interest in cybersecurity for some time now. To coincide with the Oscars, we’re taking a look at several popular films that dealt with cyberattacks on companies or government institutions, industrial espionage, and cyberwar, in order to take away some lessons for businesses.

Endpoint security and the problem with critical infrastructure

In Skyfall (2012), one of the more recent James Bond films, the British intelligence service, MI6, is under attack and trying to stop vital information from being leaked to the public. Bond, in turn, is fighting to survive and struggling to stay relevant in a world where the field agent is becoming less important thanks to technological advances, and where popular services such as social networks can put an agent’s privacy at risk. Silva, a cybercriminal and the film’s villain, manages to interfere with satellite signals, attack the London Underground, tamper with elections in several African countries, and destabilize the stock market… all from a computer.

Although the film contains such important concepts as the protection of critical infrastructures, and is the first Bond film to use a cyberattack as a lethal weapon, there is one serious error that needs to be highlighted. The employees of MI6 get their hands on a computer belonging to Silva, the criminal hacker, and connect it to the intelligence service’s network to extract information from it.

Accessing the network via an infected endpoint endangers the organization’s entire infrastructure, and is an important example of how simple mistakes in a business environment can put our privacy at risk. Despite this slip-up, Q, the technology expert at MI6, says, quite rightly: “I’ll hazard I can do more damage on my laptop, sitting in my pyjamas before my first cup of Earl Grey than you can do in a year in the field.”

The documentary Zero Days (2016) investigates the by now well-known sophisticated computer worm Stuxnet, which is suspected to have been developed by the United States and Israel in order to sabotage the Iranian nuclear program in 2010. Stuxnet also managed to make its way onto a private network via an infected endpoint – in this case a pen drive – which injected malicious code onto the programmable logic controllers (PLC) used to automate the nuclear power station’s processes.

The worm took over more than 1,000 machines in the industrial environment, and forced them to self-destruct. This attack became the first known digital weapon in international cyberwar, the first virus capable of paralyzing functioning hardware.

The malware leveraged multiple zero day vulnerabilities in order to infect Windows computers, specifically targeting nuclear centrifuges used to produce the uranium needed for weapons and nuclear reactors. Despite being created specifically to affect nuclear facilities in 2010, it seems that Stuxnet has mutated and spread to different organizations outside the industrial sector. 

Human error in cyberwar

In the film Blackhat (2015), after attacks on nuclear power stations in Hong Kong and on the Chicago Stock Exchange, the US and Chinese governments are forced to cooperate in order to protect themselves. In light of these new threats, the FBI turns to a convicted cybercriminal, Hathaway, to help discover who is behind the IT attacks: a black hat hacker seeking to get rich by bringing down the stock market.

In this case, several of the attacks are carried out by the black hat using a RAT (Remote Access Trojan), a piece of malware that can take over a system via a remote connection. Those collaborating with the FBI also fall back on two important weapons to attack corporate networks: a social engineering email and an attached PDF containing a keylogger.

This tool is used to access a piece of software exclusive to the National Security Agency (NSA), which is not willing to collaborate with the FBI. As with the other two films discussed here, they also use an infected pen drive as an attack vector, in this case to gain access to a bank’s network and drain the accounts of the cybercriminal who is wreaking so much havoc.

Cybersecurity lessons

These three examples from the film industry can provide us with some valuable tips for a business environment:

  • Pen drives must never be inserted into our systems if we don’t know where they come from, or without first running a malware analysis. To carry out a scan like this, advanced platforms such as Panda Adaptive Defense provide a detailed view of all endpoints. It’s also vital to scan files that come in as attachments.

  • Attachments from unknown senders or people who aren’t in our address books must never be opened.
  • We need to make sure that our employees know how to deal with social engineering attacks and such common mistakes as connecting unknown devices to the corporate network.

Source |https://www.pandasecurity.com/mediacenter/news/7-best-cybersecurity-films/

AirAsia’s Face Recognition System for Flight Boarding Will Be Rolled Out This 2019!

According to Free Malaysia Today, AirAsia Deputy group CEO Aireen Omar recently stated that the low-cost carrier will be implementing a face recognition system for flight boarding in selected airports across Malaysia this year (2019).

AirAsia Deputy group CEO Aireen Omar

A pilot test for this system, which began in February 2018, is currently being carried out at Senai airport in Johor Bahru. The system, known as the Fast Airport Clearance Experience System (FACES), is Malaysia’s first airport facial recognition system with self-boarding gates.

FACES is able to identify guests as they approach the automated boarding gates, allowing them to board their flights without presenting any travel documents.

Speaking about the system when the pilot test was first launched last year, AirAsia Group CEO Tan Sri Tony Fernandes said,

“Airports are typically the worst part of flying. FACES marks our latest effort to make the on-ground experience more seamless and less stressful by using cutting edge biometric technology to authenticate guests.”

“With FACES, your face is your passport, making it a breeze to clear the gate and board your flight.”

Group CEO of AirAsia and Co-Group CEO of AirAsia X Tan Sri Tony Fernandes

“I want to thank Senai International Airport for once again supporting our efforts to improve the travel experience for our guests through digital innovation, as they did when they became the first airport in Malaysia to implement self-service baggage check-in. We hope the success of FACES here will serve as an inspiration and we are keen to work with other airports in Malaysia to revolutionise the way people travel with this technology and make flying enjoyable again.”

Meanwhile, the airline’s deputy group CEO added that after a year of pilot testing, the technology has been refined and improved and that more airports would be selected for the next phase of this project. Well, we’re excited to see this new system being implemented at more airports this year!

SOURCE | https://www.worldofbuzz.com/airasia-face-recognition-system-for-flight-boarding-will-be-rolled-out-this-2019/

What is SCADA? How does SCADA work?

SCADA stands for Supervisory Control and Data Acquisition, though the term is often used more broadly for data collection and presentation.

What is SCADA?

SCADA is normally a software package designed to display information, log data and show alarms.

This can be graphical and tabular and can involve words and pictures (or mimics).

The software would normally be installed on a computer, and all the various signals would be wired back to the central point (CPU), or marshalled and gathered using some form of bus system or direct wiring.

SCADA can be used to monitor and control plant or equipment. The control may be automatic, or initiated by operator commands. The data acquisition is accomplished first by the RTUs (Remote Terminal Units).

The central host will scan the RTUs, or the RTUs will report in. Data can be of three main types.

Analogue data (i.e. real numbers) will be trended (i.e. placed in graphs). Digital data (on/off) may have alarms attached to one state or the other. Pulse data (e.g. counting revolutions of a meter) is normally accumulated or counted.

Supervisory control and data acquisition – SCADA refers to ICS (industrial control systems) used to control infrastructure processes (utilities, water treatment, wastewater treatment, gas pipelines, wind farms, etc), facility-based processes (airports, space stations, ships, etc) or industrial processes (production, manufacturing, refining, power generation, etc).

The following subsystems are usually present in SCADA systems:

  • The apparatus used by a human operator, through which all the processed data are presented to the operator
  • A supervisory system that gathers all the required data about the process
  • Remote Terminal Units (RTUs) connected to the sensors of the process, which convert the sensor signals to digital data and send the data to the supervisory system
  • Programmable Logic Controllers (PLCs) used as field devices
  • Communication infrastructure connecting the Remote Terminal Units to the supervisory system

Generally, a SCADA system does not control processes in real time – the term usually refers to a system that coordinates processes in real time, while the real-time control itself is handled by field devices such as RTUs and PLCs.

SCADA Systems Concepts

SCADA refers to centralized systems that control and monitor entire sites, or to complex systems spread out over large areas.

Nearly all the control actions are automatically performed by the remote terminal units (RTUs) or by the programmable logic controllers (PLCs).

Host control functions are usually restricted to supervisory-level intervention or basic overriding.

For example, while the PLC (in an industrial process) controls the flow of cooling water, the SCADA system allows any changes related to the alarm conditions and set points for the flow (such as high temperature or loss of flow) to be recorded and displayed.

Data acquisition starts at the PLC or RTU level, which includes equipment status reports and meter readings. Data is then formatted in such a way that the control room operator can make supervisory decisions to override or adjust normal PLC (RTU) controls, using the HMI.

SCADA systems mostly implement distributed databases known as tag databases, containing data elements called points or tags. A point is a single input or output value controlled or monitored by the system. Points are either ‘soft’ or ‘hard’.

The actual input or output of a system is represented by a hard point, whereas a soft point is the result of math and logic operations applied to other points.

These points are usually stored as timestamp-value pairs. A series of timestamp-value pairs gives the history of a particular point.

Storing additional metadata with the tags is common (this can include design-time comments, alarm information, or the path to the field device or PLC register).
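As a loose illustration of these ideas – all tag names and the structure here are invented for the example – the Python sketch below models a tiny tag database with hard points updated from field readings, a soft point derived from them, and per-point timestamp-value history:

    import time

    class Point:
        """A single tag: hard if updated from the field, soft if computed."""
        def __init__(self, name, compute=None):
            self.name = name
            self.compute = compute        # None => hard point; callable => soft point
            self.history = []             # list of (timestamp, value) pairs

        @property
        def value(self):
            return self.history[-1][1] if self.history else None

        def update(self, value, ts=None):
            self.history.append((ts if ts is not None else time.time(), value))

    class TagDatabase:
        def __init__(self):
            self.points = {}

        def add(self, point):
            self.points[point.name] = point
            return point

        def scan(self):
            """Recompute every soft point from the current hard-point values."""
            for p in self.points.values():
                if p.compute is not None:
                    p.update(p.compute(self))

    # Usage: two hard points fed by an RTU, one derived soft point.
    db = TagDatabase()
    db.add(Point("PUMP1.FLOW_IN")).update(10.4)
    db.add(Point("PUMP1.FLOW_OUT")).update(9.9)
    db.add(Point("PUMP1.FLOW_LOSS",
                 compute=lambda d: d.points["PUMP1.FLOW_IN"].value
                                 - d.points["PUMP1.FLOW_OUT"].value))
    db.scan()
    print(db.points["PUMP1.FLOW_LOSS"].history)   # [(timestamp, ~0.5)]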

Human Machine Interface

The HMI, or Human Machine Interface, is an apparatus that gives the processed data to the human operator. A human operator uses HMI to control processes.

The HMI is linked to the SCADA system’s databases, to provide the diagnostic data, management information and trending information such as logistic information, detailed schematics for a certain machine or sensor, maintenance procedures and troubleshooting guides.

The information provided by the HMI to the operating personnel is graphical, in the form of mimic diagrams.

This means the schematic representation of the plant that is being controlled is available to the operator.

For example, a picture of a pump connected to a pipe can show that the pump is running and how much fluid it is pumping through the pipe at that moment.

The operator can then switch the pump off. The HMI software will show the flow rate of fluid in the pipe decreasing in real time.

Mimic diagrams either consist of digital photographs of process equipment with animated symbols, or schematic symbols and line graphics that represent various process elements.

The HMI package of a SCADA system consists of a drawing program used by system maintenance personnel or operators to change the representation of these points in the interface.

These representations can be as simple as an on-screen traffic light that represents the state of an actual traffic light in the area, or as complex as a multi-projector display representing the position of all the trains on a railway or the elevators in a skyscraper.

SCADA systems are commonly used in alarm systems. An alarm is a digital status point that has only two values, ALARM or NORMAL. When the alarm’s requirements are met, it is activated.

For example, when the fuel tank of a car is empty, the alarm is activated and the warning light comes on. To alert SCADA operators and managers, text messages and emails can be sent along with the alarm activation.
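Continuing the hypothetical Point/TagDatabase sketch from the tag-database discussion above, alarm handling reduces to a digital point plus a notification hook:

    # A digital alarm point holds only ALARM or NORMAL; when the condition on
    # its source point is met, it flips to ALARM and fires a notification.
    # `notify` stands in for the SMS/email delivery mentioned above.
    def evaluate_alarm(db, source_tag, limit, alarm_tag, notify=print):
        state = "ALARM" if db.points[source_tag].value > limit else "NORMAL"
        previous = db.points[alarm_tag].value
        db.points[alarm_tag].update(state)
        if state == "ALARM" and previous != "ALARM":
            notify(f"{alarm_tag}: {source_tag} exceeded {limit}")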

SCADA Hardware

A SCADA system may include components of a Distributed Control System. Execution of simple logic processes without involving the master computer is possible because ‘smart’ PLCs or RTUs are used. IEC 61131-3 (Ladder Logic), a functional block programming language commonly used for creating programs that run on PLCs and RTUs, is used to program them.

IEC 61131-3 has very few training requirements, unlike procedural languages such as FORTRAN and C.

SCADA system engineers can therefore perform both the design and implementation of programs to be executed on a PLC or RTU. The programmable automation controller (PAC) is a compact controller that combines the capabilities and features of a PC-based control system with those of a typical PLC.

’Distributed RTUs’, in various electrical substation SCADA applications, use station computers or information processors for communicating with PACs, protective relays, and other I/O devices.

Since about 1998, almost all major PLC manufacturers have offered integrated HMI/SCADA systems.

Many of them use non-proprietary and open communication protocols. Many capable third-party HMI/SCADA packages have also entered the market, offering built-in compatibility with most major PLCs, which allows electrical engineers, mechanical engineers or technicians to configure HMIs on their own, without requiring a custom program written by a software developer.

Remote Terminal Unit (RTU)

The RTU is connected to the physical equipment. Often, the RTU converts the electrical signals coming from the equipment into digital values, such as status (open/closed) from a valve or switch, or measurements such as flow, pressure, current or voltage.

By converting electrical signals and sending them to the equipment, the RTU may also control it – for example, opening or closing a valve or switch, or setting the speed of a pump.
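As a small illustration of the analogue side of that conversion, here is a sketch for the common 4-20 mA current-loop input case (the 0-10 engineering range is an assumption for the example; real spans come from the transmitter’s datasheet):

    # Convert a 4-20 mA loop current to engineering units: 4 mA maps to the
    # bottom of the range and 20 mA to the top. Currents below 4 mA usually
    # indicate a broken loop or a faulty transmitter.
    def ma_to_engineering(current_ma, lo=0.0, hi=10.0):
        if not 4.0 <= current_ma <= 20.0:
            raise ValueError("signal out of range: possible broken loop or fault")
        return lo + (current_ma - 4.0) * (hi - lo) / 16.0

    print(ma_to_engineering(12.0))   # mid-scale reading -> 5.0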

Supervisory Station

A ‘supervisory station’ refers to the software and servers responsible for communicating with the field equipment (PLCs, RTUs, etc) and, in turn, with the HMI software running on workstations in the control room or elsewhere.

In small SCADA systems, the master station can be composed of just one PC. In larger SCADA systems, the master station can include multiple servers, disaster recovery sites and distributed software applications.

SCADA Operational Philosophy

The costs resulting from control system failures are very high; even lives may be lost. For some SCADA systems, hardware is ruggedized to withstand temperature, voltage and vibration extremes, and in many critical installations reliability is increased by including redundant hardware and communications channels.

A failing part can be identified automatically and its functionality taken over by backup hardware, allowing it to be replaced without any interruption of the process.

Communication Methods and Infrastructure

SCADA systems initially used modem connections or combinations of direct serial and radio links to meet communication requirements, though IP and Ethernet over SONET/SDH can also be used at larger sites such as power stations and railways. The remote management or monitoring function of a SCADA system is called telemetry.

SCADA protocols have been designed to be extremely compact and to send information to the master station only when the RTU is polled by the master station. Typical legacy SCADA protocols include Conitel, Profibus, Modbus RTU and RP-570.

These communication protocols are SCADA-vendor specific. Standard protocols are IEC 61850, DNP3 and IEC 60870-5-101 or 104, which are recognized and standardized by all major SCADA vendors. Several of these protocols have extensions for operating over TCP/IP.
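To give a feel for how compact these protocols are, the sketch below builds a raw Modbus TCP ‘read holding registers’ request using only the Python standard library, following the published Modbus framing (a 7-byte MBAP header plus a 5-byte PDU). It is for illustration against a simulator or test rig only, never live plant equipment, and it omits the error handling a real poller would need:

    import socket
    import struct

    def read_holding_registers(host, unit_id, start_addr, count, port=502):
        # PDU: function code 0x03 (read holding registers), start address, quantity.
        pdu = struct.pack(">BHH", 0x03, start_addr, count)
        # MBAP header: transaction id, protocol id (always 0), number of
        # remaining bytes (unit id + PDU), unit id.
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, port), timeout=3) as s:
            s.sendall(mbap + pdu)
            resp = s.recv(256)
        # Response: 7-byte MBAP, function code, byte count, then 16-bit registers.
        byte_count = resp[8]
        return struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count])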

The development of many automatic controller devices and RTUs started before the advent of industry standards for interoperability.

For better communication between different software and hardware, OLE for Process Control (OPC) is a widely accepted solution that allows communication between devices that were not originally intended to be part of an industrial network.

SCADA Architectures

Monolithic: The First Generation

In the first generation, mainframe systems were used for computing. At the time SCADA was developed, networks did not exist.

Therefore, SCADA systems had no connectivity to other systems; they were independent systems. Later on, RTU vendors designed Wide Area Networks that helped in communicating with RTUs.

The communication protocols used at the time were proprietary. If the mainframe system failed, there was a back-up mainframe connected at the bus level.

Distributed: The Second Generation

In the second generation, information was shared in real time between multiple stations through a LAN, and processing was distributed across those stations.

The cost and size of the stations were reduced in comparison with those used in the first generation. The protocols used for the networks were still proprietary, which caused many security issues for SCADA systems.

Due to the proprietary nature of the protocols, very few people actually knew how secure the SCADA installation was.

Networked: The Third Generation

The SCADA systems used today belong to this generation. Communication between the system and the master station is done through WAN protocols such as the Internet Protocol (IP).

Since standard protocols are used and networked SCADA systems can be accessed through the internet, the vulnerability of such systems is increased.

However, the usage of security techniques and standard protocols means that security improvements can be applied in SCADA systems.

SCADA Trends

In the late 1990s, instead of using RS-485, manufacturers moved to open message structures such as Modbus ASCII and Modbus RTU (both developed by Modicon). By 2000, almost all I/O makers offered fully open interfacing such as Modbus TCP over Ethernet and IP.

SCADA systems are now in line with standard networking technologies, and the old proprietary standards are being replaced by TCP/IP and Ethernet protocols. Thanks to the characteristics of frame-based network communication technology, Ethernet networks have been accepted by the majority of the HMI/SCADA market.

‘Next generation’ protocols use XML web services and other modern web technologies, making them easier for IT departments to support. A few examples include Wonderware’s SuiteLink, GE Fanuc’s Proficy, I Gear’s Data Transport Utility, Rockwell Automation’s FactoryTalk and OPC-UA.

Some vendors have begun offering application-specific SCADA systems hosted on remote platforms over the Internet, so there is no need to install systems at the end-user facility. The major concerns are Internet connection reliability, security and latency.

SCADA systems are becoming ubiquitous, but some security issues remain.

SCADA Security Issues

The security of SCADA-based systems is being questioned, as they are potential targets of cyberterrorism and cyberwarfare attacks.

There is an erroneous belief that SCADA networks are safe because they are physically secured, and an equally mistaken belief that they are safe because they are disconnected from the Internet.

SCADA systems are also used to monitor and control physical processes such as water distribution, traffic lights, electricity transmission, gas transportation and oil pipelines, and other systems relied on by modern society. Security is extremely important because disruption of these systems would have severe consequences.

There are two major threats. The first is unauthorized access to the control software, whether by a human operator or through intentionally induced changes, virus infections or other problems affecting the control host machine. The second is packet-level access to the network segments that host SCADA devices.

In many cases there is little or no security on the actual packet control protocol, so anyone able to send packets to a SCADA device is in a position to control it.
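The point is easy to demonstrate: a Modbus “write single coil” request carries no credentials of any kind, so whoever can reach TCP port 502 can flip an output. The sketch below builds such a packet against a hypothetical test address; it should never be pointed at equipment you do not own.

    # Sketch: an unauthenticated Modbus TCP "write single coil" (function 0x05)
    # request. No login, session or signature appears anywhere in the frame;
    # network reachability alone suffices. The target address is hypothetical.
    import socket
    import struct

    HOST, COIL = "192.0.2.10", 3                        # hypothetical device/coil
    pdu = struct.pack(">BHH", 0x05, COIL, 0xFF00)       # 0xFF00 = ON, 0x0000 = OFF
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, 1)  # transaction, proto, len, unit
    with socket.create_connection((HOST, 502), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        print(sock.recv(256).hex())  # a conforming device echoes the request back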

Often, SCADA users assume that a VPN is sufficient protection, overlooking the fact that physical access to SCADA-related network switches and jacks provides the ability to bypass the security of the control software and take control of SCADA networks.

SCADA vendors are addressing these risks by developing specialized industrial VPN and firewall solutions for SCADA networks based on TCP/IP. Whitelisting solutions have also been implemented because of their ability to prevent unauthorized application changes.

Applications of SCADA

1) Applications in Power Plants:

A group of hydro and gas generation plants is brought online when load demand exceeds the base generating capacity. These are considered peak-load plants because they can start almost immediately and deliver power to the grid.

These plants are located in remote areas and are controlled by opening and closing the turbine valves, so that they deliver power during peak conditions and can be kept on standby during normal load conditions.

2) Applications in Oil & Gas Plants:

Many process-control parameters, motors, pumps and valves are spread over a wide area in the field.

Control and monitoring applications include turning motors, pumps and valves on and off, continuously gathering process-parameter data (such as flow rate, pressure and temperature) and making control decisions; all of this can be done through SCADA systems, as the toy sketch below illustrates.
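In this poll-decide-act loop, the code reads a simulated pressure value and trips a pump when a limit is exceeded; read_pressure, stop_pump and the 8.5 bar limit are hypothetical stand-ins for whatever protocol driver and setpoints a real site would use.

    # Toy poll-decide-act loop. read_pressure() and stop_pump() are
    # hypothetical stand-ins for a real protocol driver (Modbus, DNP3,
    # OPC UA); the trip limit is likewise an assumed value.
    import random
    import time

    def read_pressure() -> float:
        """Hypothetical field read, simulated here with random values."""
        return random.uniform(7.0, 9.0)

    def stop_pump() -> None:
        """Hypothetical control action (e.g. a coil write)."""
        print("TRIP: pump stopped")

    TRIP_LIMIT_BAR = 8.5

    for _ in range(10):               # a real master station polls indefinitely
        pressure = read_pressure()
        print("pressure = %.2f bar" % pressure)
        if pressure > TRIP_LIMIT_BAR:
            stop_pump()
            break
        time.sleep(1)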

3) Applications in Pipelines:

Pipelines carrying oil, gas, chemicals and water, located at varying distances from the plant, need continuous monitoring and control.

Control includes opening and closing valves and starting and stopping pumps. Monitoring the flow rate and other parameters to detect leakage in the pipelines, acquiring the data and carrying out suitable control actions are all done through SCADA systems.

4) Applications in Power Transmission:

Electrical power transmission networks, spread over thousands of kilometers, can be controlled by opening and closing circuit breakers and performing other switching functions. This is done from a master control substation, which controls the other substations through SCADA systems.

5) Applications in Irrigation Systems:

Irrigation systems spread over a wide area can be controlled by opening and closing valves; gathering meter readings of the amount of water supplied and taking the corresponding control actions are done through SCADA systems.

SOURCE | https://instrumentationtools.com/overview-of-scada-system/

Weaponised AI is coming. Are algorithmic forever wars our future?

The US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

Last month marked the 17th anniversary of 9/11. With it came a new milestone: we’ve been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where we’re officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of 2018.

The wars of 9/11 continue, with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

On Friday, the defense department closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (Jedi). Jedi is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as $10bn over 10 years, which is why big tech companies are fighting hard to win it. (Not Google, however, where a pressure campaign by workers forced management to drop out of the running.)

At first glance, Jedi might look like just another IT modernization project. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With some 3.4 million users and 4 million devices, the defense department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.

But the real force driving Jedi is the desire to weaponize AI – what the defense department has begun calling “algorithmic warfare”. By pooling the military’s data into a modern cloud platform, and using the machine-learning services that such platforms provide to analyze that data, Jedi will help the Pentagon realize its AI ambitions.

The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7bn. And in September, the Defense Advanced Research Projects Agency (Darpa), the Pentagon’s storied R&D wing, announced it would be investing up to $2bn over the next five years into AI weapons research.

So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons – Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario, and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.

But AI has already begun rewiring warfare, even if it hasn’t (yet) taken the form of literal Terminators. There are less cinematic but equally scary ways to weaponize AI. You don’t need algorithms pulling the trigger for algorithms to play an extremely dangerous role.

To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself isn’t particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.

The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy. But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?

This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries – a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of marine legend Smedley Butler, the forever war is one of the longest cons yet.

But the vagueness of the enemy also creates certain challenges. It’s one thing to look at a map of North Vietnam and pick places to bomb. It’s quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI – or, more precisely, machine learning – comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.

The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.

“We have analysts looking at full-motion video, staring at screens 6, 7, 8, 9, 10, 11 hours at a time,” says the project director, Lt Gen Jack Shanahan. Maven’s software automates that work, then relays its discoveries to a human. So far, it’s been a big success: the software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software on to the drones themselves, so they can locate targets in real time.

Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven. Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: sharper knives can help the military save lives.

In the case of weaponized AI, however, the knives in question aren’t particularly sharp. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms – algorithms that can’t recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?

But the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. In 2017 alone, the US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these don’t suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage”. Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.

Further, the line between civilian and combatant is highly porous in the era of the forever war. A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action”, even if they weren’t one of the targets. The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures” – which can be as vague as being a military-aged male in a particular area.

The problem isn’t the quality of the tools, in other words, but the institution wielding them. And AI will only make that institution more brutal. The forever war demands that the US sees enemies everywhere. AI promises to find those enemies faster – even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a (classified) machine-learning model associates with hostile activity. Call it death by big data.

AI also has the potential to make the forever war more permanent, by giving some of the country’s largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military. But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires like Jeff Bezos a powerful incentive to ensure the forever war lasts forever. Enemies will be found. Money will be made.

Source| https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military