RM67.6 million lost to cyber crimes in Q1 2019

LABUAN: Cyber crimes involving losses of RM67.6 million in 2,207 cases were reported in the first three months of this year, according to a senior officer of the Communications and Multimedia Ministry (KKMM) today.

Its deputy secretary-general (policy), Shakib Ahmad Shakir, said the ministry and agencies under it were concerned over the large amounts of money lost through such scams.

The three most common types of cyber crime were cheating via telephone calls (773 cases, RM26.8 million in losses), cheating in online purchases (811 cases, RM4.2 million) and the ‘African Scam’ (371 cases, RM14.9 million).

E-financial fraud recorded 212 cases involving losses of RM21.5 million, he said when opening a Labuan-level briefing on awareness to combat cyber crimes and human trafficking, here.

He said the losses were reported in online scams, credit card frauds, identity thefts and data breaches.

“KKMM is determined to combat cyber crimes in view of the concerns raised on the rise in cyber crimes committed through various means.

“Cyber crimes are a serious threat to the people as these frauds can cause them to lose hundreds of thousands of ringgit of their hard-earned money,” he said.

The briefing is part of the commitment of KKMM to create public awareness on cyber crimes through education and promotion and publicity campaigns.

Shakib said that according to the Commercial Crime Investigation Department, 13,058 cheating cases were reported in 2017 compared to 10,394 last year.

“I was told that telecommunication fraud is the most common form of (cyber) crime in Labuan with 16 complaints in 2017 and 19 complaints last year, a 35 per cent increase,” he said.

Shakib said the ministry would continue to cooperate with its strategic partners like the media, police, the Malaysian National News Agency (Bernama) and Information Department to combat the menace. – Bernama

Source | https://www.nst.com.my/news/crime-courts/2019/04/482208/rm676-million-lost-cyber-crimes-q1-2019

MORE DEDICATED CYBER-SECURITY STAFF NEEDED IN HEALTHCARE INDUSTRY

  • Industry that deals with copious amounts of personal, exploitable data
  • Organisation-wide education and awareness are crucial

AS THE adoption of digital technology in the healthcare industry accelerates, there is an increasing need to protect another side of patients’ and healthcare organisations’ well-being – the security of their personal data.

This emphasis on protecting data and mitigating cyber-threats is reflected in the industry’s significant investment into cyber-security.

According to a recent survey by Palo Alto Networks, about 70% of healthcare organisations in Asia-Pacific say that 5% to 15% of their organisation’s IT budget is allocated to cyber-security.

However, despite substantial budgets, the healthcare industry appears to lag behind its peers in cyber-security talent: only 78% of healthcare organisations have a team dedicated to IT security, the lowest proportion among the industries surveyed and well below the industry-wide average of 86%.

“As an industry that deals with copious amounts of personal, exploitable data, it can be disastrous if this data enters the wrong hands.

“Healthcare organisations need to ensure they are always updated on new security measures, and change their mindset from a reactive approach to a prevention-based approach instead, akin to how they remind patients that prevention is better than cure,” says Sean Duca, vice president and regional chief security officer for Asia-Pacific, Palo Alto Networks.

Risk factors

Aside from the monetary losses associated with data breaches and threats to the availability of connected devices that monitor patients’ lives, healthcare professionals are most worried about the loss of clients’ contact, financial or medical information – 30% cited the loss of such details as their key concern.

Fear of damaging the company’s reputation among clients comes next at 22%, followed by 17% citing company downtime while a breach is being fixed as a concern.

Cyber-security risks in healthcare organisations are also amplified with BYOD (Bring Your Own Device), with 78% of organisations allowing employees to access work-related information with their own personal devices such as their mobile phones and computers.

In addition to this, 69% of those surveyed say they are allowed to store and transfer their organisation’s confidential information through their personal devices.

While 83% claimed there are security policies in place, only 39% admit to reviewing these policies more than once a year – lower than the 51% of respondents from the finance industry, a sector also known to hold sensitive client data.

Call to get in shape for the future

As more healthcare organisations fall prey to cyber-attacks, such as ransomware, a lapse in data security is a real threat to the industry, hence organisation-wide education and awareness are crucial towards ensuring that the right preventive measures are implemented and enforced.

Fifty-four percent of respondents cited an inability to keep up with evolving solutions as a barrier to ensuring cyber-security in their organisations, and 63% identified ageing internet infrastructure as the likely main cause of cyber-threats, should they happen.

Here are some tips for healthcare organisations:

Ensure that medical devices are equipped with up-to-date firmware and security patches to address cyber-security risks. Medical devices are notoriously vulnerable to cyber-attacks because security is often an afterthought when the devices are designed and maintained by the manufacturer. Precautionary measures may include keeping an inventory of all medical devices, assessing the network architecture, determining a patch management plan for medical devices, and developing a plan to migrate medical devices to a dedicated network segment.

Apply a zero-trust networking architecture for hospital networks, making security ubiquitous throughout, not just at the perimeter. Healthcare organisations should look to segment devices and data based on their risk, inspect network data as it flows between segments, and require authentication to the network and to any application for any user on the network.
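
To make the zero-trust idea concrete, here is a minimal, illustrative sketch of the kind of default-deny access decision such an architecture relies on. The segment names, roles and policy table are hypothetical examples, not drawn from the article or any particular product.

```python
# Minimal zero-trust access-decision sketch (illustrative only; segment names
# and the policy table below are hypothetical, not from the article).

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    authenticated: bool          # has the user authenticated to the network?
    app_authenticated: bool      # ...and to the specific application?
    source_segment: str          # network segment the request originates from
    target_segment: str          # segment hosting the data or device

# Which segment-to-segment flows are allowed at all; everything else is denied.
ALLOWED_FLOWS = {
    ("clinician_workstations", "ehr_systems"),
    ("ehr_systems", "medical_devices"),
}

def decide(req: AccessRequest) -> bool:
    """Default-deny: every request must be authenticated at the network AND
    application layer, and the segment-to-segment flow must be whitelisted."""
    if not (req.authenticated and req.app_authenticated):
        return False
    return (req.source_segment, req.target_segment) in ALLOWED_FLOWS

# Example: an authenticated device on guest WiFi still cannot reach an infusion pump.
print(decide(AccessRequest("visitor", True, True, "guest_wifi", "medical_devices")))  # False
```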

Practices such as BYOD and some employees’ ability to store and transfer confidential information through their personal devices put them at a higher risk of phishing attacks. To prevent this, healthcare providers should ensure that staff undergo regular end-user security training to reduce successful phishing. Cyber-security best practices can be taught as a new hire class for every employee.

As healthcare organisations migrate portions of their critical infrastructure and applications to the cloud, it becomes imperative for an advanced and integrated security architecture to be deployed to prevent cyber-attacks on three-prongs: the network, the endpoint and the cloud. Traditional antivirus will not be effective in guarding against advanced malware such as ransomware which continuously changes to avoid detection.

Source | https://www.digitalnewsasia.com/digital-economy/more-dedicated-cyber-security-staff-needed-healthcare-industry

Top 10 operational risks for 2019

The biggest op risks for 2019, as chosen by industry practitioners

We present our annual ranking of the biggest op risks for the year ahead, based on a survey of operational risk practitioners across the globe and in-depth interviews with a selection of industry personnel. The risks are listed in order of magnitude of threat, with this year’s largest risk being data compromise.

#1: Data compromise  

The threat of data loss through cyber attack, combined with an awareness among managers that defences are vulnerable, has made data compromise a perennial concern for op risk practitioners of all stripes. But the advent of strict new data protection regulation has intensified those fears, helping propel the category to the top of our annual survey for the first time.

Collecting multiple datasets and storing them in one place presents a single, tempting target for hackers. Companies have responded by compartmentalising data and storing it across several locations in an effort to reduce the potential loss from a single breach.

“You have to assume hackers will get through, and what do you do then? It can be just making sure you are storing data in several places, splitting your data so [hackers] getting into one file won’t get what they need,” says one senior risk practitioner.
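
As a toy illustration of the “split your data across several places” idea the practitioner describes, the sketch below stores different fields of one record in separate stores linked only by an opaque identifier, so a breach of any single store yields only a fragment. The field groupings and store names are invented; a real design would also encrypt each shard and keep the keys elsewhere.

```python
# Toy sketch of splitting one record across separate stores so that one
# breached store is not enough to reconstruct it (illustrative only).

record = {
    "customer_id": "C-1001",
    "name": "Jane Doe",
    "card_number": "4111 1111 1111 1111",
    "diagnosis": "hypertension",
}

# Keep identity data, payment data and sensitive attributes in separate stores,
# linked only by an opaque identifier.
stores = {
    "identity_store": {k: record[k] for k in ("customer_id", "name")},
    "payment_store":  {"customer_id": record["customer_id"], "card_number": record["card_number"]},
    "clinical_store": {"customer_id": record["customer_id"], "diagnosis": record["diagnosis"]},
}

# A breach of any single store exposes only a fragment of the full record.
for name, shard in stores.items():
    print(name, shard)
```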

The EU’s General Data Protection Regulation (GDPR), introduced in May 2018, aims to tighten consumer safeguards around data disclosure. No prosecution has yet used the full scope of penalties – the regulation allows a fine of up to 4% of global revenue – but companies are wary of a sizeable additional loss associated with, for example, a major data breach due to negligence.

Other areas of GDPR may have attracted less attention, but still pose significant potential sources of operational risk. Companies must provide customers with access to their own data, including the ability to correct or erase it in some cases; and they must report a data breach within 72 hours.

New regulations are also offering up enticing targets for hackers, though: their targets are broadening beyond financial services firms to encompass intermediaries and even the official sector. For example, the EU’s Mifid II markets regime requires trading platforms and investment firms to collect personal information on the counterparties to every trade – not just a potential privacy issue, but a new and worrying point of entry to would-be hackers. As the data is passed from firm to platform and from platform to regulator, it becomes exposed to attack.

Some banks are taking advantage of the new market in cyber crime to adopt a more proactive defence strategy. Cyber criminals use the unindexed “dark” web to offer stolen data for sale. By monitoring this black market, institutions may gain advance warning of attacks, or even discover stolen data whose theft had gone unnoticed.

An active defence should also include penetration testing, both online and physical. Often the critical weakness in a cyber security plan sits, as IT managers put it, between chair and keyboard.

In a landmark case in October 2018, US authorities fined fund manager Voya Financial $1 million after a security breach allowed hackers to steal the personal details of thousands of customers. The hackers gained access by making repeated phone requests for password changes, pretending to be Voya subcontractors. Resetting the passwords was explicitly banned by Voya’s policies, but its employees did it nonetheless.

#2: IT disruption

Cyber attacks conjure images of masked figures gaining access to the IT network of a company or government and making away with millions, yet the reality is often more prosaic. Malware designed merely for nuisance value can cripple firms’ operations, while the origin of attack is often not rogue criminal but state entity: the WannaCry and NotPetya ransomware events of 2017 were widely attributed to state-sponsored sources.

“Hackers are more organised and some countries have malicious, not criminal intent,” says an operational risk consultant. “They might not get anything out of it apart from bringing systems down and causing disruption.”

The past year has not seen as many high-profile disruptive cyber attacks as the previous one, which may go some way to explaining why IT disruption slips to second place in Risk.net’s 2019 survey.

However, risk experts still see cyber attacks as an ever-present menace.

Distributed denial of service (DDoS) is one of the most common forms of attack. DDoS data from two security specialists provides a conflicting picture: Kaspersky Lab reports a decline in overall attacks by 13% from 2017 to 2018. Corero says that among its customers, the number of events in 2018 was up 16% year-on-year.

Banks remain vulnerable, even the largest. In April 2018, it was revealed that a co-ordinated DDoS attack had disrupted services at seven major UK lenders, including Barclays, HSBC, Lloyds and RBS. The National Crime Agency and international partners responded by shutting down a website linked to the attacks that offered DDoS services for a small fee.

As banks shift more of their retail and commercial activity online, a growing fear is that a widespread cyber event could cripple an institution’s activity. Dwindling branch networks are reducing the “hard” infrastructure that lenders could previously rely on to maintain essential services.

“Banks may be taking channels offline as firms move away from the high street and close their branches,” says the head of operational risk at a bank. “So one route they have which offers them a certain type of resilience may not be there in a few years’ time and they may be wholly dependent on the digital side.”

#3: IT failure

Though usually overshadowed by its attention-grabbing cousin – the threat of a cyber attack – the risk of an internal IT failure is never far off risk managers’ minds. When such failures happen, their financial, reputational and regulatory consequences can easily rival the damage from high-profile data theft.

It is probably no coincidence that the danger of a self-imposed IT debacle is the third-largest operational risk in 2019’s survey: it follows a year in which a botched system migration cost UK bank TSB more than £300 million ($396 million) in related charges and an unknowable sum in lost customers.

And it’s a risk that is only likely to grow in importance, op risk managers acknowledge: “The more we interconnect, the more we have online banking and direct [digital] interaction between our clients and ourselves – the more IT structures can be disrupted,” says a senior op risk executive at a major European bank, summing up a view expressed by several risk managers.

The Basel Committee on Banking Supervision is co-ordinating various national and international efforts to improve cyber risk management. Last year it set up the Operational Resilience Working Group – its first goal has been “to identify the range of existing practice in cyber resilience, and assess gaps and possible policy measures to enhance banks’ broader operational resilience going forward”, the committee said in a November 2018 document.

On a national level, operational resilience – including against IT failures – is an area of focus for the Bank of England. The central bank defines it as “the ability of firms and the financial system as a whole to absorb and adapt to shocks”. In July, it published a joint discussion paper on operational resilience with the UK’s Prudential Regulation Authority and Financial Conduct Authority.

Speaking at the OpRisk Europe conference in June, the PRA’s deputy chief executive Lyndon Nelson said: “It is likely that the [BoE] will set a minimum level of service provision it expects for the delivery of key economic functions in the event of a severe but plausible operational disruption.”

#4: Organisational change  

Organisational change – sometimes called ‘strategic execution risk’ – refers to the grab bag of things that can go sideways in the midst of any transition: switching from an old system to a new one, pursuing new strategic objectives, adjusting to new management structures, errors or just plain bad decisions, and so on.

The catalyst can come from any number of directions – mergers or acquisitions, divisional reorganisations, a strategic change in business mix. Unfortunately for financial firms, none of these are mutually exclusive – most are largely unavoidable.

Banks and buy-side firms are subject to the currents of consumer taste and the need to keep pace with rivals. Often, firms might be prompted into action by a shift in the nature of the threats they face: witness cyber risk’s long journey from the domain of IT to the risk team.

New regulation may also force change, requiring a company to divert resources, redeploy personnel or create new departments entirely – as in the case of the Fundamental Review of the Trading Book, for instance.

Problems arising during technology upgrades or changes are perhaps the most often mentioned risks in this threat category. But geopolitical rumblings can add to the difficulties in changes to a hierarchy or embarking on a new business strategy, says one risk professional. One senior op risk consultant says the atmosphere it produces can lead to dangerous operational mis-steps.

Brexit will soon probably provide many such examples. With a disorderly exit by the UK from the European Union this month almost a certainty, banks and brokers are setting up new entities on mainland Europe at a breakneck speed that almost guarantees problems – some as simple as staffing up and resource management.

“With political and economic risk increased, especially by Brexit, the time available to handle change is squeezed,” says the consultant. “That leads to potential errors in execution.”

#5: Theft and fraud

Despite slipping a place on this year’s list, theft and fraud is still many operational risk managers’ worst nightmare. The idea of a massive heist by enterprising hackers, mercenary employees or plain old bank robbers, possibly followed by fines and penalties, keeps the category near the top of the op risk survey year after year.

Inside jobs made up the top three of 2018’s biggest publicly reported op risk losses: Beijing-based Anbang Insurance lost a shattering $12 billion to embezzlement; in Ukraine, $5.5 billion vanished from PrivatBank in a ‘loan-recycling’ scheme; and in New Delhi, the Punjab National Bank lost $2.2 billion to wayward employees working with a fugitive diamond dealer.

These top losses were the result of old-fashioned crimes in the emerging world. At US and European banks though, it’s the cyber component of theft and fraud that looms large – despite the absence of even a single incident on the top 10 list.

“You can commit theft and fraud anonymously. You can go multicurrency, bitcoin,” comments a senior operational risk executive who says theft and fraud make up the biggest loss at the North American bank where he works. “You can be on the other side of the world, funds in hand, before anyone realises the money is missing.”

According to ORX News, the total of publicly reported losses attributable to cyber-related data breaches and instances of fraud and business disruption was $935 million worldwide in financial services last year. Over half those incidents involved fraud.

Cyber fraud comes generally in one of two sorts: one sows chaos, then grabs data en masse in the ensuing turmoil; the other zeros in on individuals to drain their accounts.

A large-scale attack could consist of millions of small transactions, like a $1 charge on a credit card, each likely unnoticed by the cardholder. In a targeted attack, thieves try to pry loose enough data from a customer’s social media persona to get access to their bank account. Other, more sophisticated schemes look for the weak points in authentication systems like biometrics. Some apps, for instance, can replicate a person’s voice patterns and fool voice ID systems.

“Equifax taught us that you need to move away from knowledge-based authentication to more activity-based identification,” says an op risk head at a second North American bank – for instance, asking people what their last two transactions were. In 2017, hackers stole data such as names, birthdates and Social Security numbers on nearly 148 million people from Equifax’s online systems.
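
A minimal sketch of such an activity-based challenge might look like the following; the account data, matching rules and tolerances are invented purely for illustration.

```python
# Illustrative activity-based challenge ("name your last two transactions")
# in place of knowledge-based questions. Data and matching rules are invented.

recent_transactions = {
    "acct-42": [("GROCERY MART", 54.20), ("COFFEE BAR", 4.75), ("GAS STATION", 40.00)],
}

def verify_by_activity(account, claimed):
    """claimed: list of (merchant, amount) pairs the caller names.
    Accept only if they match the two most recent transactions on record,
    with a small tolerance on the amounts."""
    actual = recent_transactions.get(account, [])[:2]
    if len(claimed) != 2 or len(actual) != 2:
        return False
    return all(
        cm.upper() == am and abs(ca - aa) < 1.0
        for (cm, ca), (am, aa) in zip(claimed, actual)
    )

print(verify_by_activity("acct-42", [("grocery mart", 54.20), ("coffee bar", 4.75)]))   # True
print(verify_by_activity("acct-42", [("grocery mart", 54.20), ("gas station", 40.00)]))  # False
```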

#6: Outsourcing and third-party risk

Outsourcing key infrastructure or services to third parties is a tantalising prospect for many firms. The incentive is to harness the expertise of specialist providers, or to save costs. Or, ideally, a combination of the two.

The trade-off for many risk managers is a lingering concern about losing oversight of vital business functions. The prevalence of breaches via third parties and growing regulatory scrutiny of this area, not to mention the build-up of risk in certain systemically important platforms, are the focus of anxiety.

“If cloud platforms are correctly configured, they can enhance security, as well as creating efficiencies and reducing costs for customers,” says a UK cyber insurance executive. “However, if there was an incident that took down a cloud provider such as AWS or Azure, or a component part of the cloud infrastructure, this could cause an outage for thousands of individual companies.”

Regulators are zeroing in on outsourcing risk, too. The European Banking Authority (EBA) finalised outsourcing guidelines in February 2019, with a view to providing a single framework for financial firms’ contracts with third and fourth parties.

Financial institutions are also concerned about their reliance on crucial financial market infrastructure such as trading venues and clearing houses. Unlike IT or payroll systems, these are services that are difficult if not impossible to replicate in-house – as banks have tried to do with some troublesome vendor relationships.

Successful trading venues and clearing houses typically achieve a critical mass of liquidity that makes it very difficult for viable competitors to thrive. Without a credible threat to leave CCPs, banks lack the leverage to persuade the service providers to supply information on data or cyber security practices that might allow risk managers to properly assess threats.

#7: Regulatory risk

This year, the usual complement of regulation plus roiling new issues placed regulatory risk in seventh position on the list.

Chief among shifting regulatory expectations, anti-money laundering (AML) compliance has taken centre stage since the Danske Bank Estonian episode came to light in 2017. As much as €200 billion ($226.1 billion) in ‘non-resident’ money coursed through Danske’s modest Tallinn branch from 2007 to 2015.

Danske’s chief and chairman were ousted. The Danish financial regulator has imposed higher capital requirements, and the US Department of Justice has begun a criminal investigation. The EBA is looking into whether regulators in Denmark and Estonia were remiss. Estonia has ordered Danske to shut the branch.

“On AML, there are huge regulatory expectations there,” says one operational risk executive at an international bank. “We have a huge programme in the group to try and comply with their requirements.”

Elsewhere, changes to data protection legislation present their own matrix of requirements for banks spanning continents, beginning with the EU’s GDPR.

“There are so many privacy regulations that raise issues from a regulatory risk standpoint. It’s a patchwork of regulations at the state and federal levels,” says an operational risk executive at one North American bank.

Banks are also warily eyeing further regulatory intervention from the Basel Committee on operational resilience – a broad initiative that sets out regulators’ expectations on a number of business continuity topics, including a minimum response time to return to normal operations after a platform outage.

#8: Data management

A conversation with any op risk manager will land, sooner or later, on the issue of data management. It could be concerns about data quality, particularly of historical data stored on legacy systems, which carries with it problems such as format and reliability. Or it could be the risk of missteps when handling customer data – inappropriate checks on storage, use or permissioning – that now come with the added threat of eye-watering fines from regulators.

Taken together, it’s no surprise that data management has made it into the top 10 op risks as a discrete risk category for the first time this year. It is considered separately from the threat of data compromise, where data breaches share the common driver of a malicious external threat.

Much of the impetus behind firms’ drive to beef up standards around the storage and transfer of personal data stems from the tightening of regulatory supervision on data privacy and security around the world – most obviously GDPR. Firms operating within the EU or holding data on EU citizens – which puts just about every firm around the world in scope, to some degree – may be heavily fined for falling foul of the regime, for instance, by failing to explicitly gain consent from individuals to retain and use their data.

As data management and compliance headaches multiply, the financial sector is pushing to use machine learning to augment the modelling of everything from loan approvals to suspicious transactions. In a sense, these methods offer a way to reduce human error. However, dealers have acknowledged that machine learning models’ predictive power leaves them open to potentially unethical biases, such as inadvertently discriminating against certain customer groups because the bank’s historical data on similar customers shows a higher risk of non-payment.
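
One very simple check a model owner might run for the kind of bias described above is to compare outcomes across customer groups, for example against the common “four-fifths” rule of thumb. The sketch below is illustrative only; the data and threshold are invented, and real fairness reviews use far richer metrics and legal guidance.

```python
# Illustrative fairness spot-check: compare a model's approval rates by group.
# Groups, decisions and the 80% threshold are invented for the example.

def approval_rates(decisions):
    """decisions: list of (group_label, approved_bool) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(decisions)
# Flag for review if the lowest approval rate falls below 80% of the highest
# (the "four-fifths" rule of thumb).
needs_review = min(rates.values()) < 0.8 * max(rates.values())
print(rates, "needs review:", needs_review)
```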

Poor data management has consequences for everyday compliance exercises, such as filling in mandatory quarterly risk control self-assessment forms to the satisfaction of regulators. Banks “are missing robust data management processes to ensure that data is reliable, complete and up to date, and that reports can be generated [in a timely manner]”, the head of op risk at one Asian bank tells Risk.net.

#9: Brexit

Brexit covers such a wide range of possible risk events that some participants in this year’s survey disputed whether it should be included as a standalone chapter at all; but a significant number argued strongly that it should, with its collective drivers likely engendering a common set of specific risks for banks and financial firms for years to come.

At the time of writing, the UK is a fortnight away from leaving the EU, although speculation about a delay ranging from two months to two years is growing. Nor is there any clarity on the state of the UK-EU relationship after the March 29 deadline. Anything from a long delay or a cancellation to an abrupt “no-deal” crash exit remains possible; this may have changed by lunchtime on the day this article is published.

Many financial firms whose business is affected by Brexit have given up waiting for lawmakers to finalise negotiations over the terms of the split and are pushing ahead with contingency plans. Banks and brokers are setting up new entities in mainland Europe, a process that is fraught with operational risk, particularly given the accelerated timescale for its completion.

Third-party risk from new supplier relationships; legal risk from repapering numerous financial contracts; people risk from hiring and training new personnel; these and other effects of the relocation will put additional strain on the operational resilience of companies.

Particularly in the case of a Brexit with no deal, industry practitioners fear a general increase in stress on almost every aspect of operations. One survey respondent points out: “If you have a hard Brexit, how resilient are your operation processes in terms of new requirements? If you think about it, overnight you go into new tariff regimes. So you have a portfolio with every operational risk you’ve ever seen.”

#10: Mis-selling

Mis-selling drops a few places on this year’s top 10 op risks, a reflection – or perhaps a shared hope among risk managers – that the era of mega-fines for crisis-era misdeeds among US and European banks might finally be over. They would do well to check their optimism, however: as the recent public inquiry into Australia’s financial sector that has excoriated the reputation of the nation’s banks shows, another mis-selling scandal is never far away.

Firms have shelled out a scarcely credible $607 billion in fines for conduct-related misdemeanours since 2010, the bulk of it in fines and redress over mis-selling claims. The heaviest losses came in 2011 and 2012, when the bulk of the fines – covering everything from residential mortgage to payment protection insurance (PPI) mis-selling – were concentrated.

The cumulative impact of fines and settlements has taken a huge toll on bank capital: as a recent Risk Quantum analysis shows, op risk now accounts for a third of risk-weighted assets (RWAs) among the largest US banks, while UK lenders still face hefty Pillar 2 capital top-ups from the Bank of England, largely as a result of legacy conduct issues.

Under the advanced measurement approach to measuring op risk capital which most US banks use, sizeable op risk losses can heavily skew a model’s outputs. But from a capital point of view, there are hopeful signs that with the severity and frequency of losses decreasing, RWAs are starting to see a gradual rolldown for most banks – though the US Federal Reserve has privately made clear it will not sign off any more changes to bank op risk models, leaving their methodologies frozen in time.

While Australia’s banks emerged relatively unscathed from the 2008 global financial crisis, they too are now feeling the sting of public ire following a series of mis-selling and conduct-related scandals, the first of which claimed the scalp of Commonwealth Bank of Australia chief executive Ian Narev last year, dealing a severe blow to the bank’s reputation.

The Royal Commission inquiry it helped spark had ramifications far beyond the bank. The fallout is still being felt, with National Australia Bank announcing on February 7 that its chief executive Andrew Thorburn and chairman Ken Henry would both step down.

Source | https://www.risk.net/risk-management/6470126/top-10-op-risks-2019

NSA might shut down phone snooping program, whatever that means!

The US National Security Agency (NSA) has created a boatload of buzz over the past few days with these two headline-makers:

First, a senior Republican congressional aide suggested over the weekend that the agency might be shuttering its phone metadata slurping program instead of renewing it in December (suppress your glee: the news is less encouraging for surveillance-averse citizenry than it appears at first blush) and….

…Second, by releasing Ghidra, a free software reverse engineering tool that the agency had been using internally for well over a decade.

First, the political cat-and-mouse game:

Will the USA Patriot Act really die?

News of the NSA potentially killing off its mass phone-spying program – exposed by whistleblower Edward Snowden in 2013 – came on Saturday in the form of a Lawfare podcast interview with Luke Murry, national security advisor to House minority leader Kevin McCarthy.

At 5 minutes in, Murry said that the NSA hasn’t been using its metadata collecting system for spying on US citizens for the past six months, due to “problems with the way in which that information was collected, and possibly collecting on US citizens.” The program is due for Congressional reauthorization in December 2019, but Murry suggested that the administration might not bother:

I’m not actually certain that the administration will want to start that back up given where they’ve been in the last six months.

News outlets jumped on the notion that the NSA might end a widely disliked spying program: one that courts have dubbed illegal, that privacy advocates have protested, and which legislators have filibustered against, given that it indiscriminately snoops on America’s own citizens.

If you’re wondering which spying program Murry was talking about, join the club. Was it the USA Patriot Act, whose Section 215 supported the NSA’s bulk collection of telephone records, which resulted in the agency having collected the phone records of millions of US persons not suspected of any crime? Or was it the USA Freedom Act, signed into law in 2015 as what was supposed to be a way to clip the NSA’s powers?

Section 215 expired at the end of May 2015 but was re-enabled through to the end of 2019 via the USA Freedom Act, which passed the following month, as well as being extended via various other legal maneuvers.

In the interview with Lawfare, Murry muddled the two laws. When asked about national security topics coming up this year, he said:

One which may be must-pass, may actually not be must-pass, is Section 215 of USA Freedom Act, where you have this bulk collection of, basically metadata on telephone conversations – not the actual content of the conversations but we’re talking about length of call, time of call, who’s calling – and that expires at the end of this year.

Again, Section 215 is actually from the Patriot Act. But whatever law Murry referred to, we shouldn’t be too excited by the notion that it will go away, because if history is any guide, it won’t. Rather, it will likely be reinterpreted and spring up in a new form. The Register has done a thorough rundown of how the NSA works that, and it’s well worth a read.

For example, Section 215 goes far beyond authorizing the collection of phone metadata, but the truth is that the secretive NSA hasn’t told us about the other 97% of data collection it authorizes. From the Register:

In 2014, for example, there were 180 orders authorized by the US government’s special FISA Court under Section 215, but only five of them related to metadata; the rest cover, well, the truth is that we don’t know what they cover because it remains secret.

It could be that Section 215 covers collection of emails and instant messages, search engine searches, and video uploads, for example. The law says that the NSA can collect “tangible things”, which could mean just about anything.

After the blanket surveillance program was reauthorized in 2015, the Office of the Director of National Intelligence (ODNI) issued an official statement that sure did sound good: the NSA would stop analyzing old bulk telephony metadata and start deleting it. What it would shift to, the DNI said, was the Freedom Act’s new, “targeted production” of records.

It turns out that the phone data collection didn’t stop, however. In a June 2018 statement, the ODNI said that the NSA had begun deleting all the call detail records that it had gotten its hands on – after that new, “targeted” approach.

The NSA blamed “technical irregularities in some data received from telecommunications service providers” for the junking of the phone records – problems that, it promised, had been resolved, clearing the way for yet more future records collection.

Murry said the program never got rebooted, though, and that he doesn’t believe it will. This undoubtedly has something to do with strenuous efforts by two US senators, Ron Wyden and Rand Paul, who’ve both been waging war against the NSA’s spying.

During their wrangling, which has gone on for over a year and has focused on getting more control of Section 702 of the Foreign Intelligence Surveillance Act (FISA), the NSA has avoided answering Rand’s questions (PDF), such as whether the NSA is collecting domestic communications. It’s also gotten creative with coming up with secret interpretations of the law.

The Register suggests that the fact that the public only knows about the telephone metadata aspects of the far broader Section 215 could be an advantage to the NSA, as it continues to find ways to keep getting the data it wants. From the Register:

If the NSA offers to give up its phone metadata collection voluntarily, it opens up several opportunities for the agency. For one, it doesn’t have to explain what its secret legal interpretations of the law are and so can continue to use them. Second, it can repeat the same feat as in 2015 – give Congress the illusion of bringing the security services to heel. And third, it can continue to do exactly what it was doing while looking to everyone else that it has scaled back.

On a far more security-crowd-pleasing note, there’s the NSA’s release of Ghidra:

Ghidra

The NSA released Ghidra, a software reverse engineering tool, at the RSA security conference on Wednesday. It marked the first public demonstration of the tool, which the agency has been using internally and which helps to analyze malicious code and malware and to track down potential vulnerabilities in networks and systems.

ZDNet, reporting from the conference, said that the NSA’s plan is to get security researchers comfortable working with the tool before they apply for government cybersecurity positions, be those jobs at the NSA or at the other government intelligence agencies with which the NSA has privately shared Ghidra.

At this point, Ghidra is available for download only through its official website, but the NSA also plans to release its source code under an open source license on GitHub.

The initial reviews have been, overall, positive, in large measure because “free” is a lot cheaper than the alternative tool, IDA Pro. The commercial license for IDA Pro costs thousands of US dollars per year.

If you haven’t tried out Ghidra yet, you can get more information on the official website or on the GitHub repo.

Source | https://nakedsecurity.sophos.com/2019/03/07/nsa-might-shut-down-phone-snooping-program-whatever-that-means/

5 Important Augmented And Virtual Reality Trends For 2019 Everyone Should Read

Alongside AI and automation, virtual reality (VR) and its closely related cousin augmented reality (AR) have been touted for several years now as technologies likely to have a profoundly transformative effect on the way we live and work.

Solutions that allow humans to explore fully immersive computer-generated worlds (VR), or overlay computer graphics onto our view of our immediate environment (AR), are increasingly being adopted in both entertainment and industry.

Over the next year, both VR and AR applications will become increasingly sophisticated, as devices get more powerful and capable of creating higher quality visuals. Our understanding of how humans can usefully navigate and interact within virtual or augmented environments will also evolve, leading to the creation of more “natural” methods of interacting and exploring virtual space.

Here are the 5 key trends I see for 2019:

1.  AR and VR increasingly enhanced with AI

In a collision of two-letter abbreviations unlike anything that has come before it, AR and VR developers will increasingly build smart, cognitive functionality into their apps.

Computer vision – an AI (artificial intelligence) technology which allows computers to understand what they are “seeing” through cameras – is essential to the operation of AR, allowing objects in the user’s field of vision to be identified and labeled. We can expect the machine learning algorithms that enable these features to become increasingly sophisticated and capable.
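
A stripped-down sketch of the detect-and-label step behind AR object labelling is shown below; the detector is a stub returning hard-coded results, standing in for whatever trained vision model a real AR app would run on each camera frame.

```python
# Illustrative detect-and-label step for AR object labelling.
# The detector here is a stub; a real app would call a trained vision model.

from typing import List, NamedTuple

class Detection(NamedTuple):
    label: str
    confidence: float
    box: tuple  # (x, y, width, height) in pixels

def detect_objects(frame) -> List[Detection]:
    # Stand-in for a trained computer-vision model run on one camera frame.
    return [Detection("cup", 0.91, (120, 200, 60, 80)),
            Detection("laptop", 0.87, (300, 150, 220, 140))]

def overlay_labels(frame, min_confidence=0.5):
    """Return the (label, box) pairs an AR renderer would draw over the frame."""
    return [(d.label, d.box) for d in detect_objects(frame) if d.confidence >= min_confidence]

print(overlay_labels(frame=None))
```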

The Snapchat and Instagram filters we are used to – overlaying bunny ears and cat whiskers on selfies, for example – are a very consumer-facing application of AI tech combined with AR. Their popularity, in these and various other image enhancement applications, isn’t likely to dwindle in 2019.

For more scientific use cases, there’s Google’s machine learning-enabled microscope to look forward to, which can highlight tissue which it suspects could be a cancerous tumor growth as a pathologist is looking at samples through the viewfinder.

VR is about putting people inside virtual environments and those environments – and their inhabitants – are likely to become increasingly intelligent over the next year. This is likely to include more voice control stemming from AI natural language processing, increasing immersion by reducing the reliance on icons and menus intruding into the virtual world. Gamers in VR will also face more challenging opponents as computer-controlled players will more effectively react and adapt to individual play styles.

2.  VR and AR will increasingly be used in training and teaching

Both technologies have obvious use cases in education. Virtual environments allow students to practice anything from construction to flight to surgery without the risks associated with real-world training, while augmented environments mean information can be passed to the student in real time on objectives, hazards or best practice.

This year Walmart announced that it is using 17,000 Oculus Go headsets to train its employees in skills ranging from compliance to customer service. In particular, training in the use of new technology is a focus for the retailer, with staff learning to use the new Pickup Tower automated vending units in virtual environments before they were deployed in stores.

Additionally, the US Army has announced a deal with Microsoft to use its HoloLens technology in military training, meaning soldiers will get real-time readings on their environment. Currently, this includes readouts to provide real-time metrics on soldier performance such as data about heart and breathing rates, but research objectives are to develop pathfinding, target acquisition and mission planning.

As VR and AR both continue to prove their worth at reducing risk and cost associated with training, it is likely we will see an increasingly rapid pace of adoption in industries involving work with expensive tools and equipment, or hazardous conditions, throughout 2019.

3.  Consumer Entertainment VR hits the mainstream

Ok, this one has been predicted for a couple of years now. VR adoption in homes has been steady since consumer headsets hit the market a couple of years ago, but hardware and application developers haven’t quite hit the sweet spot yet when it comes to creating the VR “killer app.”

But some significant developments are coming up that could mean 2019 is the year we start to see the real action here. Previous generations of VR headsets have been limited in one of two ways. Either by the user having to be tethered to a big, expensive computer to power the “experience,” hence limiting our mobility and therefore the sense of immersion. Or by relying on relatively low-powered mobile tech to control stand-alone headsets, meaning graphics quality is limited – another immersion-breaker.

This year, stand-alone headsets incorporating powerful, dedicated computer technology will hit the shelves, from both Vive and Oculus. Confident that their users will now be unrestricted by cables or low-powered displays, VR developers will create more realistic and accurate simulations of our real world within their virtual worlds. This will mean more immersive entertainment experiences and an unprecedented level of realism within VR games.

As well as being mobile, the new generation of headsets will improve the technology powering the virtual experience, by including features such as eyeball-tracking and increased field-of-view. Again, this will help users feel they can interact and explore in more natural ways.

Of course, it isn’t just the major players who are innovating – in a market like VR there’s always room for an underdog to shake things up. Amazon lists over 200 different VR headsets available to buy, many of them being created by startups promising new features and functionality that could end up being game-changers.

4.  VR and AR environments becoming increasingly collaborative and social

Facebook’s purchase of Oculus in 2014 showed that the social media giant believed virtual reality would become vital to the way we build shared online environments – whether for virtual “conference calls” where participants can see and interact with each other, or for socializing and relaxing with friends.

Pioneers such as Spatial are leading the way with AR tools for the boardroom and office, where users can see virtual whiteboards and pin boards, as well as collaboratively work on design documents overlaid on real-world objects.

This year, I am also expecting to see Facebook’s VR Spaces platform, which allows users to meet and socialize in VR, move out of beta, and Tencent has announced that it is looking into adding VR to its WeChat mobile messaging system – the most widely used messenger app in the world.

Combined with the predicted increase in sales of VR and AR headsets, this could mean that 2019 is the year we experience meeting and interacting with realistic representations of our friends and family in VR, for the first time.

5.  AR increasingly finding its way into vehicles

Fully (level 5) autonomous cars may still be a few years away from becoming an everyday reality for most of us, but automobile manufacturers have plenty of other AI tech to dazzle us with in the meantime.

Two of the most significant trends in new vehicles in 2019 will be voice assistants – with most major manufacturers implementing their takes on Alexa and Siri – and in-car AR.

Powered by machine learning, Nvidia’s DriveAR platform uses a dashboard-mounted display overlaying graphics on camera footage from around the car, pointing out everything from hazards to historic landmarks along the way. Audi, Mercedes-Benz, Tesla, Toyota, and Volvo have all signed up to work with the technology.

Alibaba-backed startup WayRay takes the route of projecting the AR data directly onto the car windshield, giving navigation prompts, right-of-way information, lane identification, and hazard detection.

In-car AR has the potential to improve safety – by allowing the driver to keep their eyes on the road as they read feedback that would previously have been given on a sat-nav or phone screen, as well as increase comfort and driver convenience. In a few years, it’s likely we will wonder how we ever lived without it.

FOR MALAYSIA VIRTUAL REALITY/AUGMENTED REALITY EXPERTS/PROVIDER, Please refer here:

INTEGRASI ERAT SDN. BHD | Website http://www.i-erat.com/

Source | https://www.forbes.com/sites/bernardmarr/2019/01/14/5-important-augmented-and-virtual-reality-trends-for-2019-everyone-should-read/#73410c5b22e7

Cyber espionage warning: The most advanced hacking groups are getting more ambitious

The top 20 most notorious cyber-espionage operations have increased their activity by a third in recent years – and are looking to conduct more attacks, according to a security company.

The most advanced hacking groups are becoming bolder when conducting campaigns, with the number of organisations targeted by the biggest campaigns rising by almost a third.

A combination of new groups emerging and threat actors developing successful strategies for breaking into networks has seen the average number of organisations targeted by the most active hacking groups rise from 42 between 2015 and 2017 to an average of 55 in 2018.

The figures detailed in Symantec’s annual Internet Security Threat Report suggest that the top 20 most prolific hacking groups are targeting more organisations as the attackers gain more confidence in their activities.

Groups like Chafer, DragonFly, Gallmaker and others are all conducting highly targeted hacking campaigns as they look to gather intelligence against businesses they think hold valuable information.

Once attackers might have needed the latest zero-days to gain access to corporate networks, but now it’s spear-phishing emails laced with malicious content that are most likely to provide attackers with the initial entry they need.

And because these espionage groups are so proficient at what they do, they have well tried-and-tested means of conducting activity once they’re inside a network.

“It’s like they have steps which they go through, which they know are effective to get into networks, then for lateral movement across networks to get what they want,” Orla Cox, director of Symantec’s security response unit, told ZDNet.

“It makes them more efficient and, for organizations, it makes them harder to spot because a lot of the activity looks like traditional enterprise activity,” she added.

In many of the cases detailed in the report, attackers are deploying what Symantec refers to as ‘living-off-the-land’ tactics: the attackers use everyday enterprise tools to help them travel across corporate networks and steal data, making the campaigns more difficult to discover.

Not only is the number of targeted campaigns on the rise, but there’s a larger variety in the organisations being targeted. Organisations in sectors like utilities, government and financial services have regularly found themselves targets of organised cyber-criminal gangs, but increasingly, these groups are expanding their attacks to new targets.

“Often in the past they’d have a clear focus on one sector, but now we see these campaigns can focus on a wide variety of targets, ranging from telecoms companies, hotels, universities. It’s harder to pinpoint exactly what their end goal is,” said Cox.

While intelligence gathering remains the key goal of many of these campaigns, some are beginning to expand by also displaying an interest in compromising systems.

This is a particularly worrying trend because, while stealing data is bad enough in itself, attackers with the ability to interfere with cyber-physical systems could do far greater damage.

One group Symantec has observed conducting this activity is a hacking operation dubbed Thrip, which expressed particular interest in gaining control of satellite operations — something that could potentially cause major disruption.

In the face of a rise in targeted attacks, governments are increasingly pointing the finger not just at nations but individuals believed to be involved in cyber espionage. For example, the United States named individuals it claims are responsible for conducting cyber attacks: they include citizens of Russia, North Korea, Iran and China. Symantec’s report suggests the indictment might disrupt some targeted operations, but it’s unlikely that cyber espionage campaigns will be disappearing anytime soon.

Source | https://www.zdnet.com/article/cyber-espionage-warning-the-most-advanced-hacking-groups-are-getting-more-ambitious/

Russia is set to DISCONNECT from the internet temporarily as part of preparations for a potential cyber war

  • Brief test ‘disconnecting’ Russia from the internet set to take place before April 1
  • Reports claim move is part of preparations for a potential cyber-war in the future
  • Russia has been accused of carrying out a series of cyber-attacks in recent years, prompting NATO and its allies to threaten the country with sanctions 

Russia is set to disconnect from the internet temporarily as part of preparations for a potential cyber-war in the future, it has been claimed.

The test – set to take place before April – will see data passing between organisations and Russian citizens remain inside the country instead of being routed internationally.

It comes after a law was introduced to Russia’s parliament last year mandating technical changes required to allow Russia’s internet to operate independently.

April 1 has reportedly been set as the deadline for submitting amendments to the draft law – dubbed the Digital Economy National Program – but the timing of the test has yet to be set in stone, it has been reported.

Under the law, Russia’s internet service providers (ISPs) would be required to ensure the independence of the country’s Runet internet space should foreign powers attempt to isolate the nation online.

Russia has been accused of carrying out a series of cyber-attacks in recent years, prompting NATO and its allies to threaten sanctions.

The country’s ISPs are said to be broadly supportive of the goals of the law but disagree over how it could be implemented.

There are, however, fears among the providers that such a test could also cause ‘major disruption’, according to ZDNet.

The law could also see Russia creating its own version of the internet’s address system, or DNS, with the idea being it could still operate if links to servers located abroad are disconnected.

A dozen organisations oversee the root servers for DNS – none of them based in Russia, the BBC reports.

In October, Britain publicly accused Russia’s military intelligence service of carrying out a campaign of reckless and destabilising cyber-attacks across the world.

Foreign Secretary Jeremy Hunt said the Kremlin had been working in secret to wage indiscriminate and illegal cyber-attacks on democratic institutions and businesses.

In a damning charge sheet, the Government firmly pinned the blame for a string of cyber-attacks on the GRU, the organisation also accused of poisoning double agent Sergei Skripal.

The Foreign Office said the National Cyber Security Centre had assessed with ‘high confidence’ that the GRU was ‘almost certainly’ responsible for multiple attacks which have cost economies millions of pounds.

It added: ‘Given the high confidence assessment and the broader context, the UK Government has made the judgment that the Russian government – the Kremlin – was responsible.’

Hacks included those on the governing body of the Democratic Party in the US, the World Anti-Doping Agency, metro systems and airports in Ukraine, Russia’s central bank and two Russian media outlets.

Source | https://www.dailymail.co.uk/news/article-6691735/Russia-set-DISCONNECT-internet-temporarily-preparations-potential-cyber-war.html?ito=social-facebook

Weaponised AI is coming. Are algorithmic forever wars our future?

The US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

Last month marked the 17th anniversary of 9/11. With it came a new milestone: we’ve been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where we’re officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of 2018.

The wars of 9/11 continue, with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

On Friday, the defense department closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (Jedi). Jedi is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as $10bn over 10 years, which is why big tech companies are fighting hard to win it. (Not Google, however, where a pressure campaign by workers forced management to drop out of the running.)

At first glance, Jedi might look like just another IT modernization project. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With some 3.4 million users and 4 million devices, the defense department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.

But the real force driving Jedi is the desire to weaponize AI – what the defense department has begun calling “algorithmic warfare”. By pooling the military’s data into a modern cloud platform, and using the machine-learning services that such platforms provide to analyze that data, Jedi will help the Pentagon realize its AI ambitions.

The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7bn. And in September, the Defense Advanced Research Projects Agency (Darpa), the Pentagon’s storied R&D wing, announced it would be investing up to $2bn over the next five years into AI weapons research.

So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons – Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario, and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.

But AI has already begun rewiring warfare, even if it hasn’t (yet) taken the form of literal Terminators. There are less cinematic but equally scary ways to weaponize AI. You don’t need algorithms pulling the trigger for algorithms to play an extremely dangerous role.

To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself isn’t particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.

The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy. But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?

This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries – a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of marine legend Smedley Butler, the forever war is one of the longest cons yet.

But the vagueness of the enemy also creates certain challenges. It’s one thing to look at a map of North Vietnam and pick places to bomb. It’s quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI – or, more precisely, machine learning – comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.

The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.

“We have analysts looking at full-motion video, staring at screens 6, 7, 8, 9, 10, 11 hours at a time,” says the project director, Lt Gen Jack Shanahan. Maven’s software automates that work, then relays its discoveries to a human. So far, it’s been a big success: the software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software on to the drones themselves, so they can locate targets in real time.

Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven. Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: sharper knives can help the military save lives.

In the case of weaponized AI, however, the knives in question aren’t particularly sharp. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms – algorithms that can’t recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?

But the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. In 2017 alone, the US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these don’t suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage”. Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.

Further, the line between civilian and combatant is highly porous in the era of the forever war. A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action”, even if they weren’t one of the targets. The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures” – which can be as vague as being a military-aged male in a particular area.

The problem isn’t the quality of the tools, in other words, but the institution wielding them. And AI will only make that institution more brutal. The forever war demands that the US sees enemies everywhere. AI promises to find those enemies faster – even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a (classified) machine-learning model associates with hostile activity. Call it death by big data.

AI also has the potential to make the forever war more permanent, by giving some of the country’s largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military. But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires like Jeff Bezos a powerful incentive to ensure the forever war lasts forever. Enemies will be found. Money will be made.

Source| https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military

 

CJ: Artificial intelligence for sentencing, virtual hearings, holograms in tomorrow’s courts

PUTRAJAYA, Jan 11 ― Malaysia’s courts will soon see judges using artificial intelligence when deciding on punishments for convicted criminals, and may even adopt the use of advanced technology such as virtual courtrooms and holograms, the Chief Justice said today.

Tan Sri Richard Malanjum was listing out the upcoming technology expected to be used to drive reforms in the Malaysian judiciary.

“I know it sounds too high up, but that is in the pipeline. So hopefully, one day we will be talking about the use of virtual court and hologram technology instead of video-conferencing,” he said in a speech at the opening of the legal year 2019.

A video shown to the audience at the event tagged the virtual court proposal under the heading “journey towards 2025”.

As for the use of artificial intelligence, Malanjum said a data-driven feature called “data sentencing” is currently being fine-tuned before it is introduced in the courts.

“This will help or guide judges and officers with the sentencing process so that there would be less disparity in sentences for the same offences when they come before the courts, which has always been a problem.

“So hopefully this will come out soon by the end of this month,” he said.

The public has at times compared the penalties meted out for similar offences, questioning for example if thefts involving items of smaller value were being punished more severely than those involving higher sums.
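The judiciary has not published how the “data sentencing” feature works; the description above suggests a statistical guide built from past cases rather than an automated decision. The sketch below is a minimal, hypothetical illustration of that idea: given an offence type and the value involved, it looks up comparable historical sentences and reports a median and interquartile range for the judge to consider. All case records, names and thresholds here are invented for illustration.

```python
# Illustrative sketch only: the judiciary has not described its "data sentencing"
# tool in technical detail. This assumes a simple statistical guide built from
# hypothetical historical records, suggesting a range rather than a decision.
from statistics import quantiles

# Hypothetical past records: (offence, value involved in RM, months' imprisonment)
PAST_CASES = [
    ("theft", 150, 2), ("theft", 300, 3), ("theft", 5000, 10),
    ("theft", 200, 2), ("theft", 8000, 14), ("theft", 450, 4),
]

def suggest_range(offence, value_rm, band=0.5):
    """Return (median, lower quartile, upper quartile) of sentences in comparable cases."""
    similar = [
        months for off, val, months in PAST_CASES
        if off == offence and abs(val - value_rm) <= band * max(val, value_rm)
    ]
    if len(similar) < 2:
        return None  # too few comparable cases to offer a guide
    q1, med, q3 = quantiles(similar, n=4)
    return med, q1, q3

# A theft involving about RM250 draws a guide band of roughly 2 to 4 months here.
print(suggest_range("theft", 250))
```

A tool along these lines would address the disparity concern by making the typical range for comparable cases visible, while leaving the actual sentence to the judge.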

“Technology is coming to the legal profession and we must embrace technology. There is no option, either we adapt or be dropped,” he added.

Malanjum also highlighted two other upcoming technological innovations that would serve to remind both lawyers and judges of approaching deadlines.

Lawyers no longer need to be fearful of forgetting to file the relevant court documents by the due dates with the introduction of the “auto-alert” system.

“Very soon, before June, the alert system will be introduced in peninsular Malaysia, this has already been introduced in Sabah and Sarawak.

“So if you have a case where you are supposed to file a defence, the system will alert you at least one day before due date that you have a defence or affidavit to file,” he said.
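The report does not say how the auto-alert system is built, only that it warns lawyers at least one day before a filing is due. As a rough illustration of the concept, the sketch below assumes a nightly job that scans a hypothetical list of pending filings and prints a reminder for anything due within the lead time; the case numbers and document names are made up.

```python
# Minimal sketch of a filing-deadline auto-alert, assuming a nightly job over a
# hypothetical list of pending filings; the court system's actual design is not public.
from datetime import date, timedelta

# Hypothetical pending filings: (case number, document, due date)
PENDING_FILINGS = [
    ("WA-22NCvC-1-2019", "statement of defence", date(2019, 1, 12)),
    ("WA-22NCvC-7-2019", "affidavit in reply", date(2019, 2, 3)),
]

def alerts_for(today, lead_days=1):
    """Return reminder messages for documents due within `lead_days` of today."""
    cutoff = today + timedelta(days=lead_days)
    return [
        f"Reminder: {doc} in case {case} is due on {due.isoformat()}"
        for case, doc, due in PENDING_FILINGS
        if today <= due <= cutoff
    ]

for message in alerts_for(date(2019, 1, 11)):
    print(message)
```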

Malanjum said judges will soon be using a colour-coded monitoring system where they will be alerted if they have pending judgments, jokingly saying they would not be able to “plead amnesia” when such pending judgments pile up.

Malanjum today listed the reforms that the judiciary has implemented in the past few months in response to recommendations by the Institutional Reforms Committee.

This includes a shift from the typical top-down approach in government departments to a collective management approach where the top four judges discuss and make decisions based on the majority view.

Malanjum said the Chief Justice now no longer chooses the panel of judges in Federal Court cases, to avoid perceived bias especially in high-profile cases, with the selection of judges instead done randomly via software under the e-balloting system.
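Beyond saying that panel selection is now random, the article gives no detail on the e-balloting software. A minimal sketch of the idea, with invented judge names, might draw a panel uniformly at random from the eligible pool while excluding any recused judges:

```python
# Illustrative sketch of random panel selection ("e-balloting"); the judiciary's
# actual software is not described beyond the fact that selection is random.
import random

# Hypothetical pool of eligible Federal Court judges
ELIGIBLE_JUDGES = ["Judge A", "Judge B", "Judge C", "Judge D",
                   "Judge E", "Judge F", "Judge G"]

def ballot_panel(panel_size, recused=frozenset()):
    """Draw a panel uniformly at random, skipping any recused judges."""
    pool = [judge for judge in ELIGIBLE_JUDGES if judge not in recused]
    if len(pool) < panel_size:
        raise ValueError("not enough eligible judges for the panel")
    return sorted(random.sample(pool, panel_size))

print(ballot_panel(5, recused={"Judge C"}))
```

Because no individual chooses the combination, a random draw of this kind removes the perception that panels are assembled to steer an outcome.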

The judiciary has also now formed a consultative committee together with representatives from the Attorney-General’s Chambers and the three Bars, with regular meetings held on matters related to the courts.

He said a queue management system has been introduced in the Palace of Justice where lawyers can wait for their turn outside of the courtroom, and can monitor when their cases would be called through the screen display in the canteen and via their mobile phones.

Malanjum said the online case management system had been in use in east Malaysia in past years, adding that this “e-review” system was introduced in the appellate courts.

He said lawyers no longer have to come to the appellate courts and face congested carparks just to get hearing dates, adding that this online system, which allows case management from the comfort of their offices or homes, will be extended to other courts by this March.

He envisioned a time when the court process would be “paper-less”, with lawyers being able to work from anywhere globally without renting an office and wielding a tablet instead of thick bundles of physical documents.

Noting the traffic jams between Kuala Lumpur and Shah Alam as well as rising flight ticket and hotel prices, Malanjum said video conferencing for court matters will be available in Kuala Lumpur, Shah Alam and Penang by the end of this month, with expansion to other areas soon.

Chief Judge of Sabah and Sarawak Datuk Seri David Wong Dak Wah earlier today acknowledged his predecessor Malanjum’s successful transformation of the courts in east Malaysia in the last 10 years from a manual court system to an award-winning modern IT-based system.

Wong noted that courts in Sabah and Sarawak had since 2007 conducted hearings using video conferencing between the main towns with time and costs saved for all involved, adding that the innovation would be further enhanced with the launch next week of three new mobile apps in the opening of the legal year in Kota Kinabalu, Sabah.

Malanjum was previously Chief Judge of Sabah and Sarawak from 2006 to July 2018, during which he introduced the use of buses as mobile courts to reach those in rural areas in east Malaysia.

Wong said the mobile court rolled out about 10 years ago was a more cost-effective method than a permanent court building with permanent staff, adding that it has also helped facilitate 87,345 birth certification cases ― usually linked to late birth registration in remote areas ― as of December 31, 2018.

Source | https://www.malaymail.com/s/1711625/cj-artificial-intelligence-for-sentencing-virtual-hearings-holograms-in-tom

The U.S. Army Is Turning to Robot Soldiers

Right now, they’re used for reconnaissance and explosives disposal. Soon, they’ll be on the battlefield alongside troops. Then comes the hard part.

From the spears hurled by Romans to the missiles launched by fighter pilots, the weapons humans use to kill each other have always been subject to improvement. Militaries seek to make each one ever-more lethal and, in doing so, better protect the soldier who wields it. But in the next evolution of combat, the U.S. Army is heading down a path that may lead humans off the battlefield entirely.

Over the next few years, the Pentagon is poised to spend almost $1 billion for a range of robots designed to complement combat troops. Beyond scouting and explosives disposal, these new machines will sniff out hazardous chemicals or other agents, perform complex reconnaissance and even carry a soldier’s gear.

“Within five years, I have no doubt there will be robots in every Army formation,” said Bryan McVeigh, the Army’s project manager for force protection. He touted a record 800 robots fielded over the past 18 months. “We’re going from talking about robots to actually building and fielding programs,” he said. “This is an exciting time to be working on robots with the Army.”

But that’s just the beginning.

The Pentagon has split its robot platforms into light, medium and heavy categories. In April, the Army awarded a $429.1 million contract to two Massachusetts companies, Endeavor Robotics of Chelmsford and Waltham-based QinetiQ North America, for small bots weighing less than 25 pounds. This spring, Endeavor also landed two contracts worth $34 million from the Marine Corps for small and midsized robots.

In October, the Army awarded Endeavor $158.5 million for a class of more than 1,200 medium robots, called the Man-Transportable Robotic System, Increment II, weighing less than 165 pounds. The MTRS robot, designed to detect explosives as well as chemical, biological, radioactive and nuclear threats, is scheduled to enter service by late summer 2019. The Army plans to determine its needs for a larger, heavier class of robot later this year.

“It’s a recognition that ground robots can do a lot more, and there’s a lot of capabilities that can and should be exploited,” said Sean Bielat, Endeavor’s chief executive officer. Specifically, he points to “the dull, the dirty and the dangerous” infantry tasks as those best suited to robotics.

During combat operations in Iraq and Afghanistan, the Defense Department amassed an inventory of more than 7,000 robots, with much of the hardware designed to neutralize improvised explosive devices (IEDs). Military brass were trying to quickly solve a vexing problem that was killing troops, but the acquisition strategy led to a motley assortment of devices that trade journal Defense News last year called “a petting zoo of various ground robots.”

This approach also meant that each “pet” was essentially a one-off device used for a single task. The Army’s current approach is to field more inter-operable robots with a common chassis, allowing different sensors and payloads to be attached, along with standardized controllers for various platforms, said McVeigh, a retired Army colonel.

This strategy is also geared toward affordability. “If we want to change payloads, then we can spend our money on changing the payloads and not having to change the whole system,” he said. While it ramps up to use its newer robots, the Army will retain about 2,500 of the medium and small robots from the older fleet.
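The article describes the Army’s goal in hardware terms: one common chassis, interchangeable sensors and payloads, and standardized controllers. As a software analogy only, not the Army’s actual architecture, the sketch below shows the same modularity idea, with a single payload interface that lets hypothetical sensor packages be attached or swapped without touching the platform itself.

```python
# Software analogy (not the Army's actual architecture): a common chassis that
# accepts interchangeable payloads through one standard interface, so changing
# a mission capability means swapping a payload, not replacing the platform.
from abc import ABC, abstractmethod

class Payload(ABC):
    @abstractmethod
    def report(self):
        """Return this payload's sensor reading or status."""

class ChemicalSniffer(Payload):
    def report(self):
        return "air sample: no hazardous agents detected"

class ReconCamera(Payload):
    def report(self):
        return "recon: two vehicles observed near checkpoint"

class Chassis:
    """Common base platform; payloads attach and detach without redesign."""
    def __init__(self):
        self.payloads = []

    def attach(self, payload):
        self.payloads.append(payload)

    def run_mission(self):
        return [payload.report() for payload in self.payloads]

robot = Chassis()
robot.attach(ChemicalSniffer())
robot.attach(ReconCamera())
print(robot.run_mission())
```

Swapping a payload then means replacing one component rather than buying a whole new system, which is the affordability argument McVeigh makes above.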

For all their capabilities, none of the current or planned U.S. infantry robots is armed yet. Armed robots are hardly new, of course, with South Korea deploying sentry gun-bots in the demilitarized zone fronting North Korea and various countries flying drones equipped with a variety of weapons.

“Just strapping a conventional weapon onto a robot doesn’t necessarily give you that much” for ground troops, said Bielat, the Endeavor Robotics CEO. “There is occasional interest in weaponizing robots, but it’s not particularly strong interest. What is envisioned in these discussions is always man-in-the-loop, definitely not autonomous use of weapons.”

Yet, depending on one’s perspective, machines that kill autonomously are either a harbinger of a “Terminator”-style dystopia or a logical evolution of warfare. This new generation of weaponry would be armed and able to “see” and assess a battle zone faster and more thoroughly than a human—and react far more quickly. What happens next is where the topic veers into a moral, perhaps existential, morass.

“It seems inevitable that technology is taking us to a point where countries will face the question of whether to delegate lethal decision-making to machines,” said Paul Scharre, a senior fellow and director of the technology and national security program at the Center for a New American Security.

Last year, 116 founders of robotics and artificial intelligence companies, including Elon Musk, the billionaire founder of Tesla Inc. and SpaceX, sent a letter to the United Nations urging a ban on lethal autonomous weapons.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter stated, warning of a “Pandora’s box” being opened with such systems.

To date, 26 countries have joined calls for a ban on fully autonomous weapons, including 14 nations in Latin America, according to the Campaign to Stop Killer Robots. Notably absent from this list are nations with robust defense industries that research AI and robotics—countries such as the U.S., Russia, Israel, France, Germany, South Korea and the United Kingdom.

The campaign was launched five years ago by activists alarmed at the prospect of machines wielding “the power to decide who lives or dies on the battlefield.”

“If you buy into the notion that it’s a moral and humanitarian issue—that you have machines making life-and-death decisions on the battlefield—then it’s a very simple issue,” said Steve Goose, director of Human Rights Watch’s arms division and a co-founder of the campaign. “People have a sense of revulsion over this.”

Not long ago, such futuristic software seemed, if not quite impossible, at least 30 years away. Given the pace of research, however, that’s no longer the case—a fact that has given the effort by Musk, Goose and others new urgency.

“It seems that each year, that estimate has come down,” Goose said. Autonomous weapons systems are “years, not decades” hence, he said in an interview last month from Geneva, where a UN group convened its fifth annual conference on Lethal Autonomous Weapons Systems.

Much of the recent discussion has focused on defining the terms of debate and where human control for lethal decisions should lie. There are also questions as to how quickly such machines will proliferate and how to deal with such technology in the hands of rogue, non-state actors.

Over time, Goose said, the campaign will “convince these governments that every nation is going to be better off if no nation has these weapons.” But Scharre said there’s no chance the UN will agree to a legally binding treaty to ban autonomous weapons. He predicts that “a critical mass” of nations supporting some type of ban could pursue an agreement outside the UN.

While proponents may argue that autonomous robot soldiers will shield human troops from harm, they would also remove firsthand knowledge of the bloody consequences of armed conflict, an awareness that “puts a valuable brake on the horrors of war,” said Scharre, a former Army Ranger.

“There’s a value of someone being able to appreciate the human consequences of war,” he said. “A world without that could be potentially more harmful. If we went to war and no one slept uneasy at night, what does that say about us?”

Source | https://www.bloomberg.com/news/articles/2018-05-18/the-u-s-army-is-turning-to-robot-soldiers