NASA hit by data breach

It has been reported that NASA has been hit by a data breach. The hack took place in October this year, and NASA notified its employees with an internal memo just recently. NASA said that an ‘unknown intruder gained access to one of its servers storing the personal data of current and former employees’. Social Security numbers were among the data compromised. It is also reported that the hack was discovered on October 23rd, meaning NASA waited almost two months to notify employees. This isn’t the first hack that the US space agency has suffered; similar security breaches happened in 2011 and 2016.

Dr Guy Bunker, SVP of Products at Clearswift, comments on the NASA data breach:

“The first thing to note here is that this occurred in the USA and impacted US employees, so the rules and regulations governing data breaches are different to those in the UK and Europe. In the USA, in some instances, the approach is to leave the attackers alone when first discovered, in order to better understand exactly what they have done to the network. In this way, specialist cyber forensic analysts can watch all the activity rather than just the obvious. This ensures that when they close off the vulnerability, they can also close off any other backdoors which might have been installed. It also means that more facts can be communicated. Of course, while this works well for the organisation, the employees are at increased risk for a prolonged time. The sooner you know there is a potential problem, the sooner monitoring services can be set up to watch for fraudulent use of bank details and the like. Within the USA, compromised Social Security numbers in conjunction with other personal details (PII) put the individual at high risk of identity theft.

Unfortunately for NASA, this isn’t its first breach, and questions will be asked as to why this has happened again and what went wrong in the mitigation plans put in place after the previous two breaches. The increasingly sophisticated IT environment means there are more opportunities for vulnerabilities to be found by cyber attackers, so there needs to be increased vigilance over systems, their interconnectivity and data flows.”


Florida Circuit Court Rules DNA Evidence Produced by STRmix™ Analysis Is Admissible in First-Degree Murder Case

A Florida Circuit Court has ruled that evidence produced through the use of STRmix™ – the sophisticated forensic software used to resolve mixed DNA profiles previously thought to be too complex to interpret – is admissible in Florida v. Reshaunte Jermaines Anglin (Case No. 2017-CF-7816-XX, Section F9), a 2016 case in which the defendant is charged with first-degree murder, robbery with a firearm, and evidence tampering.

 

Denying a defense motion to exclude opinion evidence, Judge Jalal A. Harb of the Tenth Judicial Circuit Court for Polk County, FL, ruled, “The unrefuted testimony [of Senior DNA Analyst Cristina Rentas, who tested the DNA samples in the case] supports that the STRmix™ analysis has been sufficiently tested and accepted in the scientific community for DNA forensics.”

 

Judge Harb continued, “The basic underlying principles [of STRmix™] are merely an evolution of well-established DNA testing. STRmix™ utilizes this existing DNA science to provide a probabilistic ratio. The mathematics from which this is derived dates back to at least the 1980s, with over three million scholarly articles on the subject.”
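As context for the “probabilistic ratio” the judge refers to, probabilistic genotyping software reports a likelihood ratio. The sketch below is purely illustrative: STRmix™’s actual model is far more sophisticated (it samples full mixture profiles), and the probabilities used here are hypothetical.

```python
# Toy illustration of a likelihood ratio (LR), the statistic that
# probabilistic genotyping tools report. This is NOT the STRmix(TM)
# algorithm; the probabilities below are hypothetical.

def likelihood_ratio(p_evidence_given_hp: float,
                     p_evidence_given_hd: float) -> float:
    """LR = P(E | Hp) / P(E | Hd): how much more probable the DNA evidence
    is if the suspect contributed to the mixture (Hp) than if an unknown,
    unrelated person did (Hd)."""
    return p_evidence_given_hp / p_evidence_given_hd

# Hypothetical probabilities for a two-person mixture:
lr = likelihood_ratio(0.8, 0.0001)
print(lr)  # roughly 8,000: the evidence is ~8,000x more likely under Hp
```

A likelihood ratio above 1 favours the prosecution hypothesis; the further above 1, the stronger the support.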

 

Judge Harb concluded that the DNA analysis produced by STRmix™ was admissible under both the Frye and Daubert standards – the two legal metrics used to assess the validity of scientific techniques or methods.

 

The Frye standard requires that a new or novel scientific technique be generally accepted in the relevant scientific community, and that the particular evidence derived from the technique and used in an individual case has a foundation that is scientifically reliable.

 

The Daubert standard is used to assess whether an expert’s scientific testimony is based on reasoning or methodology that is scientifically valid and can properly be applied to the facts at issue. Factors considered in determining the validity of a methodology include whether it has been subjected to rigorous testing and validation, published and peer reviewed, and generally accepted in the scientific community, as well as in federal and state courts throughout the U.S.

 

To date, there have been at least two dozen successful admissibility hearings for STRmix™ in the U.S., while DNA evidence interpreted with STRmix™ has been successfully used in numerous court cases.

 

Forty U.S. forensic labs now routinely use STRmix™ to resolve DNA profiles, including the FBI, the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), and multiple state and local agencies. STRmix™ is also in various stages of installation, validation, and training in more than 50 other U.S. labs.

 

Internationally, STRmix™ has been used in casework since 2012. Currently in use in 14 labs in Australia, New Zealand, England, Scotland, Ireland, and Canada, STRmix™ has been used to interpret DNA evidence in more than 100,000 cases.

 

A new version of STRmix™, STRmix™ v2.6, was introduced in August 2018. The new version features a user interface that has been completely redeveloped and refreshed, providing users with vastly improved usability and workflow. Version 2.6 also enables a range of contributors to be entered when performing a deconvolution, and any type of stutter to be added and configured.

 

STRmix™ was developed by John Buckleton, DSc, FRSNZ, and Jo-Anne Bright of the New Zealand Institute of Environmental Science and Research (ESR), and Duncan Taylor from Forensic Science South Australia (FSSA).

 

For more information about STRmix™ visit www.strmix.com.


Living with Machines: Alan Turing Institute and British Library awarded £9.2 million for a major new project set to revolutionise research

The Alan Turing Institute and the British Library, together with researchers from a range of universities, have been awarded £9.2 million from the UKRI’s Strategic Priorities Fund for a major new project. ‘Living with Machines’, which will take place over five years, is set to be one of the biggest and most ambitious humanities and science research initiatives ever to launch in the UK.

A new kind of scholarship

‘Living with Machines’ will see data scientists working with curators, historians, geographers and computational linguists, with the goal of devising new methods in data science and artificial intelligence that can be applied to historical resources, producing tools and software to analyse digitised collections at scale for the first time.

In recognition of the significant changes currently underway in technology, notably in artificial intelligence, the project will focus on the century following the first Industrial Revolution and the changes brought about by the advance of technology across all aspects of society during this period.

Initial research plans involve scientists from The Alan Turing Institute collaborating with curators and researchers to build new software to analyse data drawn initially from millions of pages of out-of-copyright newspaper collections from within the archive in the British Library’s National Newspaper Building, and from other digitised historical collections, most notably government collected data, such as the census and registration of births, marriages and deaths. The resulting new research methods will allow computational linguists and historians to track societal and cultural change in new ways during this transformative period in British history. Crucially, these new research methods will place the lives of ordinary people centre-stage, rather than privileging the perspectives of decision-makers and public commentators.

‘Living with Machines’ will take a radical approach to collaboration, breaking down barriers between academic traditions, bringing together data scientists and software engineers from The Alan Turing Institute and curators from the British Library as well as computational linguists, digital humanities scholars and historians from universities including Exeter, University of East Anglia, Cambridge and Queen Mary University of London.

The research methodologies and tools developed as a result of the project will transform how researchers can access and understand digitised historic collections in the future.

The fourth industrial revolution

The insights from the project will provide vital context for present-day debates about the future of work, prompted by the social change caused by the so-called ‘fourth industrial revolution’ of artificial intelligence and robotics. For example, data-driven findings relating to how attitudes to machines and mechanisation changed during the nineteenth century could help present-day researchers and policy-makers to unpick current public attitudes towards new technologies, such as autonomous vehicles or the use of artificial intelligence and robotics in everyday transactions.

Key starting points for the project include marshalling data, developing workflows and methods for ensuring data quality, developing intuitive interfaces to facilitate collaboration with historians, and launching the collaborative research agenda through project ‘laboratories’. There will be future calls and opportunities for researchers to get involved.

Professor Roey Sweet, Director of Partnerships and Engagement at the Arts and Humanities Research Council, commented:

“Living with Machines represents a hugely exciting and innovative development in arts and humanities research. The collaboration between historians and data scientists, exploiting the remarkable growth of digital archives, will open up dramatic new perspectives on the well-known story of the industrial revolution and the history of society’s relationship with machines and technology since the eighteenth century.”

Roly Keating, Chief Executive of the British Library, commented:

“We are delighted to be awarded this funding to embark on an ambitious programme of research with our colleagues from the Alan Turing Institute and from partner universities. By opening up our unrivalled collections to this unique collaboration between historians and data scientists, we hope to not only aid researchers and communities in their understanding of our shared past, but to pave the way towards revolutionising the future of historical research.”

Adrian Smith, Director of The Alan Turing Institute, commented:

“Data science and artificial intelligence have the potential to supercharge the science and humanities. We can analyse vast amounts of data at a huge scale and uncover new insights and questions, as in this timely project with the British Library which will apply data-driven techniques to examine the human, social and cultural impact of the first industrial revolution. It is thrilling to bring together this diverse range of experts to work on this important research problem and deliver tools and techniques which will benefit scholars for generations to come.”

Dr Ruth Ahnert, Senior Lecturer in Renaissance Studies at Queen Mary University of London and lead researcher on the project, commented:

“For me this is more than just a research project. It is also a bold proposal for a new research paradigm. That paradigm is defined by radical collaboration that seeks to close the gap between computational sciences and the arts and humanities by creating a space of shared understanding, practices, and norms of publication and communication. We want to create both a data-driven approach to our cultural past, and a human-focused approach to data science.”


Large disparity in cybersecurity skills and spending across NHS, reveals Redscan FOI campaign

Redscan, the penetration testing, threat detection and incident response specialist, today announced the results of a series of Freedom of Information (FOI) requests submitted to NHS trusts in the UK.

Redscan found that, on average, trusts have just one member of staff with professional security credentials per 2,628 employees. Some large trusts (with up to 16,000 total employees) have no formally qualified security professionals whatsoever. Expenditure on cybersecurity training over the last 12 months ranged from less than £250 to nearly £80,000 per trust, with no apparent link between the size of trust and money spent. A significant proportion of trusts have spent nothing on specialist cybersecurity or GDPR training for staff, requiring only that all their employees complete free Information Governance (IG) training provided by NHS Digital.

“These findings shine a light on the cyber security failings of the NHS, which is struggling to implement a cohesive security strategy under difficult circumstances,” explained Redscan director of cybersecurity, Mark Nicholls. “Individual trusts are lacking in-house cybersecurity talent and many are falling short of training targets; meanwhile investment in security and data protection training is patchy at best. The extent of discrepancies is alarming, as some NHS organisations are far better resourced, funded and trained than others.”

“WannaCry severely disrupted critical healthcare services across the country in 2017, costing the NHS an estimated £92m. The Government has subsequently increased funding for cybersecurity in the NHS by £150m, while introducing a number of new security policies. There are certainly green shoots of progress, but this doesn’t mask the fact that the NHS is under tremendous financial pressure, is struggling to recruit the skills it needs and must continue to refine its cybersecurity strategy across the UK.”

A breakdown of key stats is as follows:

Cybersecurity qualifications – Trusts were asked how many members of staff held professional data security and/or cybersecurity qualifications: On average, NHS trusts employ one qualified security professional per 2,582 employees. Nearly a quarter of trusts have no employees with security qualifications (24 out of 108 trusts), despite some employing as many as 16,000 full and part-time personnel. Several NHS organisations that employ no qualified cybersecurity professionals reported having staff members in the process of obtaining relevant security qualifications – perhaps an indication of the difficulties hiring trained professionals.

Nicholls: “The cybersecurity skills gap continues to grow and it’s incredibly hard for organisations across all sectors to find enough people with the right knowledge and experience. It’s even tougher for the NHS, which must compete with the private sector’s bumper wages. Not to mention the fact that trusts outside of traditional tech hubs like London and Cambridge have a smaller talent pool from which to choose.”

“It’s true that NHS trusts outsource key security functions to NHS Digital and other third-party specialists, but I would still expect to see more security professionals employed in-house. No doubt resources are being strained further still if you assume that staff with security qualifications are part of IT teams responsible for far more than just cyber security.”

Money spent – Trusts were asked how much money they had spent on data security training during the last 12 months, including any GDPR-related training: Trusts spent an average of £5,356 on data security training, although it’s worth noting that a significant proportion conducted such training in-house at no cost or only used free NHS Digital training tools. GDPR-related training was the most common course type procured for staff. Other training programmes cited included: BCS Practitioner Certificate in Data Protection, Senior Information Risk Owner and ISO27001 Practitioner.

Spending on training varied significantly between trusts, from £238 to £78,000. However, the size of each trust was not always a determining factor. For example, among mid-sized trusts with 3,000-4,000 employees, training expenditure ranged from £500 to £33,000.

Nicholls: “The figures suggest that some trusts may be lacking the budget required to adequately train their staff on cybersecurity and data protection. While this will not surprise anyone, the extent of the disparity between trusts might. Some trusts are outspending others by a factor of twenty. I worry that this clear divide will have a significant bearing on which trusts are better prepared to prevent, detect and respond to cybersecurity incidents. In any case, the NHS must make efforts to redress this severe imbalance.”

NHS Digital training targets – Trusts were asked to provide data on the total number of full-time and part-time employees to have completed security training over the last 12 months: NHS Digital’s mandatory information governance training requirements state that 95% of all staff must pass IG training every 12 months. The FOI responses revealed that, currently, only 12% of trusts had met the >95% training target and the majority of trusts had trained between 80% and 95% of their staff. A quarter of trusts had trained less than 80% of their staff (some reporting that less than 50% had been trained).

A separate FOI request was also sent to NHS Digital, which declined to provide data on how many trusts had met its Information Governance targets, or how many IT staff and board members had completed dedicated training. NHS Digital did, however, reveal that 139 trusts had now undertaken a Data Security Onsite Assessment. This is a marked improvement on the figure released in July 2018 (60), showing that NHS trusts are taking these assessments more seriously and that measures are being implemented at trust level.

Nicholls: “These numbers are definitely more promising, and I’m sure there has been a marked improvement in security training over the last five years, especially since WannaCry. However, it is important to note that gaps still exist. People remain the weakest link in the cyber security chain. Despite IG training raising awareness of security risks and common pitfalls, you can never fully mitigate the risks of employees making mistakes or falling for social engineering scams.

“In order to effectively identify and respond to the latest threats, organisations need to develop a better understanding of hackers’ tactics, techniques and procedures. Only dedicated professionals that closely assess and monitor the threat landscape day-to-day and properly understand how an organisation’s infrastructure is architected can begin to work out how to mitigate evolving risks.”


Lords to ask senior judges about use of forensic science in the Criminal Justice System

On Tuesday 18th December the House of Lords Science and Technology Select Committee will question senior judges about the use of Forensic Science in courts in England and Wales and its contribution to the delivery of justice.

The Committee will ask the witnesses how judges can ensure that the analysis and interpretation of forensic evidence presented in court has a firm scientific basis. The Committee will also ask whether there are effective channels for the communication of advice on science and technology to the judiciary.

The Session will begin at 4:25pm in Committee Room 4A of the House of Lords. Giving evidence will be:

  • Lord Hughes of Ombersley, former Justice of the Supreme Court
  • His Honour Judge Wall QC, Circuit Judge
  • Sir Brian Leveson, President of the Queen’s Bench Division and Head of Criminal Justice

Questions the Committee are likely to ask include:

  • What is the level of understanding of forensic science within the Criminal Justice System amongst lawyers, judges and juries?
  • When a case that relies on forensic evidence comes before you, how do you ensure that any expert witness is sufficiently qualified to speak about the subject?
  • Is enough being done to prepare for the increasing role that digital forensics will have in the Criminal Justice System in the future?
  • Is there a source of responsive and ongoing independent, balanced and accessible analysis of science and technology in relation to legal issues available to the judiciary?


The printer hacking threat

By Louella Fernandes, Quocirca

The recent news that 50,000 printers were hacked and produced unsolicited print-outs highlights the potential ease with which hackers can access internet-connected printers. According to reports, the hacker scanned the internet to find vulnerable printers with port 9100 open, exploiting them to print a message urging people to subscribe to popular blogger PewDiePie’s YouTube channel. This is not the first time this type of printer hack has occurred; in 2017 thousands of printers were hacked through the same port.
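To illustrate why an exposed port 9100 matters, here is a hedged sketch (not the attacker's actual code): most network printers accept raw print jobs on TCP port 9100 with no authentication, so any text sent to the port is simply printed.

```python
import socket

# Why an open TCP port 9100 is dangerous: most network printers accept raw
# print jobs on this port with no authentication, so any text sent to it is
# simply printed. The hostname and message below are hypothetical.

def build_raw_job(message: str) -> bytes:
    # Bare text payload followed by a form feed (0x0C) to eject the page.
    return message.encode("ascii") + b"\x0c"

def send_to_printer(host: str, message: str, port: int = 9100) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_raw_job(message))

payload = build_raw_job("Your printer is exposed on port 9100.")
print(len(payload))  # payload size in bytes
# send_to_printer("printer.example.com", "...")  # only against devices you own
```

Anything this simple should never be reachable from the public internet, which is why closing or firewalling port 9100 is the first mitigation.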

While this was arguably a relatively harmless stunt, it brings attention to the fact that printers do present security vulnerabilities. Like any other network endpoint, they need to be protected, and left unsecured they can be the weakest link in the IT security chain. Today’s smart multifunction printers (MFPs) have many points of vulnerability.

Along with the capabilities to capture, process, store and output information, most print devices also run embedded software. Information is therefore susceptible at a device, document and network level. Not only can confidential or sensitive data be accessed by unauthorised users; in today’s evolving Internet of Things (IoT) threat landscape, hackers that target printers with lax security can wreak havoc on a company’s network. Data stored on print devices can be used for fraud and identity theft, and once hackers have a foothold, the unsecured print device provides an open door to the network. Compromised devices can be harnessed as botnets and used as launch pads for malware propagation, DDoS attacks and devastating ransomware attacks.

To address these threats, print devices need to include robust security protection. The challenge is that while more manufacturers are embedding security in new-generation devices, most organisations have a mixed fleet of devices, old and new, from different manufacturers.

Organisations must identify, patch, update and replace vulnerable print devices on their networks and a good place to start is with a print security threat assessment. Such assessments are commonly offered under a managed print service (MPS) contract, and seek to uncover security vulnerabilities.

As both internal and external threats continue to evolve, a multi-layered approach to print security is essential to combat the security vulnerabilities that are inherent in today’s networked printers. Unless an organisation regularly tests its defences, it will be at risk of leaving a part of the print infrastructure exposed, enabling a skilled hacker to penetrate the network.

This most recent hack is certainly a warning about the risk of leaving network ports open on the internet. While no malicious intent was involved on this occasion, that is not to say that more sophisticated attacks that seek to access the network via a printer will not occur. Given the potential repercussions of any serious data breach, businesses need to take action to protect and secure their printers.


Toyota Builds Open-Source Car-Hacking Tool

https://www.darkreading.com/vulnerabilities—threats/toyota-builds-open-source-car-hacking-tool/d/d-id/1333415

‘PASTA’ testing platform specs will be shared via open source.

BLACK HAT EUROPE 2018 – London – A Toyota security researcher on his flight from Japan here to London carried on board a portable steel attaché case that houses the carmaker’s new vehicle cybersecurity testing tool.

Takuya Yoshida, a member of Toyota’s InfoTechnology Center, and his Toyota colleague Tsuyoshi Toyama are part of the team that developed the new tool, called PASTA (Portable Automotive Security Testbed), an open-source testing platform for researchers and budding car hacking experts. The researchers demonstrated the tool here today and said Toyota plans to share the specifications on GitHub, as well as sell the fully built system, initially in Japan.

What makes the tool so intriguing – besides its 8 kg portable briefcase size – is that automobile manufacturers have long either ignored or dismissed cybersecurity research exposing holes in the automated and networked features in their vehicles. Toyota’s building this tool and sharing its specifications via open source is a major shift for an automaker.

Toyota’s Tsuyoshi Toyama (left) and Takuya Yoshida (right) show off the PASTA testing platform at Black Hat Europe.

“There was a delay in the development of cybersecurity in the automobile industry; [it’s] late,” Toyama said in the pair’s talk here today. Now automakers including Toyota are preparing for next-generation attacks, he said, but there remains a lack of security engineers who understand auto technology.

That was a driver for the tool: to help researchers explore how the car’s engine control units (ECUs) operate, as well as the CAN protocol used for communicating among elements of the vehicle, and to test out vulnerabilities and exploits.

Toyama said the tool isn’t meant for the live, moving-car hacking that Charlie Miller and Chris Valasek performed: the goal was to offer a safe platform for researchers who may not have the expertise of Miller and Valasek, for example. It simulates remote operation of wheels, brakes, windows, and other car features rather than “the real thing,” for safety reasons. “It’s small and portable so users can study, research, and hack with it anywhere.”

The PASTA platform holds four ECUs inside, as well as LED panels that are controllable by the researcher to run any tests of the car system operation, or attacks such as injecting CAN messages. It includes OBD-II and RS-232C ports, as well as a port for debugging or binary hacking, he said.
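As a rough illustration of what injecting a CAN message involves: a classic CAN frame is just an arbitration ID plus up to eight data bytes. The sketch below packs a frame in the Linux SocketCAN wire format; it is a generic example, not PASTA's own API, and the ID and payload are hypothetical.

```python
import struct

# A CAN frame in the Linux SocketCAN wire format (16 bytes):
# can_id (u32), dlc (u8, data length), 3 pad bytes, 8 data bytes.
# This is a generic sketch, not PASTA's own interface; the ID and
# payload below are hypothetical.

CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical frame: arbitration ID 0x123 with a 2-byte payload.
frame = pack_can_frame(0x123, b"\x10\x20")
print(len(frame))  # 16
```

On a real Linux system the packed frame would be written to a raw CAN socket; on PASTA, the same kind of message can be injected safely against the testbed's ECUs rather than a moving vehicle.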

“You can modify the programming of ECUs in C” as well, he said.

The researchers integrated the tool with a driving simulator program, as well as with a model car, to demonstrate some ways it can be used. PASTA can also be used for R&D purposes with real vehicles: that would allow a carmaker to test how a third-party feature would affect the vehicle and its security, or reprogram firmware, for example.

Toyota later plans to add Ethernet, LIN, and CAN FD support to PASTA, as well as Wi-Fi, Bluetooth, and cellular communications features for testing.

PASTA will soon be available on GitHub, the researchers said.


Lords to hear evidence on standards and accreditation in Forensic Science

On Tuesday 11th December the House of Lords Science and Technology Select Committee will continue taking evidence for its inquiry into the UK’s use of forensic science and its contribution to the delivery of justice.

This week’s session will focus on standards and accreditation in forensic science and whether the current standards used across laboratories are appropriate for the interpretation of forensic evidence.

The Session will begin at 3:25pm in Committee Room 4A of the House of Lords. Giving evidence will be:

  • Lorraine Turner, Business Development and Technical Director, United Kingdom Accreditation Service (UKAS)
  • Katherine Monnery, Forensic Accreditation Specialist, UKAS
  • Sara Walton, Governance and Resilience Market Development Manager, British Standards Institution (BSI)
  • Steve Brunige, Head of Industry and Government Engagement, BSI

Questions the Committee are likely to ask include:

  • How closely do you work with the Forensic Science Regulator in setting standards and awarding accreditation?
  • Would there be benefit in developing standards especially for Forensic Science?
  • In written evidence we’ve heard that compliance with the ISO standards amongst some disciplines is currently quite low (such as fingerprints and digital forensic science). Why do you think this is?
  • How do you account for the importance of experience and expertise that is necessary at every stage of the forensic process (crime scene, lab and court) as well as enforcing strong procedural standards for the analysis of specimens?


Marriott’s Breach: Comment

Tony Pepper, CEO of Egress Software Technologies comments:

“Marriott has admitted that a data breach incident within its Starwood guest reservation database may have compromised the personal information of up to 500 million guests who made a reservation.

This data breach clearly enters and surpasses the ‘mega breach’ category. Using figures from Ponemon Institute’s ‘Cost of a Data Breach’ study, breaches of this type are projected to cost companies between $40 million and $350 million. Marriott has revealed that its ongoing investigation showed that an unauthorised party had first copied encrypted information in 2014, and that it had taken steps towards removing this data.

Aside from the scale of the breach, what is equally alarming is the length of time it took Marriott to identify the vulnerability and report the incident to the appropriate authorities. Given the sensitive nature of the data that has been compromised, it will be extremely difficult to gauge the true impact on its customers.

With organisations under increasing pressure to maintain compliance with data protection and retention legislation such as GDPR, as well as preserve business continuity, they have a responsibility to ensure that the information they share and store is appropriately protected. To achieve this, businesses must first understand the sensitivity of the data they manage and then apply a combination of encryption, rights management and policy-based access control. If such an approach had been taken in this case, it is likely that the breach could have been identified far more quickly and the risk mitigated.

Not only does this incident raise concerns for Marriott, but it also serves as a reminder to all organisations that they must constantly be working to enhance their data security systems and protocols to avoid similar breaches on this scale. We now await with interest the full response from the ICO.”
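A minimal sketch of the classify-then-control approach Pepper describes, with hypothetical sensitivity labels and roles; real deployments layer encryption and rights management on top of a policy check like this.

```python
# Minimal sketch of policy-based access control keyed on data sensitivity.
# The labels, roles, and policy table are hypothetical examples; a real
# deployment would combine this with encryption and rights management.

POLICY = {
    "public":       {"guest", "staff", "admin"},
    "internal":     {"staff", "admin"},
    "confidential": {"admin"},
}

def can_access(role: str, sensitivity: str) -> bool:
    # Unknown sensitivity labels default to "no one may access".
    return role in POLICY.get(sensitivity, set())

print(can_access("staff", "internal"))      # True
print(can_access("guest", "confidential"))  # False
```

The point of classifying first is that the strictest controls can be reserved for the data that actually needs them, instead of applying one blanket rule to everything.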


Cybersecurity in Europe is Improving: Thank You GDPR?

By Jake Olcott, VP of Strategic Partnerships at BitSight

After years of debate over whether to impose new cybersecurity regulations on companies, the General Data Protection Regulation (GDPR) went into effect in May 2018. Already we have seen several data breach victims ordered to pay fines under the new rules, and cookie disclosure notices are popping up on more websites than ever.
Everyone is waiting with bated breath for the first report from the Information Commissioner’s Office (ICO) to be issued after the implementation of GDPR, in order to gain an understanding of the magnitude of breach reporting.

The most recent ICO report has revealed a 29% increase in the number of reported data security incidents, from 3,146 between April and June 2018 to 4,056 from July to September 2018. This represents a 490% increase compared to the same quarter in 2017. This doesn’t necessarily mean that organisations are experiencing more incidents, but it does mean that more are now being reported, as organisations try to tread carefully.

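The quarter-on-quarter figure can be checked directly from the reported counts:

```python
# Reproducing the ICO quarter-on-quarter change quoted above:
# 3,146 reported incidents (Apr-Jun 2018) vs 4,056 (Jul-Sep 2018).

def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

print(round(pct_change(3146, 4056)))  # 29
```
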
This has inevitably been fuelled by GDPR, as well as by the significant data breach incidents that recognisable brands have suffered. The increase is also likely driven by the new data breach notification requirements under GDPR, which require organisations to report incidents within 72 hours of becoming aware of them.

Drilling into the statistics, most data breach incidents are down to people, processes and inadequate policies. These frequently involve internal users making mistakes, including the incorrect disclosure of data; this accounted for 62% of all data incidents between July and September 2018.

In terms of monetary penalties, £875,000 of fines were issued under the UK’s Data Protection Act (DPA), between July and September 2018, down from £1,030,000 between April and June 2018. It should be noted that from GDPR’s enforcement on 25th May to the beginning of October 2018, fines reached £1,425,000, with organisations undoubtedly falling foul of the new regulations as they work towards achieving full compliance.
But let’s think about the bigger picture. Is GDPR working? How would we know?
For years, global policymakers have struggled to develop effective responses to cyber threats, in part because they just don’t have the data to understand what’s happening in cyberspace. Think about it — if you are a policymaker considering how to address unemployment, you can turn to the Office for National Statistics (ONS) – which measures labour market activity, working conditions and the impact of economic activity – in addition to comprehensive census data on personal and socio-demographic and economic issues.
When it comes to cybersecurity, the UK Government’s National Cyber Security Centre (NCSC) has taken the leading role in significantly raising awareness of the evolving cybersecurity risks facing all UK businesses with a digital footprint, as well as the threat to the UK’s Critical National Infrastructure (CNI). This includes a comprehensive bank of guidance on a variety of topics, alongside extensive education and research papers, insights, alerts and advisories, and recommended certified cybersecurity products.
BitSight is taking a different approach to cybersecurity and risk management, enabling it to profile and identify specific threats. Thanks to its extensive data collection and processing techniques and capabilities, BitSight is able to collect, evaluate, and measure cybersecurity performance across global organisations, providing unique and valuable insight into global, regional, and sectoral performance trends across organisations of varying sizes.
When BitSight recently analysed the security performance of more than 140,000 organisations worldwide, the findings were surprising. While its research revealed a steady decrease in security performance across other worldwide regions, organisations within continental Europe actually improved their security performance over the last year. Some of the areas that organisations have improved on include the implementation of stronger controls to reduce Internet-exposed services (open ports).
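Exposure measurement of the kind described here is done at scale from outside the network, but the underlying check is simple. A minimal sketch, assuming you are probing only hosts you own, of testing whether common service ports accept TCP connections:

```python
# Hedged sketch: probe one host for commonly exposed TCP services.
# Real exposure ratings (e.g. BitSight's) scan from outside the network
# at Internet scale; this only illustrates the basic check.
import socket

COMMON_PORTS = [21, 22, 23, 80, 443, 3389]  # FTP, SSH, Telnet, HTTP, HTTPS, RDP

def open_ports(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

# Example (against your own infrastructure only; 192.0.2.10 is a
# documentation address, not a real host):
# print(open_ports("192.0.2.10"))
```

Closing or firewalling ports that do not need to be Internet-facing is exactly the kind of control behind the European improvement noted above.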
Security performance data may be useful to policymakers as they consider the impact of existing regulations like GDPR, but also future policies and regulations. Policymakers around the world will continue to consider implementing regulations based on GDPR that will protect citizens from poor data security management.
The industry has already seen many calls to adopt similar legislation elsewhere around the world, including from Apple's Tim Cook, who, in October 2018 at the Conference of Data Protection and Privacy Commissioners in Belgium, proposed that the U.S. enact a policy like GDPR. This summer, California passed the California Consumer Privacy Act, which imposes stronger privacy regulations on companies doing business in the state; similar measures are being discussed across the United States.
How will policymakers judge the necessity or effectiveness of these efforts? In what sectors should they spend their time and focus? On what sized companies? What data will they use? How will they model the impact of introducing such policies?
Global policymakers must begin thinking about the essential elements that will be necessary to build a lasting legal and policy framework to address these significant cyber risks. The ONS was established over 20 years ago; as we look ahead to the next two decades, the transformational changes that will occur worldwide as a result of technological and connectivity developments will inevitably present a new wave of cybersecurity challenges, making quantitative measurement of cybersecurity more crucial than ever.
