Data sharing between health apps apparently the norm despite lack of transparency – comment

A new study published in the British Medical Journal found the sharing of user data from health-related mobile apps on the Android platform was routine and yet far from transparent. Lead author Dr Quinn Grundy said health apps were a “booming market”, but one with many privacy failings. The study follows a recent report from the Wall Street Journal which found several apps, including period tracker Flo Health, were sending sensitive user data – including weight, blood pressure and ovulation status – to Facebook.

Full story here: https://thenewdaily.com.au/life/tech/2019/03/21/health-apps-data-share/

Commenting on the news are the following security professionals:

Lamar Bailey, Director of Security Research and Development at Tripwire:

“Although it is well known and documented that apps use customers’ data as a currency, it is particularly troubling when that data includes sensitive information such as medical records and health metrics. The wealth of information that health apps collect and store can be an appealing target for cybercriminals. It is paramount that these apps clearly state in their registration process if they plan to divulge their customers’ information to third parties, so that subscribers are able to opt out. All too often these terms of usage are buried in the user agreement and the only way to opt out is to not use the app.”

Warren Poschman, Senior Solution Architect at comforte AG:

“In today’s data-driven economy, sharing and selling data is a reality and one that won’t likely change despite any amount of regulation. What’s key, though, is how these companies secure that data and maintain our privacy. Though that seems at odds with sharing and selling data, if done properly the two can coexist. Companies that offer data need to share it anonymized but usable – and adopting a data-centric security model using technology like tokenization can do just that: the analytical value of the data isn’t affected even though it is anonymized. Want to know what meds I’m taking or what procedures I’ve had so they can be cross-referenced and insights gained? Absolutely! Want to know that it was me specifically who takes that medication or has had those procedures? Absolutely not! Regulatory bodies need to start ensuring that companies anonymize the data so that it can be safely used no matter where it travels.”
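The tokenization approach Poschman describes can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's implementation: each direct identifier is replaced by a stable random token held in a vault, so records from the same person remain linkable for analysis while the identity itself is removed. The field names (`patient_id`, `medication`) are illustrative assumptions.

```python
import secrets

class Tokenizer:
    """Toy vault-based tokenizer: maps each identifier to a stable
    random token so records stay linkable but are anonymized."""
    def __init__(self):
        self._vault = {}  # identifier -> token (kept in a secured store in practice)

    def tokenize(self, identifier: str) -> str:
        if identifier not in self._vault:
            self._vault[identifier] = secrets.token_hex(8)
        return self._vault[identifier]

def anonymize(record: dict, tokenizer: Tokenizer) -> dict:
    """Replace the direct identifier; leave analytical fields intact."""
    out = dict(record)
    out["patient_id"] = tokenizer.tokenize(record["patient_id"])
    return out

tok = Tokenizer()
r1 = anonymize({"patient_id": "alice@example.com", "medication": "metformin"}, tok)
r2 = anonymize({"patient_id": "alice@example.com", "medication": "insulin"}, tok)

assert r1["patient_id"] == r2["patient_id"]     # same person -> same token (linkable)
assert r1["patient_id"] != "alice@example.com"  # identity removed
assert r1["medication"] == "metformin"          # analytical value preserved
```

The key property is the one Poschman highlights: cross-referencing by token still works, but the token reveals nothing about who the person is.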


Hydro cyber attack – comment from cyber security director

Commenting, Tom Kranz, Head of Cyber Lab at 6point6:
“Having switched to manual operations, it would appear at this stage to be an IoT attack targeting their control equipment. Yet while we often discuss IoT attacks in terms of botnets, the cyber attack on Norsk Hydro throws into sharp relief that we do not put enough focus on the supply chain disruption that can be caused. In this case, not just aluminium smelting, but the construction of actual components for wider industry has been shut down. With the global push towards “Just in Time” manufacturing and more efficient mass-production processes, an IoT attack of this scale against a single company has the potential to have a disruptive and harmful impact on multiple industries on a worldwide scale.
Machines and devices across the Industrial Internet of Things (IIoT) network need to be treated in the same way as any other untrusted, insecure device; namely as a segregated network, with ingress and egress filtering and monitoring. There should be no direct access to the general Internet, and indirect access must use encryption with a high level of logging and monitoring to mitigate risks of cyber attack. As IIoT devices have such simple communications and data flows, configuring SIEM and TVM solutions to keep closer scrutiny on the IIoT segregated network and its data flows is also essential. Security must be front and centre, especially when it comes to inter-reliant industries and production lines.”
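The egress-filtering posture Kranz recommends can be sketched as a default-deny allowlist check. This is a simplified illustration, not a product configuration; the network ranges (an assumed internal historian segment and an assumed SIEM collector address) are hypothetical, and in practice the policy would live in a firewall between the IIoT segment and everything else.

```python
from ipaddress import ip_address, ip_network

# Hypothetical egress policy for a segregated IIoT network segment:
# devices may only reach explicitly listed internal destinations.
ALLOWED_EGRESS = [
    ip_network("10.20.0.0/24"),  # assumed historian / plant data segment
    ip_network("10.30.0.5/32"),  # assumed SIEM log collector
]

def egress_permitted(dst: str) -> bool:
    """Default-deny: a flow leaves the IIoT segment only if its
    destination falls inside an explicitly allowed network."""
    addr = ip_address(dst)
    return any(addr in net for net in ALLOWED_EGRESS)

assert egress_permitted("10.20.0.17")        # historian segment: allowed
assert egress_permitted("10.30.0.5")         # SIEM collector: allowed
assert not egress_permitted("8.8.8.8")       # direct Internet: blocked
assert not egress_permitted("192.168.1.10")  # arbitrary LAN host: blocked
```

Because IIoT data flows are so simple and predictable, any connection attempt that fails this check is itself a high-signal event worth alerting on in the SIEM.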


STRmix™ Wins Prestigious National Science Award

Developed in New Zealand by two scientists at the New Zealand Institute of Environmental Science and Research (ESR), John Buckleton and Jo-Anne Bright, in conjunction with Duncan Taylor from Forensic Science South Australia, STRmix™ is now the most widely used DNA interpretation software in New Zealand, Australia, the UK, and North America.

Since its introduction in 2012, STRmix™ has been used to interpret DNA evidence in more than 100,000 cases worldwide. This includes numerous court cases and 28 successful admissibility hearings in the U.S.

STRmix™ is currently being used by 43 federal, state, and local agencies in the U.S., including the FBI and the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF). It is in various stages of installation, validation, and training in more than 60 other U.S. labs.

The Prime Minister’s Science Prize is awarded for a transformative scientific discovery or achievement which has had a significant economic, health, social, and/or environmental impact on New Zealand or internationally in the last five years. The award will go toward supporting the ongoing work of the STRmix™ team, a group of 16 researchers, scientists, and software developers at ESR, many of whom also have a background in forensic science.

Dr. Buckleton notes that before STRmix™, complex mixtures of DNA – any mixture of two or more persons – were unusable. “You could tell the evidence was there, but there was no manner to express that in court in a sustainable way,” he explains.

“What STRmix™ does is draw an evidential inference from a more complex mixture, whereas previously there were just not the methods to draw any inference from such evidence,” Dr. Buckleton continues. Using standard, well-established statistical methods to build up a picture of the DNA genotypes, STRmix™ “can take a person of interest, compare him with a profile of a crime scene, and produce an inference whether that person is included or excluded.”
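The kind of inference Dr. Buckleton describes is usually reported as a likelihood ratio. The toy sketch below shows the textbook single-source, single-locus version using Hardy-Weinberg genotype frequencies; it is a deliberate simplification for illustration only, since probabilistic genotyping software handles mixtures, drop-out, and peak heights with far more sophisticated statistical modelling. The allele frequencies used are invented.

```python
# Toy single-source, single-locus likelihood ratio (LR):
#   LR = P(evidence | profile came from the person of interest)
#      / P(evidence | profile came from an unknown person)
# For a matching genotype this reduces to 1 / (genotype frequency).

def genotype_frequency(p_a: float, p_b: float) -> float:
    """Hardy-Weinberg frequency: p^2 for homozygotes, 2*p_a*p_b otherwise."""
    return p_a * p_a if p_a == p_b else 2 * p_a * p_b

def likelihood_ratio(p_a: float, p_b: float) -> float:
    # Numerator is 1 (the person of interest certainly carries their
    # own genotype); denominator is the random-match probability.
    return 1.0 / genotype_frequency(p_a, p_b)

# Heterozygous genotype with invented allele frequencies 0.10 and 0.05:
lr = likelihood_ratio(0.10, 0.05)
assert abs(lr - 100.0) < 1e-9  # 1 / (2 * 0.10 * 0.05) = 100
```

An LR of 100 means the evidence is 100 times more probable if the profile came from the person of interest than from a random member of the population; values well above or below 1 correspond to the “included or excluded” inferences mentioned above.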

Describing the Prime Minister’s Science Prize as “a great honor,” Dr. Buckleton concludes, “We are told we’ve taken usable DNA evidence in the U.S. courts from 40 percent to 70 percent, so we’re not just exonerating more false donors and convicting more true donors. We’re advancing the cause of justice in the U.S. and more broadly.”

ESR Chief Executive Dr. Keith McLea agrees, adding, “The award shows we’re already at the cutting edge of DNA research. STRmix™ reinforces that ESR and New Zealand are right out in front of the world in forensic science. I see a very bright future for STRmix™.”

STRmix™ introduced the latest version of the software, STRmix™ v2.6, in August 2018. The new version features a user interface that has been completely redeveloped and refreshed, providing users with vastly improved usability and workflow. Version 2.6 also enables a range of contributors to be entered when performing a deconvolution, and any type of stutter to be added and configured.


Things to Know Before Developing Intelligence Requirements

Written by Mike Mimoso, Editorial Director, Flashpoint

To state the obvious, proper intelligence requirements must be in place before data collection, analysis, and consumption of intel can happen. These requirements are critical because they enable an organisation to choose and prioritise its intelligence goals, determine what information it needs to collect and from what sources to achieve those goals, establish how it will process this information, and identify which dissemination methods are most appropriate for the finished intelligence it produces.

Intelligence requirements mandate some initial groundwork, however. The commercial sector, for example, has a much different starting point than its public-sector counterparts; a government agency may want to know all it can about an adversary targeting its network, while a financial services organisation may be primarily concerned about getting those bad guys off its network—whoever they may be—and keeping them off.

This approach will guide how intelligence requirements are formulated as organisations attempt to understand and protect their infrastructure, lessen the attack surface a threat actor may target, and reduce exposure to risk.

Assets and Exposure

Building intelligence requirements that work for your organisation requires a deep understanding of available assets and exposure, gained through a comprehensive asset inventory and threat-profiling exercise. This matters more than debating how much software and how many people-hours you will need to invest to address a threat; a far more fruitful discussion concerns the specific information you need to collect to satisfy specific intelligence requirements.

For the commercial sector, this type of asset inventory and evaluation of internal assets and exposure in the context of adversaries’ tactics, techniques, and procedures must also include an understanding of threats to others in your industry, and tangentially against your supply chain, or others who store and execute upon the same types of data as your company. Being solely reactive puts organisations at an immediate disadvantage, not only with regard to incident response, but also with communicating potential risk to intelligence consumers and decision makers.

The More You Know…

Looking at this from a commercial business risk intelligence (BRI) perspective, intelligence requirements are derived from questions that need to be answered, and those questions should be formulated by those who will consume the subsequent intelligence, such as business leaders or analysts in a security operations centre.

It’s too broad a question to ask whether there are hackers a business needs to be concerned with, because properly answering that question would require extensive, time-consuming data collection and profiling of active threat actors and could easily be over-taxing for analysts already overburdened with alerts. A more focused approach would be to first identify which systems are core to the business. Next, determine whether there are publicly disclosed vulnerabilities and/or attacks targeting those systems, understand the consequences of a breach of the data on those systems, and find out whether attackers are targeting others in your industry.

This level of insight can help an organisation narrow its open web or Deep & Dark Web sources of information and focus only on core areas of concern, such as cybercrime, fraud-loss avoidance, emergent malware, disruptive attacks, or public exploits, for example. It also puts security analysts and decision makers in a position to be proactive about future threats and inform risk-based decisions.

Worthwhile Challenges

Once there is an understanding of assets and exposure based on such specific and tailored questions, work on equally narrow intelligence requirements may begin. In the above examples, an organisation may establish a requirement that certain threat-actor profiles be developed, or that intelligence on only a handful of pertinent vulnerabilities and exploits be produced. If threat actors have used a zero-day attack exploiting a previously undisclosed Adobe Flash vulnerability against other organisations, and you’ve blocked Flash usage on employee devices, these incidents have little bearing on your operation.

This is the type of tactical, operational, or strategic intelligence organisations require to inform decisions and lessen risk. It all begins with intelligence requirements, and going a layer higher, the legwork required to support the development of viable intelligence requirements is challenging. It’s also worthwhile and supports the ultimate outcome for any security and risk team: preserve an organisation’s resiliency and operational continuity.


NTT Security Corporation acquires WhiteHat Security

NTT Security Corporation (Tokyo) has signed a definitive agreement to acquire privately-owned WhiteHat Security, the leading application security provider committed to securing applications that run enterprises’ businesses. Post-acquisition, WhiteHat Security will operate as an independent, wholly-owned subsidiary of NTT Security Corporation.
As a result of this acquisition, NTT Security will provide the world’s most comprehensive end-to-end cybersecurity solutions. Together, working hand-in-hand, the two organizations will address enterprise security needs that range from IT infrastructure to critical business applications, covering the full lifecycle of digital transformation.
Without question, this exciting new acquisition expands NTT Security’s portfolio, allowing its customers and partners to benefit from WhiteHat Security’s industry-leading, cloud-based Application Security Platform. WhiteHat’s customers and partners will have access to NTT Security’s consulting and advisory services, along with its next-generation, platform-based Managed Security Services.
“NTT Security’s overarching goal is to provide comprehensive, game-changing cybersecurity solutions that address the broader needs of digital transformation. WhiteHat is recognized globally as a leader and pioneer in the application security cloud services and DevSecOps spaces,” said Katsumi Nakata, Chief Executive Officer, NTT Security. “By bringing WhiteHat Security into our portfolio we are now well positioned to deliver on our vision of securing a smart and connected society by providing comprehensive security solutions for enterprises undergoing digital transformation.”
“WhiteHat has been at the center of application security, providing wide-reaching solutions to its customers and partners, and we will continue to invest in our people and technologies to maintain that leadership,” said Craig Hinkley, CEO, WhiteHat Security. “The synergy between our two security-focused companies will enable our partners, customers and prospects to benefit from our combined cybersecurity solutions.”
NTT Security and WhiteHat Security, with their complementary solutions and services, will continue to invest in emerging technologies to secure their customers’ businesses. The acquisition enhances NTT Security’s ability to deliver high-performing and effective application security at a global scale.
