Three US hospitals fall victim to ransomware

Reports by the BBC today revealed that three more US hospitals have been hit by ransomware. Methodist Hospital in Kentucky, along with Chino Valley Medical Center and Desert Valley Hospital in California, were all hit, but their systems are now up and running again. This is the latest in a string of high-profile ransomware incidents at hospitals this year, following attacks on Hollywood Presbyterian Medical Center in California and on both the Lukas Hospital and the Klinikum Arnsberg in Germany.

David Gibson, VP of Strategy and Market Development at Varonis, provides the following recommendations:

“Hospitals, like all organisations, will struggle to prevent, detect and recover from ransomware. Authors create variants too quickly to expect A/V and signature-based defenses to prevent all infections, and ransomware is difficult to detect because file system activity is rarely logged or analysed. Without a record of file system activity, ransomware is also difficult to recover from, because you don’t know which users were (or still are) infected, which files were encrypted, or when.

Here’s what you can do to help yourself:

- Expect to be infected
- Start logging file system access activity and store it for forensics
- Use automation to analyse and alert on unusual file system activity (this successfully detects and stops many ransomware infections in their tracks, as well as many other things that are worse)
- Make sure data stored on workstations is backed up
- Make sure your file servers are backed up
- Increase the frequency of your backups
- Keep your backups longer
- Make sure incident response plans address what you’ll do if you don’t have adequate backups or logging. (For example, if you don’t have a searchable log of activity, you’ll probably need to manually inspect your file servers to see what else has been encrypted.)”
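The automated-alerting recommendation above can be illustrated with a minimal, hypothetical sketch of a rate-based heuristic: flag a user or host when file-write events exceed a threshold within a sliding window, one signal commonly associated with ransomware encrypting files in bulk. The class name and thresholds here are invented for this example; real products use far richer behavioural signals.

```python
# Sketch only: rate-based anomaly flagging for file-write activity.
# Thresholds are illustrative, not tuned for any real environment.
import time
from collections import deque


class FileActivityMonitor:
    """Flags an account when file-write events exceed a rate threshold."""

    def __init__(self, max_events=100, window_seconds=10):
        self.max_events = max_events
        self.window = window_seconds
        self.events = deque()  # timestamps of recent write events

    def record_write(self, timestamp=None):
        """Log one file-write event; return True if activity looks suspicious."""
        now = timestamp if timestamp is not None else time.time()
        self.events.append(now)
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events


monitor = FileActivityMonitor(max_events=100, window_seconds=10)
# Normal activity: a single write is not flagged.
assert not monitor.record_write(timestamp=0.0)
# A burst of 200 writes within one second trips the alert.
flagged = [monitor.record_write(timestamp=1.0) for _ in range(200)]
assert flagged[-1]
```

In a real deployment the event stream would come from file server audit logs rather than direct calls, and an alert would trigger session termination and forensics rather than a boolean.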



How Can Hosting Providers Protect Their Customers from DDoS Attacks?

Dave Larson, Chief Operating Officer, Corero Network Security

Almost every week there’s a new instance of a DDoS attack wreaking havoc on its victims, costing them revenue and customers as a result of network outages. DDoS attacks have come a long way from their humble beginnings as a tool of the bedroom hacker; today they are deployed by everyone from state-sponsored attackers to entry-level hackers, making them one of the most common forms of cyber threat activity. What began as a simple volumetric attack has since evolved into a far more complex form of malicious activity with several different forms and purposes.

As the attacks change, so must our approach. An important step is looking further upstream and questioning the role that service providers have in mitigating the DDoS threat. This is something I explain to consultants with the following analogy:

Imagine running a bath and seeing that a quarter of the water coming through the tap was contaminated. When the bill from the water company arrived, I can’t imagine anyone being too happy to pay for a contaminated supply. People can justifiably look at their Internet service in the same way.

If a hosting provider isn’t offering effective DDoS mitigation as part of its service, it may be sending useless and potentially harmful traffic across its customers’ networks. If people refuse to pay the water company for contaminated water, why are so many companies paying for a similar situation with their hosting and service providers?

With Internet traffic, the problem is that customers can’t accurately visualise all the traffic flowing across their network, and analysing it is far too big a job for existing staff to handle. Whether it’s a sub-saturation attack designed to probe or weaken certain aspects of a network, or a huge flood attempting to knock the whole operation offline, customers aren’t able to hold providers to account in quite the same way, despite the second-rate service they may be receiving.

The legacy solution for hosting providers was to black-hole traffic: if a suspected DDoS attack was taking place, traffic would be sent to an IP address that doesn’t exist. However, this also sends the good traffic to that non-existent address, meaning legitimate users can’t reach the site or service they were hoping to visit, costing the business money and customers. This does the attackers’ work for them: the site is rendered unusable by the response to the DDoS attack, even after the attack itself has subsided.
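The collateral damage of black-holing can be seen in a toy model; the function, packet shape and addresses below are illustrative, not from the article or any real router implementation:

```python
# Toy model of null-routing: once a destination is black-holed,
# every packet to it is dropped, legitimate or not.
def route(packet, blackholed_destinations):
    """Return 'dropped' for any packet to a black-holed destination."""
    if packet["dst"] in blackholed_destinations:
        return "dropped"
    return "forwarded"


blackholed = {"203.0.113.10"}  # victim IP null-routed during an attack
attack = {"src": "198.51.100.7", "dst": "203.0.113.10"}
legit = {"src": "192.0.2.44", "dst": "203.0.113.10"}

assert route(attack, blackholed) == "dropped"
# The legitimate customer request is collateral damage:
assert route(legit, blackholed) == "dropped"
```

The point of inline mitigation, by contrast, is to distinguish the two flows so only the first is dropped.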

Fast-forward to today and the technology has not only caught up with the hackers, but has surpassed their capabilities altogether. There are now technological innovations that utilise real-time mitigation tools installed directly inline with the peering point, meaning customer traffic can be protected as it travels across an organisation’s network.  Such innovations mean providers are better positioned than ever before to offer effective protection to their customers, so that sites and applications can stay up and running, uninterrupted and unimpeded.  

Fortunately, hosting providers are starting to deploy this technology as part of their service package to protect their customers, and the latest solutions are scalable and automated.  This maximises efficiency and minimises the need for human intervention – which should act as a gigantic aspirin for the headaches caused by DDoS attacks in the past. Providers can tune these systems so that customers only get good traffic, helping their sites run far more efficiently. It’s a win-win for both sides, as providers’ services become more streamlined and reliable, protecting their reputation and attracting more customers. The upside for the customer is that they’re no longer paying for poorly filtered traffic.

If purpose-built technology is laid out at ISPs’ peering points, DDoS traffic is halted before it can enter their networks. This is effectively shutting the door on the DDoS traffic, while leaving a window open for the legitimate user traffic to still get in. For security staff and service administrators, this means no more calls in the middle of the night, no more downtime and most importantly, no more victims of DDoS attacks.

A case in point is SdV Plurimédia, a French hosting provider. It handles huge amounts of traffic and, like any other hosting provider, experiences DDoS attacks at scales capable of derailing its networks. SdV Plurimédia guarantees customers 24/7 operability, a risky promise if DDoS attacks are a persistent concern.

Because the automated technology it deployed was simple to implement, SdV Plurimédia didn’t have to reconfigure any elements of its network. It chose an option that sits inline and is dedicated to mitigating DDoS attacks at the edge of the network, meaning the threat was removed and business for its customers could carry on as usual, without sudden surprises coming downstream. As SdV’s example shows, the technology is readily available, so why not encourage more conscientious behaviour within the industry?

So our advice for businesses is as follows: when shopping around for a hosting provider, watch out for companies that don’t provide security as part of their service offering, since they may end up charging you for traffic you don’t need and certainly shouldn’t be paying for. Opting for a company that offers security as a service will save you many of the expensive call-outs, downtime and customer losses that go hand in hand with the DDoS attacks negligent providers allow to run their course.



Expert Insight: CISOs’ options on Badlock

Following the announcement of the Badlock vulnerability earlier this week, @DFMag obtained the following insight from Cris Thomas (a.k.a. Space Rogue), Strategist at Tenable Network Security:

“We have three weeks before technical details are public and a patch is issued, but CISOs are already getting questions from their executives and boards about how they are preparing for Badlock and they need to have an answer. 

“Few confirmed details about Badlock have been released, but it could be a major, wide-reaching vulnerability because Samba is a file sharing and printing implementation of the SMB protocol that is integrated into most operating systems. If an organization leaves the vulnerability unpatched, it could grant administrator or root access to every user account on the network, or possibly allow remote code execution. Smart CISOs should start planning now to patch immediately once the fix is available. They should also prepare for the possibility that the vulnerability may be discovered or leaked before the patch is available.

“With the unusual case of the PR announcement coming so far ahead of the patch, Samba has now become a prime target for hackers wanting to find Badlock before it is patched—as well as other previously undiscovered vulnerabilities to exploit. 

“CISOs already have the tools in their arsenals to begin preparations, but then comes the part a lot of people forget about: communicating your strategy and overall security status to the board with language and metrics they will understand and be able to act on. 

“The upside to such an early announcement is that it presents CISOs and CIOs a rare opportunity to get ahead of the vulnerability conversation and set expectations about the response, rally resources and make sure they are in the best possible position to succeed.”



ICS/SCADA system hacked at a Water Treatment Plant

Hackers were able to infiltrate an ICS/SCADA system at a water treatment plant and alter crucial settings that controlled the amount of chemicals used to treat tap water, according to Verizon’s 2016 Data Breach Digest. As well as running outdated computers, the system was exposed to the Internet because traffic was routed through a web server where customers could check their monthly water bills.

Lamar Bailey, Senior Director of Security R&D for Tripwire, told @DFMag:

“Poor designs and misconfigurations lead to countless security incidents. An entity can purchase all the security products in the world and acquire the best staff available, but if the network has gaping holes in the perimeter, or DMZ machines have unfettered access to the secure side of the network, it is only a matter of time before an attack succeeds. A network first needs to be a defendable position with clearly defined borders on which layers of security are built. It is imperative that companies examine their networks from the outside to see what is exposed and what “windows” are left open.

Utility infrastructure entities have become prime targets for hacktivists and terrorists, so administrators must be even more diligent in securing these locations. They are softer targets due to the antiquated, insecure way in which internal systems communicate, so once the outer shell is broken it can be trivial to cause havoc within the network.”
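“Examining the network from the outside”, as Bailey recommends, can start as simply as checking which TCP ports on your own perimeter hosts accept connections. The sketch below uses Python’s standard library; the host and port list are placeholders, and such checks should only ever be run against systems you are authorised to test.

```python
# Minimal external-exposure check: which TCP ports accept connections?
# Only run against hosts you own or are authorised to assess.
import socket


def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found


# Example: probe a few common service ports on the local machine.
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

A real assessment would of course use a purpose-built scanner and cover UDP, banners and versions, but the principle is the same: enumerate what is reachable before an attacker does.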



Malware targeting Apple’s OS X unveiled

The number of reported vulnerabilities in OS X software almost tripled during 2015, as Apple’s rising market share made it more lucrative for malware authors to target the platform. AlienVault researchers have analysed some of the main strains of OS X malware that contributed to this sharp rise, including the Mask, a sophisticated malware used for cyber espionage, and OceanLotus, which was discovered last year and found to be attacking Chinese government infrastructure.

The full details can be found in AlienVault’s report.



Alarming Data Collected by Varonis Reveals Why Most Companies Are Easy Prey for Cyber Attackers

Varonis Systems, Inc., a provider of software solutions that protect data from insider threats and cyberattacks, today revealed the results of a year of anonymised data collected during risk assessments conducted for potential customers on a limited subset of their file systems. The 2015 results show a staggering level of exposure in corporate file systems, including an average of 9.9 million files per assessment that were accessible by every employee in the company.

Of the insights gleaned from dozens of customer risk assessments conducted in mid-to-large enterprises prior to remediation, in a subset of each company’s file systems, Varonis found the average company had:

- 35.3 million files, stored in 4 million folders, meaning the average folder holds 8.8 files
- 1.1 million folders, or an average of 28% of all folders, with “everyone” group permission enabled – open to all network users
- 9.9 million files that were accessible by every employee in the company, regardless of their roles
- 2.8 million folders, or 70% of all folders, containing stale data – untouched for the past six months
- 25,000 user accounts, with 7,700 of them, or 31%, “stale” – having not logged in for the past 60 days, suggesting former employees, employees who changed roles, or consultants and contractors whose engagements have ended

The ‘everyone’ group is a common convenience when permissions are originally set up, but that mass access also makes it astonishingly easy for hackers to steal company data.
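The exposure figures above reduce to simple ratios. As a sanity check, a short script using the averages quoted in the text reproduces the quoted percentages (variable names are ours, not Varonis’s):

```python
# Derive the quoted exposure metrics from the average assessment figures.
total_folders = 4_000_000
open_folders = 1_100_000      # folders with "everyone" permission
stale_folders = 2_800_000     # untouched for six months
total_files = 35_300_000
user_accounts = 25_000
stale_accounts = 7_700        # no login in the past 60 days

print(f"files per folder:    {total_files / total_folders:.1f}")    # ~8.8
print(f"open folders:        {open_folders / total_folders:.1%}")   # ~27.5%
print(f"stale folders:       {stale_folders / total_folders:.1%}")  # ~70%
print(f"stale user accounts: {stale_accounts / user_accounts:.1%}") # ~31%
```

Note the open-folder ratio works out to 27.5%, which the report rounds up to 28%.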

Some individual companies’ lowlights gleaned from the Varonis risk assessments:

- In one company, every employee had access to 82% of the 6.1 million total folders.
- Another company had more than 2 million files containing sensitive data (credit card, social security or account numbers) that everyone in the company could access.
- 50% of another company’s folders had “everyone” group permission, and more than 14,000 files in those folders were found to contain sensitive data.
- A single company had more than 146,000 stale users – accounts whose owners had not logged in for the past 60 days. That’s nearly three times more users than the average Fortune 500 company has total employees.

David Gibson, Vice President of Strategy and Market Development at Varonis, said, “Although this data presents a bleak look at the average enterprise’s corporate file system environment, the organisations running these risk assessments are taking these challenges seriously. Most of them have since implemented Varonis, embracing a more holistic view of the data on their file and email systems and closing these gaping, often unseen security holes before the next major breach causes heavy damage. Our software is able to provide a granular look at where sensitive data lives, where it is over-exposed within an organisation, who is accessing that data, and how to lock it down. While that remediation process is running, our ability to start detecting and stopping many types of insider threats has been a major revelation for our customers.”



How a SOC can help the SME become more threat-focused

By James Parry, Technical Manager, Auriga

There’s been a fundamental shift in cyber security away from prevention and towards threat detection. Why? Because the sheer scale of cyber attacks is making a purely defensive posture unsustainable. A recent survey by Business Reporter found that 78 percent of UK companies had experienced an increase in cyber attacks during the past year. The conclusion is that attacks are inevitable and that, rather than defending against every eventuality, the organisation should focus and allocate resources through threat intelligence.

It’s now imperative that the business be aware of and monitor threat developments. However, creating and maintaining a comprehensive Security Operations Center (SOC) capable not only of capturing but also of identifying relevant threats is both time and cost intensive. For this reason, next generation SOC services capable of extensive monitoring and data crunching have typically been the tool of large corporates or enterprises that can afford to outsource this capability, ruling them out as an option for the majority of SMEs.

This creates a real dilemma for the SME, which is effectively left exposed and vulnerable, typically receiving little or no warning of an impending attack beyond the alerts generated by its own network defenses. Consequently, the SME is forced into a reactive, defensive position. The scale and severity of the attack, its origins and its motivation are all unknowns. Effectively, the SME is fighting blind.

Solutions such as Compass from Auriga, a scalable next generation Security Operations Center (SOC), are designed to meet the security monitoring needs of today’s hyperconnected small and medium sized business combating threats from numerous sources. This type of solution enables the SME to adopt a proactive rather than a reactive stance by providing real-time threat intelligence as identified and assessed by a dedicated team of data security analysts.

Next generation SOC services differ in that they enable metadata to be aggregated from a multiplicity of sources and to be analysed and assessed in real-time. This means vast amounts of data can be gathered and analysed not only from routine traffic traversing the network but also from dynamic data generation sources such as social media and even the darknet.

Take, for instance, the recent high-profile Distributed Denial of Service (DDoS) attacks that have caught some organisations unawares. How many of the organisations impacted or directly compromised would have liked to know a DDoS attack was imminent? Armed with that foresight, an organisation can prepare for such an attack months ahead by following key trends that vary by sector, region, company profile, operational model and technical complexity. Such knowledge can also help shape and inform future business plans, steering the company out of harm’s way.

Outsourcing this aspect of cyber security to a next generation SOC gives the SME limited financial exposure and minimal risk while benefiting from state-of-the-art monitoring that can be tailored to its market sector, geographic area and other criteria, effectively giving the SME a finger on the pulse of what is happening. This form of tailored threat intelligence can be further expanded to include threat forecasting and business intelligence, so that the business is not only aware of emerging threats but able to anticipate and counter them.

So what, as an SME, should you be looking for when choosing a next generation SOC service? First and foremost, consider scalability. Look for a service that can grow with your business and be tailored to your specific needs. You might elect to start off small, perhaps monitoring traffic from specific locations, and during limited times of day, for instance.

Essential to a SOC is a Security Information and Event Management (SIEM) tool. A SIEM collects, aggregates, correlates and interrogates the ‘events’ or threats detected, as well as generating alerts and reports. Ideally you want a SIEM to be able to perform real-time and historical cross-correlation at phenomenal speed, so do ask about processing performance.
The logging of events is also important, but don’t confuse this with SIEM. Event log and network flow data consolidation is about raw information and storage, making it very useful for auditing and compliance purposes. Logs are essential for event source identification, for instance. But unlike a SIEM, logging doesn’t interrogate that data and compare it to different rule sets to look for attack patterns. Look also at how event log data is secured. Is its integrity protected using a mechanism such as HMAC, for instance?
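As one illustration of HMAC-protected logging, here is a minimal sketch using Python’s standard library. The key, log format and function names are placeholders invented for this example; in practice the key would come from a key management system, not source code.

```python
# Sketch: append an HMAC-SHA256 tag to each log line so tampering is
# detectable. Key management is out of scope; the key is a placeholder.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"


def sign_entry(entry: str) -> str:
    """Append an HMAC-SHA256 tag to a log line."""
    tag = hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return f"{entry}|{tag}"


def verify_entry(signed: str) -> bool:
    """Check that a signed log line has not been altered."""
    entry, _, tag = signed.rpartition("|")
    expected = hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)


line = sign_entry("2016-04-12T03:14:07Z user=alice action=file_read")
assert verify_entry(line)
# An attacker editing the entry invalidates the tag:
assert not verify_entry(line.replace("alice", "mallory"))
```

Note the constant-time comparison via `hmac.compare_digest`, which avoids leaking tag information through timing.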
When it comes to threat detection, the idea is to dig deep, so do enquire as to the range of sources covered in terms of geography, sector and network traffic, and the numbers involved. How many threat intelligence feeds are typically analysed? Is the provider able to adapt that analysis to continuously learn from suspect network traffic, threat patterns and risks to your business?
Finally, be aware that it’s intelligence, not incidents, you are paying for. So look at the human face behind the machine. What is the size and experience of the team of analysts interpreting the results from the SOC? Are you being offered a fully managed security service or simply a reporting service? What levels of network, visualisation and application intelligence are on offer? And how will real threats be acted upon in terms of incident management? Unless the SOC can integrate with the way your business functions and convert that intelligence into action, any benefit will not be fully realised.

As our economy becomes more hyperconnected through the use of wearable tech and the Internet of Things, the attack surface of the business, and our susceptibility to attack, will increase. Outsourcing SOC services offers the scale and the flexibility to monitor and stave off multiple threats in real time. For the SME, being able to respond to and mitigate those threats isn’t just about keeping ahead of the competition; it’s about survival. A SOC’s capability will be a major determining factor in whether a business survives or thrives.

James Parry can be contacted at james.parry@aurigaconsulting.com



Damaging Consequences of DDoS Attacks Revealed in Survey Results

What is the most damaging consequence of DDoS attacks to businesses? Losing the trust and confidence of your customers, according to nearly half of the IT security professionals participating in Corero Network Security’s second annual DDoS Impact Survey, released today. The industry study polled technology decision makers, network operators and security experts attending the recent 2016 RSA Conference about key DDoS issues and trends that Internet service providers and businesses face in 2016.

“Network or website service availability is crucial to ensure customer trust and satisfaction, and vital to acquire new customers in a highly competitive market,” said Dave Larson, COO at Corero Network Security. “When an end user is denied access to Internet-facing applications or if latency issues obstruct the user experience, it immediately impacts the bottom line.”

Nearly half (45%) of the IT security professionals who responded said loss of customer trust and confidence was the most damaging consequence of DDoS attacks for their businesses, while 34% said lost revenue was the worst effect.

DDoS attacks get the most attention when a firewall fails, a service outage occurs, a website goes down or customers complain, but Larson warns that companies should be concerned about DDoS attacks even when they are not large-scale, volumetric attacks that saturate a company’s network and associated server infrastructure. Approximately one third (32%) of survey respondents indicated that DDoS attacks on their network occur weekly or even daily. “That is a troubling, yet not surprising, statistic because DDoS attacks are incredibly inexpensive to create and relatively easy to deploy,” said Larson.

“Industry research, as well as our own detection technology, shows that cyber criminals are increasingly launching low-level, small DDoS attacks,” said Larson. The problem with such attacks is twofold: small, short-duration DDoS attacks still negatively impact network performance, and, more importantly, such attacks often act as a smokescreen for more malicious attacks. “While the network security defenses are degraded, logging tools are overwhelmed and IT teams are distracted, the hackers may be exploiting other vulnerabilities and infecting the environment with various forms of malware.”

Larson noted that small DDoS attacks often escape the radar of traditional scrubbing solutions. Many organizations have no systems in place to monitor DDoS traffic, so they are not even aware that their networks are being attacked regularly.

The survey also asked participants about their current methods of handling the DDoS threat; nearly one third (30%) of respondents rely on traditional security infrastructure products (firewall, IPS, load balancers) to protect their businesses from DDoS attacks. “Those companies are very vulnerable to DDoS attacks because it’s well-documented that traditional security infrastructure products aren’t sufficient to mitigate DDoS attacks,” said Larson.

Interestingly, 30% of respondents currently rely on their upstream service providers to eliminate the attacks, yet an overwhelming majority (85%) of respondents indicated they believe upstream Internet Service Providers should offer additional security services to their subscribers to remove DDoS attack traffic completely. Furthermore, 51% responded that they would be willing to pay their Internet Service Provider(s) for a premium service that removes DDoS attack traffic before it is delivered to them, and 35% indicated they would allocate 5-10% of their current ISP spend to subscribe to this type of service.

“Clearly the majority of organizations need and are willing to pay for a service that protects them from DDoS attacks,” said Larson. “Fortunately we offer the industry-leading in-line, real-time DDoS mitigation solution that allows Internet Service Providers to easily meet that demand. The Corero SmartWall Threat Defense System can be deployed at the very edge of the network or Internet peering points to effectively inspect all Internet traffic and mitigate DDoS attacks in real-time before they can inflict damage downstream.”



FBI warns of car hacking: expert comment

In light of the news that the FBI has issued a warning about car hacking, Cesare Garlati, chief security strategist for the prpl Foundation has commented:

“Perhaps it goes without saying that the most dangerous part of the connected car is the “connected” part. Criminals, using a little lateral thinking, can use one part of the car’s anatomy to get to another. This could have dangerous consequences if hackers found their way into more critical functions, such as the steering and brakes, as researchers were able to do with a Jeep back in 2015.

“The lack of subject matter expertise among mechanical and electrical engineers is leaving systems wide open to attack. While it’s unfair to expect them to shoulder this burden, it is also unfair to place the onus squarely on the consumer, who is likely to know even less about security. This is something which vendors, regulators and manufacturers must carefully consider as the evolution of connected cars continues.”



DDoS attack on Swiss Federal Railways: expert comment

A recent DDoS attack on Swiss Federal Railways brought the following comments from Dave Larson, COO at Corero Network Security, to @DFMag:

“Organizations, government agencies, or even infrastructure operators that rely on traditional IT security tools to protect against DDoS attacks are placing themselves at even greater risk from these devastating cyber threats. A DDoS attack, whether volumetric in nature or application targeted, can lead to disastrous repercussions: latency issues, service degradation and potentially damaging, long-lasting service outages. Thankfully in this case the outcome was not a threat to the public at large; however, the service-impacting nature of the attack underlines the need for dedicated, real-time DDoS protection.”