Insurance Nexus by Reuters Events Releases the Connected Auto Insurance 2020 Report

The automotive sector is on the cusp of a huge wave of change, rivalled only by historic moments such as Ford Model Ts rolling off the assembly line or the deep-seated impact of the 1973 oil crisis. This time, however, it is not just one technological frontier disrupting the sector, but multiple innovations that are already making their mark.

Insurance Nexus by Reuters Events has produced the Connected Auto Insurance 2020 report to make sure auto insurance businesses, whether personal or commercial, can deliver on customer expectations and maximize the opportunities that available technologies like telematics, IoT, AI and analytics offer.

As well as insight gathered from over 1,200 North American insurance executives, the report offers detail on what this means for an insurance organization from industry experts, including:

  • Shannon Lewandowski, Innovation and Digital Team – IoT, American Modern
  • Lorenzo Morganti, Big Data/AI Senior Project Lead, AXA
  • Glen Clarke, Head of Transformational Propositions, Allianz
  • Eugene Y. Wen, Vice President, Group Advanced Analytics, Manulife
  • Amrish Singh, Vice President of Product, Enterprise, Metromile
  • Allison Whittington, Head of Housing, Zurich Municipal

And many more…

Download the report now

By downloading the report, readers can discover the vital strategic steps they must take in 2020 in order to keep pace with an ever-evolving auto insurance ecosystem, validated by industry statistics based on 1,200 insurance carrier executives and technology leaders.

Justify next steps for investment with seven easy-to-decipher infographics that clearly demonstrate technology trends, carrier ambitions, investment strategies and partnerships, and learn from your peers through three in-depth case studies focusing on ‘Open APIs Open Up Business Opportunities,’ ‘Tracking Through Tags, Pulses and Apps,’ and ‘Enabling Mobility-Based Insurance.’

You can also access exclusive viewpoints including James Spears’ take on ‘OEMs Muscling In: The Battle for FNOL’ so that your next step towards OEM collaboration is informed and profitable.

Understand the ‘state of the industry’ and where it’s heading through a wealth of articles, commentary, and debate on the impact of OEMs and how carriers will respond, new models of car ownership, autonomous vehicles and commercial fleet developments so that you remain on the cutting edge.

Have any comments? Get in touch and learn about the Auto Insurance USA conference, April 16-17, Chicago. The conference website is available here: https://events.insurancenexus.com/auto/



Cyan Forensics Announces New Chair to Lead Venture into the Next Stage of Growth

Cyan Forensics – the Edinburgh-based company working towards a world in which harmful digital content can no longer be easily hidden or shared – has announced that Paul Brennan is taking over as chair to guide the company through its next stage of growth.


Cyan Forensics’ digital forensic analysis tools find child sexual abuse images on devices within minutes, and its product is currently being rolled out to police forces across the UK. Its products can also be applied in the field of counter-terrorism, and by social media and cloud companies to find and remove harmful content online.


Brennan offers a wealth of commercial experience helping to steer technology organisations into the international arena, with a particular focus on the US and Europe. Former chair Simon Hardy will remain on the board, bringing with him experience from more than a decade of providing high-technology solutions to law enforcement worldwide. Hugh Lennie, Cyan Forensics’ Chief Finance Officer (CFO), also joins the expanded board line-up, bringing his extensive experience of building, growing and exiting businesses.


Paul Brennan, new Chair of Cyan Forensics, comments: “I am delighted to have the opportunity to help shape Cyan Forensics’ forward momentum. Cyan Forensics’ technology has multiple applications to offer solutions that can make a real difference to protect people from online harms. The company has seen much success in its first three years of business and I look forward to supporting its expansion, following a recent contract with the UK Home Office, and into new markets in Northern Europe and the US.”


Ian Stevenson, CEO of Cyan Forensics, said: “We welcome Paul Brennan and Hugh Lennie onto our board, and are fortunate to retain the experience of our former Chair Simon Hardy. We are at an exciting stage of growth where our product is going into many police forces across the UK to help catch paedophiles much faster, and we are now in a strong position to enter the European market, as well as making greater inroads in helping law enforcement in its fight against terrorism.”


Cyan Forensics was founded in 2016 by Bruce Ramsay, a former police forensic analyst and now the company’s CTO, and CEO Ian Stevenson. Last month the business confirmed a successful new round of funding from Triplepoint, Mercia, Social Investment Scotland Ventures, the Scottish Investment Bank and private investors, bringing the total raised by the company to £2.8m.


Last year Cyan Forensics announced partnerships with America’s National Center for Missing & Exploited Children and the UK Home Office’s Child Abuse Image Database (CAID).


Cyan Forensics is addressing a huge and growing problem for society. At the end of 2019, the WeProtect Global Alliance Threat Assessment report estimated that 750,000 individuals across the globe are attempting to connect with children online for sexual purposes at any one time. Technology companies also reported a record 45 million online photos and videos of child abuse last year – a figure that was less than a million just five years ago and more than double what was reported the previous year, according to the National Center for Missing and Exploited Children (NCMEC).


Clearview AI’s entire client list stolen in data breach – Comment

It has been reported that Clearview AI suffered a data breach that involved its entire list of customers. Clearview’s clients are mostly law enforcement agencies, with police departments in Toronto, Atlanta and Florida all using the technology. The company has a database of 3 billion photos that it collected from the internet, including websites like YouTube, Facebook, Venmo and LinkedIn. This comes on the heels of its photo-scraping and facial recognition capabilities raising major privacy concerns.

Commenting on this, Tim Mackey, principal security strategist within the Synopsys CyRC (Cybersecurity Research Center), said “In cybersecurity there are two types of attacks – opportunistic and targeted. With the type of data and client base that Clearview AI possesses, criminal organisations will view compromise of Clearview AI’s systems as a priority. While their attorney rightly states that data breaches are a fact of life in modern society, the nature of Clearview AI’s business makes this type of attack particularly problematic. Facial recognition systems have evolved to the point where they can rapidly identify an individual, but combining facial recognition data with data from other sources like social media enables a face to be placed in a context which in turn can enable detailed user profiling – all without explicit consent from the person whose face is being tracked. There are obvious benefits for law enforcement seeking to identify missing persons to use such technologies for good, but with the good comes the bad.

I would encourage Clearview AI to provide a detailed report covering the timeline and nature of the attack. While it may well be that the attack method is patched, it is equally likely that the attack pattern is not unique and can point to a class of attack others should be protecting against. Clearview AI presents a target for cyber criminals on many levels, and, as is often the case, digital privacy laws lag technology innovation. This attack now presents an opportunity for Clearview AI to become a leader in digital privacy as it pursues its business model based on facial recognition technologies.”


GDPR improves dwell times

Organisations are detecting and containing cyber attacks faster since the introduction of GDPR in 2018, according to a report from FireEye Mandiant. In the EMEA region, the ‘dwell time’ for organisations – the time between the start of a cyber intrusion and its identification – has fallen from 177 days to 54 days since the introduction of GDPR. There has also been a decrease in dwell time globally, which is down 28 percent since the previous report. The median dwell time for organisations that self-detected their incident is 30 days, a 40 percent decrease year on year. However, 12% of investigations continue to have dwell times of greater than 700 days.

Jake Moore, Cybersecurity Specialist at ESET:

“It’s great to see a positive GDPR story – and this is exactly what it was designed to help with. Dwell times have notoriously been longer than they should be over the years, but this statistic really shows that GDPR regulations are working, and that organisations are becoming more secure in the process. GDPR shouldn’t be seen as an inconvenience, but instead as a remedy to improve security. There is simply no excuse to have a dwell time of over 700 days and I would imagine that the 12% of companies that do would require a serious security overhaul.”


ISS World hack leaves thousands of employees offline – Comment

It has been reported that a cyber-attack has hit the major facilities company, ISS World, which has half a million employees worldwide. Its websites have been down since 17 February, and This Week in Facilities Management said 43,000 staff at London’s Canary Wharf and its Weybridge HQ, in Surrey, still had no email.

Commenting on this, Sam Curry, chief security officer at Cybereason, said “In the case of the ISS World ransomware attack, and all ransomware attacks for that matter, corporations can either become a hero or a villain. In the adrenaline rush of “crisis mode,” I hope the executives and security staff of ISS World choose to be heroes by protecting employees, being transparent and erring on the side of doing the right thing. We all hope for minimum damage, rapid recovery and strengthening of ISS World in the wake of this and of peers from their experience when the dust clears. In any cyber attack, transparency and clarity is what matters and like so many others we’ll wait to hear more in the coming days. Recently, Travelex suffered a significant breach and leadership was widely criticized for a slow response. That criticism was coming from pundits without specific knowledge of the incident. Let’s not “bayonet the wounded” because being a target and a victim is happening more and more frequently. Organizations today need to take a much more proactive approach to cyber hygiene by actively hunting for anomalies in their networks. Preventing, detecting and responding to incidents has to be highest on the list of steps being taken to minimize and reduce high impact breaches.”


Watchdog probes Redcar council cyber-attack

As reported by the BBC, a watchdog is probing a cyber-attack on Redcar and Cleveland Borough Council, which was still unable to provide any online services more than a week after its systems were crippled. The council’s website and all computers at the authority were attacked last Saturday, affecting 135,000 residents. The council notified the Information Commissioner’s Office (ICO) – the watchdog said the authority had “made us aware of an incident and we are assessing the information”.

Jake Moore, Cybersecurity Specialist at ESET:

“This indeed has all the hallmarks of a ransomware attack. The knock-on effects just show the devastation that this simple yet effective attack can leave in its wake.

“This is by no means the first ever council to be hit with ransomware, nor will it be the last. Local governments have tight budgets but sadly, IT security still appears way down the priority list with some leaders. I would be surprised if this council was unaware of previous similar attacks, so it suggests they need a better understanding of how to protect their networks. Funding is a difficulty in local government but this is about assessing risk and must be addressed properly.

“Offsite backups can be restored in hours when they are set up correctly, so when they fail to be back up over a week later, serious questions should be asked. I never condone paying the ransom being asked as you can never be 100% certain you will see the money again, but no doubt the council will have this as a consideration if they are cornered. It’s better to prevent and protect rather than pay.”


EU unveils proposals to regulate AI

As reported by Verdict, the European Union will unveil a range of policy proposals to keep Big Tech in check. The package includes tougher rules for digital services, a single European data market and a white paper on artificial intelligence (AI).

The white paper is expected to include proposals for a regulatory framework for Europe’s AI sector, focused on high-risk sectors and high-risk uses of AI. This is likely to include biometric identification systems, such as facial recognition and deepfakes.

Please see here for the EU’s press release on the topic.

John Buyers, Head of International AI at Osborne Clarke LLP:

“Getting regulation right around a fast-changing, very powerful emerging technology is not easy and the Commission’s horizontal, one-size-fits-all approach is very ambitious. A lot of industries will be concerned that the right balance has been struck between enabling a vibrant European market in these new technologies and protecting the rights of EU citizens.

“For post-Brexit UK, this initiative is highly significant – we know that the government is actively considering regulatory divergence where it would serve UK interests. Data and AI are areas where we can’t assume the UK will opt for alignment. So this White Paper sets a clear threshold for UK regulatory bodies to work with in deciding the right direction for the UK AI industry. Which direction are we going to take? The decision could prove to be highly determinative.”


Millions of Windows and Linux systems vulnerable to cyber-attack – Comment

It has been reported that fresh firmware vulnerabilities in Wi-Fi adapters, USB hubs, trackpads and cameras are putting millions of peripheral devices in danger of a range of cyberattacks, according to research from Eclypsium. TouchPad and TrackPoint firmware in Lenovo Laptops, HP Wide Vision FHD camera firmware in HP laptops and the Wi-Fi adapter on Dell XPS laptops were all found to lack secure firmware update mechanisms with proper code-signing.

Commenting on this, Tim Mackey, senior principal consultant at the Synopsys CyRC (Cybersecurity Research Centre), said “With supply chain cyber attacks on the rise in 2019, this research should serve as notice to software publishers that they are a critical component of the digital supply chain – regardless of what type of software they provide. In the case of insecure update mechanisms, or lack of cryptographically secure validation mechanisms for their software, they open the door for malicious attacks. This is due to the reality that most end users are not equipped to validate the legitimacy of the software they use and rely on the software delivery process to perform all validation. Importantly, when they can’t locate what they believe to be a solution for their issues from the vendor, they’ll download a potential solution from the internet with the potential result of a malware infection. Since device firmware executes on a computer before the operating system starts, the protections present from anti-malware solutions are rendered ineffective due to the ability of malicious firmware to behave in ways that allow anti-malware to believe there is nothing wrong with the computer system.

In the end consumers of any software, whether it be packaged commercial software, IoT firmware, computer drivers, or open source solutions, should first directly contact the supplier of their software for any updates or patches. While it might be convenient to apply a patch following an internet search, the reality is that third-party repositories could easily host malicious versions of software. This is why the first principle of patch management is to know where the software came from as that’s where any patches need to also originate.”


Michael Barragry, operations lead at edgescan, added “It seems a bit strange that software signing has become a modern standard when it comes to various programs and executables in general, whereas for firmware it has apparently been ignored on a massive scale. The practice of software signing ensures that an end-user can verify that what they are downloading is from a trusted source and has not been tampered with by a malicious actor somewhere along the way. Failing to do this for firmware essentially gives a free pass for malicious code to enter your system. Depending on the hardware that falls under the control of the firmware in question, this could lead to a multitude of attacks. Addressing this threat from an industry-wide perspective is not a small task and will require collective effort and cooperation from hardware vendors and OS manufacturers alike.”
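
To illustrate the code-signing point in practical terms, here is a minimal sketch of how an update routine might refuse to apply a firmware image whose detached signature does not verify against the vendor’s published public key. The file names and the choice of Ed25519 are illustrative assumptions, not a description of any particular vendor’s update mechanism.

    # Hypothetical sketch: reject a firmware image unless its detached signature
    # verifies against the vendor's published Ed25519 public key.
    # File names and key format are illustrative assumptions.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def firmware_is_authentic(image_path, sig_path, pubkey_path):
        with open(pubkey_path, "rb") as f:
            public_key = Ed25519PublicKey.from_public_bytes(f.read())  # 32-byte raw key
        with open(image_path, "rb") as f:
            image = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, image)  # raises InvalidSignature if tampered with
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        if firmware_is_authentic("firmware.bin", "firmware.bin.sig", "vendor_ed25519.pub"):
            print("Signature valid: safe to apply the update.")
        else:
            print("Signature check failed: refuse to flash this image.")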


Penetration Testing in the Age of Artificial Intelligence

The world as we know it is rapidly being impacted by A.I.-driven technology. It was only a decade ago that smartphones came to prominence, and now the A.I. landscape is steadily taking the spotlight.

Markets rise and fall based on predictive algorithms, smart homes perform menial tasks based on people’s behavior and self-driving cars drive more accurately by the day, among other uses.

These innovations have been made possible by increased internet speeds, stronger computing hardware, and the rise of technologies like Edge and Cloud Computing.

However, all of the benefits come with equivalent hazards. Now that our personal data is situated in the cloud, it’s more vulnerable than ever to theft.

That’s why it’s not surprising that cybersecurity and secure server practices have received such heavy emphasis in recent years.

But, before we dive right into how A.I. fits into the whole cybersecurity puzzle, we first need to discuss the individual concepts in their current state.

White Hat, Black Hat

Cybersecurity, as a whole, encompasses a wide range of aspects, from the hardware level all the way to the social level. It serves as a direct response to the malicious practice commonly known as hacking.

While hacking itself is an even more general term, ranging from phishing scams to malware attacks, hackers themselves aren’t all bad.

“White hat” hackers, for instance, use the same toolkit and adhere to the same practices as their more hostile counterparts, but their intent leans towards the improvement of security rather than breaking it.

Penetration testing (or pen testing), otherwise known as ethical hacking or white hat hacking, is conducted by white hat hackers to combat the threat of malicious hackers, commonly known as “black hats.”

Pen testing is an authorized simulated cyberattack on a computer system with the main goal of detecting vulnerabilities and weaknesses of a system.

The whole process is an end-to-end test that starts from gathering the necessary information all the way to reporting all of the detected weak spots.
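
As a concrete illustration of that information-gathering stage, the short sketch below runs a basic TCP connect scan against a handful of common ports on a host the tester is authorized to probe. The target host name and port list are placeholders; a real engagement would use dedicated tooling and a clearly agreed scope.

    # Minimal TCP connect-scan sketch for the reconnaissance phase of an
    # authorized penetration test. Host and ports are illustrative placeholders.
    import socket

    TARGET = "scanme.example.com"   # replace with an in-scope host
    COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 3389, 8080]

    def scan(host, ports, timeout=1.0):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print("Open ports:", scan(TARGET, COMMON_PORTS))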

Contrary to popular belief, penetration testing doesn’t just involve hardware and software components; it also employs social engineering tactics to identify employees who are weak links in the security chain.

White hat hackers who conduct social engineering penetration testing do this by deceiving employees into giving out sensitive data or performing actions that create security weaknesses the hackers can slip through.

Automate Everything

Unfortunately for white hat hackers, their black hat counterparts are up-to-date on the latest cutting-edge technologies themselves.

A.I. is being used to drive large-scale bot networks (or botnets) that carry out massive Distributed Denial-of-Service attacks, among many other illicit activities.

In order to counter this pervasive threat, white hat hackers must be willing to keep up and adapt to the ever-changing landscape.

But, as with other industries that are automatable, there is a looming question of whether or not Artificial Intelligence would eventually replace the human aspect of penetration testing.

To answer that, we first need to examine the current state of A.I. and how it could supplant the need for manual intervention.

Even though science fiction novels and movies have led us to believe that A.I. would by now be so advanced that it could easily pass as human, unfortunately that’s not the case.

Chatbots have made great strides over the last decade, but they have a long way to go before they effectively mimic the way humans speak.

It’s an underrated aspect, but one that could make or break the social engineering side of probing potential attack vectors.

On the plus side, A.I. can sift through a hundred thousand lines of data in a matter of seconds. Even the search parameters don’t have to be extensive, since they can be adjusted on the fly.

Throw in a good Optical Character Recognition (OCR) plugin and it can also read text in pictures and handwritten notes.
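
A widely used open-source route to that OCR capability is Tesseract via the pytesseract wrapper. The sketch below, which assumes the Tesseract binary plus the pytesseract and Pillow packages are installed, pulls text out of an image such as a screenshot or photographed note gathered during reconnaissance.

    # Sketch: extract text from an image (e.g. a screenshot or photographed note)
    # using Tesseract OCR through the pytesseract wrapper.
    from PIL import Image
    import pytesseract

    def read_text(image_path):
        return pytesseract.image_to_string(Image.open(image_path))

    if __name__ == "__main__":
        print(read_text("captured_note.png"))  # illustrative file name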

Unlike humans, who get tired, A.I. can run 24/7 non-stop. Plus, it can easily be replicated and is highly extensible. You wouldn’t need to pay it for its services either.

All of these are highly relevant to the information gathering process when performing penetration testing.

However, given the unpredictability of human emotions and the probability of human error, tactics would need to be adjusted on the spot to whatever context presents itself.

This is something that A.I. could eventually learn to analyze, but currently its models need to be trained further to cater to the inherent randomness.

Given a proper setup, A.I. can seamlessly interface with different systems and follow its protocol exactly as designed – leaving little to no margin for error.

A.I. decision-making might be rigid (to an extent), but it is remarkably consistent, especially compared to humans. Not to mention, it can generate extremely detailed reports in the blink of an eye.

Humans are prone to error and lapses in judgment and, depending on the sensitivity of the information being handled, are harder to trust than technology that can be programmed or designed to “learn” new information.

These glaring weaknesses can be easily bypassed through the use of A.I. So, why hasn’t A.I. fully taken over this whole process yet?

Man and Machine Working Together

Even though A.I. has come a long way in the past two decades, it still has a long way to go before it can fully take over the different types of penetration testing processes.

Despite the growing stack of benefits, the biggest argument against handing complete control of penetration testing to A.I. is its reliability.

While the A.I. can follow a set of instructions, it can easily be exploited by hackers that are prepared to take on the automated defense system.

Here’s an example of how hackers can carry out their attack on AI-based cybersecurity systems. 

Machine learning – an application of A.I. – learns and gets “smarter” by observing patterns found in data and making inferences about their meaning, whether on a large neural network or on individual computers.

So if a certain action within a computer’s processors occurs at the same time that particular processes are running, and that action is repeated on the specific computer or neural network, the system will learn that the action means a cyber-attack is happening.

This also prompts the system to take the necessary actions to address the attack.

The tricky part, though, is that A.I.-savvy malware, for instance, can insert false data for the security system to read – the goal being to disrupt the patterns that machine learning algorithms utilize to make their decisions.

This means that fake data could be injected into a database to make it seem like a process that’s copying sensitive information is part of the regular IT system routine, and therefore, can just be ignored.   
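
A toy sketch of that idea: an anomaly detector trained on records of routine process activity flags an exfiltration-like burst as unusual, but if an attacker can seed the training data with similar fake “routine” records, the same burst no longer stands out. The features and the IsolationForest model below are illustrative assumptions, not a description of any particular product.

    # Toy illustration of ML-based detection and of training-data poisoning.
    # Features per time window: [processes running, MB copied to external storage].
    # Model choice and numbers are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Routine activity: a few dozen processes, very little data copied out.
    routine = np.column_stack([rng.integers(40, 60, 500), rng.uniform(0, 5, 500)])

    # An exfiltration-like burst: similar process count, large outbound copy.
    burst = np.array([[50, 400.0]])

    clean_model = IsolationForest(random_state=0).fit(routine)
    print("Trained on clean data:", clean_model.predict(burst))    # [-1] => flagged as anomaly

    # Poisoning: attacker injects fake "routine" records that resemble the burst.
    poison = np.column_stack([rng.integers(40, 60, 200), rng.uniform(350, 450, 200)])
    poisoned_model = IsolationForest(random_state=0).fit(np.vstack([routine, poison]))
    print("Trained on poisoned data:", poisoned_model.predict(burst))  # likely [1] => treated as normal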

Thus, A.I.-centric approaches might be the future decades down the line, but for the time being, human-led pen testing remains the go-to for many prominent companies.

But that doesn’t discount what A.I. can do for pen testing today. While giving it full autonomy may not yet be viable, pen testers can still leverage A.I. as a tool to aid their practices.

As mentioned earlier, A.I.-supported information gathering can help ease the burden of having to sift through piles of information. That would leave human pen testers more time to focus on other aspects.

White hat bots can be employed to combat malicious bots, and automated sniffers can be used to detect fraudulent sites before they can do any significant damage.

Reports can be automatically generated and steps can be easily documented with the help of automated tools.

A.I.-powered tools are also used to “look at” rendered web pages to determine which ones most likely have actionable leads.

Penetration testers currently do this task manually, which can take up a lot of time since they have to check each screenshot one at a time.

With the latest A.I. technology and deep neural networks, however, this task – visually inspecting web pages – can now be automated.
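
A minimal sketch of that workflow, assuming Playwright for headless screenshots and a deliberately trivial, hypothetical score_screenshot() function standing in for the neural-network triage step:

    # Sketch: capture screenshots of candidate pages headlessly, then rank them
    # for manual review. score_screenshot() is a hypothetical stand-in for the
    # trained classifier described above.
    from playwright.sync_api import sync_playwright

    CANDIDATE_URLS = ["https://example.com/login", "https://example.com/admin"]  # placeholders

    def score_screenshot(path):
        # Placeholder heuristic; a real pipeline would feed the image to a model.
        return 0.0

    def capture_and_rank(urls):
        results = []
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            for i, url in enumerate(urls):
                page.goto(url)
                shot = f"page_{i}.png"
                page.screenshot(path=shot, full_page=True)
                results.append((score_screenshot(shot), url))
            browser.close()
        return sorted(results, reverse=True)  # review the highest-scoring pages first

    if __name__ == "__main__":
        for score, url in capture_and_rank(CANDIDATE_URLS):
            print(f"{score:.2f}  {url}")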

There has never been an easier time to get into penetration testing. If you’re interested in starting a career, you can read up on tech articles to help you get started on your journey.

Don’t feel too pressured that you need to catch up quickly with the latest trends. There’s a lot of ground to cover and you would need a lot of time to practice.

What’s next?

While it might be tempting to go for a DIY approach when it comes to protecting your site against cyber attacks — especially since there are a lot of available tools out there in the market — you might do yourself more harm than good.

For the most part, it is a good practice to work with reliable cyber security companies that do penetration testing since they have specialists who work on cybersecurity day in and day out.

With the help of experts running pen tests on your network, you stand a far better chance of closing its security gaps.


Chinese Spies Charged for Equifax Breach – Comment

Recently, we have been hearing more about the charging of Chinese spies over the 2017 Equifax breach, particularly in the US.

More information here: https://www.politico.com/news/2020/02/10/us-charges-chinese-spies-with-massive-equifax-hack-113129

As this news is surfacing, the US National Counterintelligence and Security Center has also published a report suggesting that “More foreign countries, militias and other groups are targeting US intelligence agencies with hacking … Not only that, but they’re increasingly targeting the private sector and government agencies that aren’t directly involved in national security”.

More information here: https://www.cnet.com/news/foreign-hackers-are-targeting-more-us-government-agencies-report-says/

In response to the Equifax Breach story and the recent report publication, Rosa Smothers, Senior VP of Cyber Operations at KnowBe4, has given the following comment:

“The DNI’s CI report indicates the private sector is increasingly a target of state-sponsored hacking efforts. The recent charges filed against four Chinese intelligence officers for hacking the credit reporting giant Equifax are a prime example of state-sponsored hacking to uncover sensitive information. The credit rating data provided could indicate a target’s financial vulnerability, which can then be used against them for China’s gain. This is the “spot” in our old Agency adage “spot, assess and recruit.”
