2021 To See More Successful Security Attacks

2021 will see more successful security attacks and compromises, with many high-profile organisations across multiple sectors falling on their own sword of insecurity, and paying the price of a reactive approach masquerading as a security posture. Sadly, 2021 will not be the year we see real steps taken toward cyber resilience – but it may be the year in which we finally see a more serious mindset toward addressing cyber insecurity with a proactive security posture.

Developed back in the 1830s and 1840s by Samuel Morse and other collaborating inventors, the telegraph revolutionized long-distance communication. It worked by transmitting electrical signals over a wire laid between stations, and it changed the nature of communications forever – indeed, as one authority commented:

“The new technologies will bring every individual into immediate and effortless communication with every other, and will practically obliterate political geography, and make free trade universal. Thanks to technological advance, there are no longer any foreigners, and we can look forward to the gradual adoption of a common language.”

Powerful words, linked to positive aspiration. However, stepping forward to the invention of the World Wide Web by Sir Tim Berners-Lee, we may not only track our all-encompassing technological progress, but equally note that the outcomes have not always been so positive, with the advent of cyber insecurity.

From the genesis of the Internet revolution there was always a very real concern that such a multi-faceted world of interconnectivity would dictate a very firm need for security in the uncontrolled space of the World Wide Web (WWW) – a need that went unmet. In fact, the earliest of these concerns centred on the Internet naming and numbering authority – or, to put it bluntly, the root authority. In that era Jon Postel was, like many are today, fighting to prove the dangers of lacklustre controls, and on 28 January 1998 he decided to take action: he sidestepped Network Solutions and demonstrated that he could transfer root authority whenever he chose to – this made those in control sit up and take note.

So just what has the history of the Internet got to do with the WWW today? The answer is that the simple concerns Jon Postel raised early on are now magnified to an unprecedented level by complex, interwoven connectivity, with potentially millions of domains across the world being maintained in a vulnerable and exposed state.

Along the path to exploiting what is referred to as the Information Superhighway, many global organisations and governments have embraced this easily empowering technology to their own singular advantage. However, as this eager embrace grew, it would seem that in the majority of cases those chasing the benefits of the Internet were unaware of the genie of insecurity gradually creeping from the lamp and entering their domains.

As of 2020 there are around 2 billion websites running on the net, so just imagine if 10% are insecure – that amounts to 200 million sites. However, based on what has been discovered in a number of sample surveys conducted with WHITETHORN SHIELD, that figure would seem to be very much on the low side – and with 25% being a more realistic percentage, the resulting number, around 500 million insecure sites, is anything but insignificant.
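As a back-of-the-envelope check of those figures (the 2 billion total and both percentages are the article's estimates, not fresh measurements), the arithmetic runs:

```python
# Rough scale of web insecurity, using the article's own estimates.
total_sites = 2_000_000_000  # ~2 billion websites as of 2020

conservative = total_sites * 10 // 100  # 10% insecure
realistic = total_sites * 25 // 100     # 25%, per the survey findings

print(f"{conservative:,}")  # 200,000,000
print(f"{realistic:,}")     # 500,000,000
```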

What really changed the world of cyber was the appreciation and practice of OSINT (Open Source Intelligence), which goes well beyond the IP address to discover titbits of unknown unknowns that can expose even the most secure of sites. Titbits gathered from multiple sources may then be leveraged to paint an aggregated big picture – a Cuckoo's Egg-style, off-line acquisition of dark intelligence metrics which may be used to expose and exploit further insecurities.
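The aggregation step is conceptually simple: individually harmless findings from independent sources, merged per target, say far more together than any one of them does alone. A toy sketch (the source names, fields and values below are invented purely for illustration):

```python
from collections import defaultdict

# Each OSINT source yields small "titbits" about a target domain.
# All names and data here are hypothetical examples.
source_a = [("example.com", "mx_host", "mail.legacy-isp.net")]
source_b = [("example.com", "employee_email", "jdoe@example.com")]
source_c = [("example.com", "exposed_service", "ftp/21")]

def aggregate(*sources):
    """Merge titbits from every source into one profile per target."""
    profile = defaultdict(dict)
    for source in sources:
        for target, key, value in source:
            profile[target][key] = value
    return dict(profile)

big_picture = aggregate(source_a, source_b, source_c)
# The combined profile exposes far more than any single titbit did.
print(big_picture["example.com"])
```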

In 2020, much work has been done by Cybersec Innovation Partner with their cutting-edge WHITETHORN SHIELD engine, and the findings gathered from both commercial and government sites prompt the question – how can this be? The findings not only suggest the potential for cyber insecurity to exist on multiple sites, but go well beyond that and prove that these discoveries are fact. The problem seems to be that nobody is willing to listen – that is, until such time as they are compromised!


UK Government Issues Cyber Security Professionalism Consultation Document

Dateline – 19th July 2018

As part of its National Cyber Security Strategy published in 2016, the Department for Digital, Culture, Media and Sport today published its consultation document on creating the environment to develop the cyber security profession in the UK. While recognising that the UK has some of the best cyber security professionals in the world, the UK Government also recognises that “the need to further develop the right skills, capabilities and professionalism to meet our national needs across the whole economy is increasingly important” and that the “consultation sets out bold and ambitious proposals to implement that. It includes a clear definition of objectives for the profession to achieve and proposes the creation of a new UK Cyber Security Council to coordinate delivery”. The consultation aims are to:

* Summarise the Government’s understanding of the challenges facing the development of the cyber security profession;
* Seek views on objectives for the profession to deliver by 2021 and beyond; and
* Seek views on the creation of a new UK Cyber Security Council to help deliver those objectives.

The consultation period ends on 31st August 2018 and therefore provides only a short period for responses to be submitted. Responses may be submitted via an online portal by both organisations and individuals.

The current UK cyber security organisations were quick to recognise that, if the future of the profession were planned and decided without them, the outcome might not be desirable to their various members – a single governing body would not be suitable for all the professional roles related to cyber security. A collaborative ‘Cyber Security Alliance’ was therefore formed, which includes many leading organisations such as the BCS, IET, IAAC and IISP, to name but a few of what has become a growing alliance. The ‘Cyber Security Alliance’ issued its own press release regarding the consultation process and its support for the National Cyber Security Strategy.

The aim of creating a Cyber Security Council is a bold move, founded on previous experience of such organisations as the General Medical Council, the Science Council and the Engineering Council. Some of these bodies were created by statute; however, this is not the plan for the Cyber Security Council. Yet in this single point lies the greatest danger to establishing such a council. The council has to be all things to all the current organisations and potential new alliance members, with no single organisation taking a lead role, for to do so would potentially collapse the Alliance and ultimately the very idea of a Council. For this to work, the Cyber Security Council will need to be established from the ground up, be non-profit for the benefit of its member organisations, and have a plan to become self-sufficient in the near future.

This is important for the future of the cyber security profession here in the UK, and we urge all to respond to the consultation to ensure that the widest possible participation is achieved.


A Gathering of Big Data & Smart Cities Experts in Singapore

SINGAPORE – Experts from the Big Data and Smart Cities industries recently gathered at Marriott Singapore Tang Plaza for BIGIT Technology Singapore 2016, featuring the 3rd Big Data & Smart Cities World Show conference. The two-day conference, sponsored by HPE (Platinum Sponsor) and Cloudera, MarkLogic and Talend (Gold Sponsors), saw a gathering of about 100 local and overseas attendees from Singapore, Malaysia, the USA, Spain, China, Korea, Saudi Arabia, Australia, India and the Philippines, all with the same objective and mission – to gain a comprehensive learning experience related to Smart Cities and to build an interactive network with global ICT leaders.

The 3rd Big Data & Smart Cities World Show, themed “Shaping the Future with Big Data and the Internet of Things towards Building a Smart City”, highlighted key areas in which Big Data and the Internet of Things (IoT) are changing businesses and people’s lives in line with the implementation of Smart Cities. With a total of 23 speakers from various fields, 14 case studies and 4 panel discussions shared during the conference, attendees had the chance to learn about and explore the latest technologies used to build smart cities through big data analytics and IoT. One attendee summed up the event with the feedback: “Thanks a lot for getting me an opportunity to witness the future. I thoroughly enjoyed the event and have gained lot of insights.”

Olygen, the event organiser, will also be kicking off its third event this year, BIGIT Technology Malaysia 2016, which will feature two concurrent conferences – the 4th Big Data World Show and the Data Security World Show – plus the BIGIT Exhibition, on 19th and 20th September at the KLCC Convention Centre, KL, Malaysia. Co-organised by the Multimedia Development Corporation (MDeC) – Malaysia’s government agency leading the national Big Data Analytics initiative – the event will be the anchor event of Big Data Week Asia 2016. To find out more about BIGIT Technology Malaysia, please visit: http://bigittechnology.com/malaysia2016.

For more information, please contact:

Chia Li, Teh
Tel: +603-2261 4227
Email: enquiry@bigittechnology.com


Online translation tools still have some way to go to match human translators, according to research.

When it comes to translating between languages, which is better – machine or human? That was the question Los Angeles-based firm Verbal Ink set out to answer when it embarked on a challenge to find out whether Google Translate could match the accuracy of a professional human translator. They pitted the search-engine super-giant against Adriana (a real-life translator), and the results were surprising. Here are 3 of the key findings:

1. Google struggles with certain concepts

When people use Google Translate, they expect The Big G to provide a fast and accurate translation. Verbal Ink found that, for the most part, Google did exactly that. However, the service struggled when it came to understanding certain concepts – particularly those specific to a particular language or dialect. This sometimes affected the overall meaning of a text.

So, what did the research conclude? Although a human translator can work out to be more expensive than a free service like Google Translate, this study suggests that the human has a better understanding of the language used in an everyday context.

2. Google is great for basic language translations

Verbal Ink found that the service is great at conveying the basics of a text, although Adriana scored points when it came to overall interpretation and accuracy. In the study, professional translator Gaby V. found that, compared like-for-like, Google Translate churned out sentences that were “disjointed” in one example, with fractured syntax and poor grammar. Adriana, however, had no difficulty with word choice or overall literal translation.

What have we learned? Google Translate is great for those who need a quick translation, but a professional translator might be more worthwhile if a complex document needs to be deciphered.

3. Google had difficulty with pronunciation

Verbal Ink’s research was based on two tests, the first of which compared translations of a marketing pitch written in Spanish. The text was run through Google Translate and also given to Adriana to work her magic. Google was able to convey the overall meaning of the text in English, although some clauses and sentences were difficult to read. The second test involved speech: both Google and Adriana were asked to transcribe a speech in Spanish before translating it into English. Here Google had difficulty with some pronunciations and repeated words.

Who won this round? Well, the human translator had a better grasp of pronunciation and clauses. To see the Infographic and check out the audio and text files used for this experiment click here.


Cellebrite UFED 4.0 Offers New Time-Saving Workflow Capabilities

Cellebrite, a leading developer and provider of mobile data forensic solutions, has released the latest version of its flagship mobile forensics solution – UFED 4.0. The new version offers features that improve investigative workflows and save time in both lab and field environments.

Inefficiencies such as extra layers of work process and lack of access to a full range of forensic tools often hinder efforts to obtain evidence and intelligence from mobile devices. UFED 4.0 aims to address some of these key challenges by enabling simple and effective language translation, faster and more powerful data carving, and integration of screen captures into forensic reports.

Key features of Cellebrite’s UFED 4.0 include:

  1. Efficient, Powerful Language Translation – An offline translation solution on UFED Physical/Logical Analyzer 4.0 that accurately translates both short and long texts. It reduces the challenges associated with foreign-language translation, including the need to rely on another person or to copy and paste into an online tool. The UFED translation engine currently supports 13 languages, including English; five of the 13 are offered free of charge with a UFED license.
  2. Updated Carving Process – Enhanced automated carving from Android devices’ unallocated space offers access to much more deleted data—in some cases double or triple the amount—than previously possible. While manual data carving remains an important part of the forensic validation process, UFED 4.0’s redesigned automatic data carving presents more precise deleted data by dramatically reducing false positives and duplicate results.
  3. HTML Report Viewing on UFED Touch – UFED Touch now offers the option to view an HTML report, including general device information and the logical extraction data, on the touch screen.
  4. Web History and Web Bookmark Capabilities – Newly included for logical extractions, and therefore viewable with UFED Touch, are web history and web bookmarks. From iOS devices, the new UFED 4.0 feature extends logical extraction and preview capabilities to app data.
  5. New UFED Camera Function – A new manual evidence collection feature, UFED Camera, allows users to collect evidence by taking pictures or videos of a device’s screen. The ability to take screenshots can be important in the field, helping to substantiate documentation of what law enforcement or investigators saw on the device during an initial scroll-through. In the lab, taking screenshots can help you to validate device extraction results – to show that the evidence in an extraction file existed on the evidence device.
  6. Enhanced Dashboard and User Experience – Users can perform multiple extractions on one device without having to return to the home screen. This means that they can obtain additional logical, physical, file system, or camera capture extractions as soon as one type of extraction is complete.
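The automated carving described in point 2 rests on a simple idea: scanning raw, unallocated space for the byte signatures of known file types. A minimal illustration of the generic technique (not Cellebrite's implementation) recovers JPEG images from a raw buffer by their start and end markers:

```python
JPEG_START = b"\xff\xd8\xff"  # JPEG SOI marker plus first segment byte
JPEG_END = b"\xff\xd9"        # JPEG EOI marker

def carve_jpegs(raw: bytes):
    """Yield byte spans in `raw` that look like complete JPEG files."""
    pos = 0
    while (start := raw.find(JPEG_START, pos)) != -1:
        end = raw.find(JPEG_END, start)
        if end == -1:
            break
        yield raw[start:end + 2]  # include the end marker
        pos = end + 2

# Simulated unallocated space: junk bytes with one "deleted" JPEG inside.
disk = b"\x00junk" + JPEG_START + b"imagedata" + JPEG_END + b"more junk"
found = list(carve_jpegs(disk))
print(len(found))  # 1
```

Real carvers validate far more than the markers (hence the false positives the release notes mention), but the signature scan is the core of the approach.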

For more details on these and other new and enhanced decoding and app support capabilities—including support for the new iPhone 6, iPhone 6 Plus and other Apple devices running iOS 8—take a look at the UFED 4.0 release notes at: http://releases.cellebrite.com/releases/ufed-release-notes-4-0.html.


Altium releases its TASKING ARM Cortex-M Embedded Development Tools for the Mac

Sydney, Australia – 2 October 2014 – Altium Limited, a global leader in Smart System Design Automation, 3D PCB design (Altium Designer) and embedded software development (TASKING) announces the release of its TASKING VX-toolset for ARM Cortex-M for Apple Mac computers running OS X.

Traditionally, embedded software development tools have been available exclusively for the Windows operating system, and Altium has a long history of providing its TASKING cross compilers and debuggers on Windows, including its TASKING VX-toolset for ARM Cortex-M. With ARM Cortex-M based microcontrollers becoming popular in broad-market consumer applications – especially wearable electronics and electronic systems that can be controlled from the iPhone – it is apparent that embedded software engineers want to use the Mac as their development platform.

To serve this development community, Altium has developed a native OS X port of release v5.1r1 of its TASKING VX-toolset for ARM Cortex-M, bringing its C compiler suite with Eclipse based IDE and debugger to Mac computers.

“Given the growing popularity of Mac OS X and the development of ARM Cortex-M based embedded applications connecting to applications on the iPhone and iPad platforms, we’re excited to offer our TASKING Embedded Development Tools to Mac users,” said Harm-Andre Verhoef, Product Manager TASKING. “Altium’s product offering will empower embedded ARM based developments and provide Mac users with the tools to bring their embedded applications to life.”

Previously, embedded-application developers who preferred Mac computers relied on virtual machines hosting the Windows operating system within OS X in order to run an embedded cross compiler. This led to an inefficient workflow and a variety of challenges, including problems connecting a debug probe reliably to the debugger running inside the virtual machine. The native OS X port of the TASKING compiler breaks down these barriers, allowing Mac users to develop embedded applications efficiently on their platform of choice. Cooperation with STMicroelectronics made it possible to offer in-circuit debug capabilities with the Eclipse-integrated TASKING debugger, using the USB port on the Mac to connect to the ST-LINK/V2 debug probe.

TASKING’s Viper compiler technology used in the ARM compiler ensures platform compatibility for developers on OS X and their colleagues using Windows, allowing for easy migration and collaboration. The Viper technology has an industry proven reputation of generating highly efficient and robust code for automotive applications like power train, body control, chassis control and safety critical applications, benefiting developments for broad market and industrial applications.

Key features of the TASKING VX-toolset for ARM Cortex-M for Mac OS X include:

  • Eclipse based IDE with integrated compiler and debugger
  • Highly efficient code generation, allowing for fast and compact applications
  • Support for a wide range of Cortex-M based microcontrollers from different vendors, such as STMicroelectronics, Freescale, Infineon Technologies, Silicon Labs, Spansion, Atmel and Texas Instruments
  • Integrated code analyzers for:
    • MISRA-C:1998, C:2004 and C:2012 guideline
    • CERT C secure coding standard
  • Fast and easy application development through TASKING’s award winning Software Platform technology, bringing:
    • an industry standard RTOS
    • a wide range of ready to use middleware components, such as support for CAN, USB, I2C, TCP/IP, HTTP(S), Bluetooth, file systems, graphical user interface, and touch panel control
  • Eclipse integrated Pin Mapper for assigning signals to microcontroller pins
  • In-circuit debug and programming support through ST-LINK/V2 probe (including on-board probes on starter-kits from STMicroelectronics)
  • Native support for 64-bit Intel-based Macs with Mac OS X

Developers using OS X who require certification of their embedded applications for functional safety standards such as IEC 61508 and ISO 26262 benefit from TASKING’s ISO 26262 Support Program for the new ARM toolset on OS X. A manufacturer of an electronic (sub)system is responsible for obtaining certification credit, and as part of the process has to assess the required level of confidence in the software tools used. Altium supports this through the availability of a Compiler Qualification Kit as well as optional Compiler Qualification Services.

The VX-toolset for ARM release v5.1 is available now on OS X Mavericks, and on OS X Yosemite once it is widely available. Pricing starts at USD 1,995 (€ 1,595) for the TASKING VX-toolset Standard Edition and USD 2,995 (€ 2,395) for the Premium Edition with the award winning Software Platform. Hardware debug support is available in the Professional and Premium Editions through the ST-LINK/V2 debug probe from STMicroelectronics.


Offender profiling is taking a different shape, as investigators grapple with increasingly ‘social’ criminal activity

Mobile forensics has changed the methodology when it comes to offender profiling. The frequent use of mobile devices has provided investigators with another source for profiling criminal suspects, as well as an insight into their habits and personalities.

This is not just because of the volume of user voice calls and SMS texts; the amount of rich data that can be extracted from Instant Messaging (IM) and social media applications gives forensic investigators the paint and brushes to develop a detailed picture of a suspect and a criminal case. A suspect’s social media personality can offer a more tailored overview of the character, his or her likes and dislikes and a reflection of ‘who’ they really are, beyond their alleged actions. A victim’s presence on social media can also be used to find a common link to possible suspects.

Recent research from Cellebrite found that 77 per cent of respondents believed that mobile apps were a critical data source in criminal investigations. While this clearly indicates that mobile apps offer a vital source of evidence, it’s not a suggestion that investigators should solely look at mobile-based apps when building the investigative picture – evidence should be extracted from all other items of phone-based data as well.

The widespread use of mobile apps makes them a critical data source for law enforcement, both in terms of evidence and investigative leads. The value to both prosecuting and defence counsels, in a court of law, makes the neglect of such data a potentially severe barrier to solving a case.

People now more frequently use mobile devices to access social media apps, rather than a traditional PC or laptop. Moreover, social media data extracted from a suspect’s mobile device provides additional characteristics, such as more accurate location-based data and time proximity to another event or situation. For example, by noting a connection to a specific Wi-Fi network, investigators can establish presence in a certain place at a certain time and correlate it with another action, possibly on a social network.
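That time-proximity correlation can be sketched in a few lines: given two timestamped event lists – here a hypothetical Wi-Fi connection log and a hypothetical social-media activity log, with all names and values invented for illustration – pair up events that fall within a chosen window:

```python
from datetime import datetime, timedelta

# Hypothetical extracted artefacts: (timestamp, description) pairs.
wifi_log = [
    (datetime(2016, 3, 1, 18, 2), "joined 'CafeCorner-Guest'"),
]
social_log = [
    (datetime(2016, 3, 1, 18, 9), "check-in posted at Cafe Corner"),
    (datetime(2016, 3, 2, 11, 0), "unrelated post"),
]

def correlate(log_a, log_b, window=timedelta(minutes=15)):
    """Pair events from two logs that occur within `window` of each other."""
    return [
        (a_desc, b_desc)
        for a_time, a_desc in log_a
        for b_time, b_desc in log_b
        if abs(a_time - b_time) <= window
    ]

# Only the cafe check-in falls within 15 minutes of the Wi-Fi join.
for pair in correlate(wifi_log, social_log):
    print(pair)
```

Real forensic tooling works on far richer and messier data, but the underlying join-on-time-window logic is the same.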

Criminals will use various communication channels in the course of their mobile activity. For example, a suspect could use an IM app to organise a meeting but use SMS to contact the victim. Investigators must operate a flexible forensic practice when sourcing evidential data from mobile devices: because criminals communicate through varied channels, a one-dimensional approach to forensic evidence gathering could lead to the omission of valuable data.

While data points such as SMS text messages and GPS locations may result in an immediate lead in a criminal case, the ‘online social identity’ of a suspect will allow investigators to delve into the personality of the suspect, which in turn could help build out the case.

This social data can be extracted through the social media apps that the suspect has downloaded on their device. Facebook posts, Tweets, ‘shares’ and ‘likes’ can all give critical information to investigators hoping to build the profile of a suspect.

A suspect’s social media identity goes beyond their ‘likes’ and ‘shares’ though; it can also include immediate locational data, such as a recent ‘check-in’ at a restaurant or a shop. Even if this locational data isn’t completely current, it will still help to paint the forensic picture of a suspect in terms of where they regularly go, who they meet with, and what they do when they’re there.

In court, social data retrieved from mobile apps is fast-becoming a major source of evidence in not only building up the profile of the suspect, but also in establishing or demolishing a witness’ credibility. While social or app-based data has become a crucial evidential component to an investigator’s case, it can also act as an important part of the prosecution or defence process in court.

Offender profiling is changing as people use more social applications to communicate with one another. This is providing investigators with another source of information to build up a complete profile of a suspected criminal, which in turn offers a more comprehensive picture of a suspect in a court of law.

The amount of data that is now being consumed and shared is opening up a number of different opportunities for mobile forensic investigators, who are in a constant battle to stay one step ahead of the increasingly connected criminal.

Yuval Ben-Moshe, senior forensics technical director at Cellebrite


Waking Shark II & Barclays

Last week, one agency was kind enough to print my controversial opinions on Waking Shark II, which were based on knowledge of standing deficiencies in the security cultures and infrastructures of banking. Many of these have been notified, but those in question have failed to act – or indeed to acknowledge them!

The recent Barclays breach is interesting, but I would add that it is only known because an insider blew the whistle; otherwise it would have remained unknown, and the public at large would have been none the wiser, and at risk. However, I am aware of many such breaches which did not go public, one of which was the loss of 37,000 Barclays client records, in the clear (not encrypted), around 2007. This was not reported, notwithstanding that the CISO and all executive IT directors were aware, including one main board member.

My main criticism and observation around Waking Shark II concerned its real value in serving security – if there were, and are, so many tolerated holes in place that support insecurity, then those in the security profession who support this situation become, by association, part of the problem – in the name of security associations and bodies!

My conclusion is that we are now at a well-trodden juncture of insecurity and public/business exposure which, in my opinion, demands much more than lip service to the known – it demands tangible action to secure the national and global economies.

We also need to be aware that the cultures which tolerated the unreported breach have moved on, in some cases to the world of outsourcing and service management (e.g. First Data), so sadly one may conclude that such survival attitudes have evolved into the unknown.

John Walker
SBLTD
www.sbltd.eu

Professor John Walker is a Visiting Professor at the School of Computing and Informatics, Nottingham Trent University (NTU), owner and CTO of SBLTD, a specialist Contracting/Consultancy in the arena of IT Security and Forensics, and Security Analytics, the Director of Cyber Research at the Ascot Barclay Group.


Finding The Middle Ground

Finding the middle ground: keeping the public safe, while respecting their privacy

When it comes to keeping the public safe, the Government and indeed the public need to find the middle ground to achieve the best results. The timeless debate over ‘privacy versus security’ has reared its head in recent weeks, with the Government planning on reintroducing the Data Communications Bill, in the wake of the horrific murder of Lee Rigby. This will require negotiation.

On the one hand, keeping the public safe is the most important role the police and the security services play; on the other, the privacy of members of the public is one of the paramount features of democratic life. The public must also realise that, for law enforcement to do its job, there must be a degree of willingness from the public when it comes to the extraction and analysis of mobile data.

It’s no surprise that the Government is looking to reintroduce this piece of legislation. The dreadful events that unfolded in Woolwich show that terrorists have no respect for human life and play by no rules. Investigators need the most cutting-edge technology, to prevent such acts of criminality from reoccurring. But these tools may come at a risk.

Police and security services rely on the most up-to-date technology to prevent crime from happening, as well as helping to unravel a criminal case. The Data Communications Bill will enable this power, with investigators having the ability to analyse digital communications between criminals and criminal groups, but knowing how to handle this data is critical.

A balance must be struck; a middle ground must be found. The police and the security services must keep the public safe, but they must also respect their privacy. They essentially have a dual-role, in that they are the guardians of the public’s safety as well as their privacy.

There are specific tools available to investigators that solve this problem. Tools such as the UFED Link Analysis, from Cellebrite, ensure that criminals don’t get away with communicating via mobile devices. Finding multiple connections between criminals and criminal gangs, allows investigators to build a far broader picture of a case than is possible using the more conventional methods.

The underlying weakness of digitally-reliant criminals is that they use their mobiles so frequently, so investigators can expect to find a wealth of data on mobile devices. The true advantage of tools such as UFED Link Analysis is that they allow investigators to establish connections. In cases such as the murder of Lee Rigby, searching for connections and patterns may well be crucial in discovering whether the attack was part of a wider plot and whether any groups were behind it.
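At its simplest, the connection-finding that link-analysis tools automate is a graph problem: given each suspect's extracted contact list, shared contacts are set intersections. A deliberately simplified sketch – the names and numbers are invented, and this is the generic idea, not the UFED Link Analysis algorithm:

```python
# Hypothetical contact lists extracted from three suspects' devices.
contacts = {
    "suspect_a": {"+44700001", "+44700002", "+44700003"},
    "suspect_b": {"+44700003", "+44700004"},
    "suspect_c": {"+44700005"},
}

def shared_contacts(extracted: dict):
    """Find contacts appearing on more than one suspect's device."""
    links = {}
    suspects = list(extracted)
    for i, first in enumerate(suspects):
        for second in suspects[i + 1:]:
            common = extracted[first] & extracted[second]
            if common:  # record only pairs with at least one shared contact
                links[(first, second)] = common
    return links

print(shared_contacts(contacts))
# {('suspect_a', 'suspect_b'): {'+44700003'}}
```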

But it’s important that those who operate forensic equipment know exactly how it works and what its purpose is. Just as it would be futile for someone untrained in biological forensics to extract and analyse DNA to solve a case, the same is true of mobile forensics. Due to the sensitive nature of data extraction, investigators must be fully trained to operate this equipment and use it to its optimum potential.

There will always be a ‘trade-off’ between privacy and security. Although the forensic equipment available to investigators is highly advanced, it requires the application of human rationale by those operating it.

Safeguarding the privacy and the security of the public is a dual-role that really goes hand-in-hand. Although negotiation will be an integral part of the legislation, when it’s debated in Parliament, it’s important to realise that, with the correct technology and know-how, investigators can hope to fulfil their dual-role to the highest level.
Yuval Ben-Moshe, senior forensics technical director at Cellebrite


Protect Your Business From State-Sponsored Attacks

It has taken some time, but we have finally succumbed to the delights of a certain kitchen utensil. Years of resisting George, John, and the seductive talents of Penelope had left me more determined than ever to resist at all costs. The result: a plethora of appliances – eight at last count – to produce the perfect cup of coffee at the right moment, cluttering kitchen surfaces and cupboards, and never quite getting it right. After all, each appliance needs and produces its own unique type of coffee. And it’s difficult, when you’re the only serious coffee drinker, to convince ‘management’ at home that such a thing as a CCM (Centralized Coffee Management) system is essential.

And the story is similar with encryption keys and certificates. Look around any mid-to-large-size organisation and you will find SSL, SSH and symmetric keys and digital certificates scattered around – and each type will also have several variants. Then there are all the different “utensils” that use the keys, from applications to a myriad of appliances, as well as a host of built-in ‘tools’ to manage each variety. The result is more management systems than the average household’s coffee machines.

Today SSL and SSH keys and certificates are littered across virtually all systems, applications and end-user computing devices. In most cases no one knows who created this ever-proliferating and expanding landscape of encryption “litter”, and since these keys and certificates are used to protect critical systems and sensitive data, ineffective and siloed management leaves organisations increasingly susceptible to failed audits, security risks, unexpected systems outages, and compromises of systems, applications and, most importantly, critical data. Each of these comes with its own costly financial and reputational consequences.

The Dark Side

And just as I’m told that there’s a dark side to my caffeine addiction, there is a definite dark side to the unmanaged and unquantified encryption keys and certificates that we’ve become so dependent on – and which now act as the infrastructure backbone of all online trust and security. Today, as never before, everyone from governments to private individuals is under attack. The use of malware for criminal, ideological and political aims is growing at an alarming rate. Stuxnet opened Pandora’s box when it became common knowledge that valid, stolen certificates could be used to authenticate malware and allow it to remain hidden and undetected. Since then there has been an explosion of digitally signed malware.

Can we defend ourselves against state-sponsored attacks?

Today we are faced with cyber-attacks on a scale never imagined, and the question that has to be asked is whether or not there is anything we can do to protect our infrastructure, enterprises and ourselves.

But I believe the reality is that we are responsible in large part for the ease with which cyber-terrorists, regardless of their ideology or motivation, are attacking us. In effect, we are supplying the weapons that are being used against us. The collective failure of enterprises to protect keys and certificates is resulting in these very keys and certificates being used against us.

The Flame attack, for example, which masqueraded as a Windows update, succeeded because of Microsoft’s continued use of the MD5 algorithm years after Microsoft itself had identified that it was compromised. A surprisingly small amount of money was needed to create a forged certificate. Shamoon, which attacked Saudi Aramco and RasGas, leveraged a certificate stolen from a company called Eldos and issued by GlobalSign. The fact that it was issued by GlobalSign is not the problem; the problem is that the key and certificate were reportedly stolen from Eldos. And it goes on and on. Cyber-terrorists are literally helping themselves to keys and certificates from global businesses because they know that no one manages them. When organisations don’t ensure proper controls over trust, business stops. End of story.
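The kind of hygiene check that would have flagged lingering MD5-signed certificates long before Flame exploited them can be sketched very simply. The record format and algorithm strings below are illustrative assumptions, standing in for whatever a real certificate inventory would produce:

```python
# Digest algorithms widely considered broken or deprecated for signatures.
WEAK_ALGORITHMS = {"md2", "md5", "sha1"}

def flag_weak_signatures(certs):
    """Given (name, signature-algorithm) pairs, return names signed
    with a weak digest, e.g. "md5WithRSAEncryption" -> flagged."""
    flagged = []
    for name, sig_alg in certs:
        digest = sig_alg.lower().split("with")[0]  # "md5WithRSAEncryption" -> "md5"
        if digest in WEAK_ALGORITHMS:
            flagged.append(name)
    return flagged
```

The check itself is trivial; the hard part, as the essay argues, is having an inventory complete enough to run it against.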

So the first step in defending ourselves is to protect our key and certificate arsenal. Effective management, so that access to any key or certificate is controlled, is the first step in ensuring that you don’t become the next unsuspecting collaborator. And that management has to be unbiased, universal and independent if it’s going to work – not caring who issued the encryption or in which departmental silo it resides (one cannot be both the issuer and the manager of encryption simultaneously – there are too many inherent conflicts of interest). No one wants their name associated with a cyber-attack that at the very least results in significant financial loss for the victim, but even more seriously results in the loss of life.

Secondly, enterprises are not responding to the attacks. There is massive investment in perimeter security, but when we are told repeatedly that the threat comes as much from within as from outside, we need to act.

Can we still protect critical infrastructure from attack in the digital age?

If malware is the cyber-terrorist’s weapon of the 21st century, then organisations need to reduce the risk as much as possible. At last count there were in excess of 1,500 trusted third parties issuing certificates globally. Many of these are present in every system in the infrastructure, and the result is that if a system trusts the issuer, it will by default trust the “messenger” – in this case, malware.

So, like the firewall of the 20th century, which you used to reduce the access points through your perimeter, effective management of trusted issuers and instruments similarly reduces your risk of malware infection. If a system doesn’t know the issuer, it won’t trust the messenger. Although you can never completely remove the risk – you do have to trust some people – you will significantly reduce the number of possible attacks. But this requires the determination of an organisation to take steps to protect itself. The management of trust stores in every system becomes an absolute necessity in the fight against cyber-terrorism, regardless of which group, enterprise or nation state is behind it.
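The pruning described above can be modelled minimally. Here a trust store is assumed to be a simple mapping from issuer name to certificate blob – an illustrative simplification of real trust-store formats, not any vendor’s actual layout:

```python
def prune_trust_store(store, allowed_issuers):
    """Keep only the issuers the organisation has explicitly decided to trust.

    store           -- mapping of issuer name -> certificate blob (assumed format)
    allowed_issuers -- the organisation's allow-list of issuer names
    """
    return {issuer: pem for issuer, pem in store.items()
            if issuer in allowed_issuers}
```

The design choice mirrors a firewall’s default-deny stance: anything not on the allow-list is dropped, so a signed “messenger” from an unknown issuer is simply never trusted.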

According to US Defense Secretary Leon Panetta, the Pentagon and American intelligence agencies are seeing an increase in cyber threats that could have devastating consequences if they aren’t stopped. “A cyber-attack perpetrated by nation states or violent extremist groups could be as destructive as the terrorist attack of 9/11. Such a destructive cyber terrorist attack could paralyse the nation.”

The question is: when will we start to see individuals and organisations being held culpable for these attacks? In the cyber-terrorism war, selling valid SSL certificates – whether stolen, lost or sold – to “terrorists” is big business, and it is likely to play a significant part in a major incident. Ignorance will not be a defence!

So my advice is, as George Orwell wrote in “1984” –  “If you want to keep a secret, you must also hide it from yourself.”

Calum MacLeod has over 30 years of expertise in secure networking technologies, and is responsible for developing Venafi’s business across Europe as well as lecturing and writing on IT security.

www.venafi.com
