Why Dell Zero Trust? Disappearing Perimeters

Just after the New Year, I caught up with a work colleague of mine and started to chat about all the good work we are doing in Dell with regard to Zero Trust and the broader Zero Trust Architecture (ZTA) space. Clearly he was very interested (of course!). We talked about the Dell collaboration with MISI (Maryland Innovation Security Institute) and CyberPoint International at DreamPort, the U.S. Cyber Command's premier cybersecurity innovation facility. There, Dell will power the ZT Center of Excellence to provide organisations with a secure data center to validate Zero Trust use cases in the flesh.

Of course, me being me, I was on a roll. I started to dig into how this will be based on the seven pillars of the Department of Defense (DoD) Zero Trust Reference Architecture. Control Plane here, Macro-segmentation there, Policy Enforcement Points everywhere!

Pause… I was now the subject of a very blank stare, reminiscent of my days as a four-year-old. I knew the question was coming.

“But Why Zero Trust?”

This forced a pause. In my defense, I did stop myself leaning into the casual response centered on the standard logic: cyber attacks are on the increase (ransomware, malware, DoS, DDoS, phishing, mobile malware, credential theft, etc.), ergo we must mandate Zero Trust. Clearly this didn't answer the question: why? Why are we facing more cyber-related incidents, and why shouldn't I use existing frameworks such as 'Defense in Depth'? We have used them for decades; they were great then, why not now? What has changed?

Of course a hint lies in the title of this post, and in particular the very first line of the DoD Reference Architecture guide.

“Zero Trust is the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. Zero Trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the Internet) or based on asset ownership (enterprise or personally owned)”

So the goal is to move from 'static, network-based perimeters' to a 'focus on users, assets, and resources'. However, as you may have guessed, the next question is…

“But Why?”

I think we can formulate a relevant coherent answer to this question.

The Problem of De-Perimeterisation

Traditional approaches to network and infrastructure security are predicated on the idea that I can protect the perimeter: stop the bad stuff at the gate and only let the good stuff in, leveraging firewalls, ACLs, IPS/IDS systems and other platforms. 'Defense in Depth' has become a popular framework that enhances this network perimeter approach by adding additional layers on the 'inside': another firewall here, another ACL there, just in case something gets through. Like a series of increasingly granular sieves, eventually we will catch the bad stuff, even if it has breached the perimeter.

This approach has of course remained largely the same since the 1990s, for as long as the network firewall has existed (in fact longer, but I choose not to remember that far back!).

The ‘noughties’ were characterised by relative simplicity:

  • Applications all lived in the 'data center' on physical hardware, with no broad adoption of virtualisation just yet. What was born in the DC stayed in the DC, for the most part. Monolithic workloads.
  • Hub-and-spoke MPLS-based WAN and simple VPN-based remote access. Generally no split tunnels allowed; in other words, when 'dialled in' remotely, you reached the internet via the corporate DC.
  • Fledgling Internet services, pre SaaS.
  • We owned pretty much all our own infrastructure.

In this scenario, the network perimeter/border was very well defined and understood. Placing firewalls and defining policy for optimal effectiveness was a straightforward process. Ports were opened towards the internet, but the process was relatively static and manageable.

Interestingly, even back then we can trace the beginnings of what we now know as the Zero Trust movement. In 2004, the Jericho Forum, which later merged into the Open Group Security Forum, remarked rather prophetically:

“The traditional electronic boundary between a corporate (or ‘private’) network and the Internet is breaking down in the trend which we have called de-perimeterisation.”

And this was almost 20 years ago, when things were….. well, simple!

Rolling on to the next decade.

Things were beginning to change; I had to put a little thought into where I drew my rather crude red line representing the network perimeter. We now had:

  • The rise of x86 and other types of server virtualisation. All very positive, but lending itself to a proliferation of virtual machines within the DC, otherwise known as VM sprawl. Software-defined networking and security 'Defense in Depth' solutions such as VMware NSX soon followed to manage these new 'east-west' flows in the data center, inserting software-based firewalls and representing the birth of micro-segmentation as we know it.
  • What were 'fledgling' web-based services had now firmly become 'business critical' SaaS-based services. How we connected to these services became a little more complicated, indeed obfuscated. More and more, these were machine-to-machine flows versus machine-to-human flows: for instance, my internal app tier pulling from an external web-based SaaS database server. The application no longer lived exclusively in the DC, nor did we have exclusive ownership rights.
  • More and more, the remote workforce was using the corporate DC as a trombone transit to get to business SaaS resources on the web. This started to put pressure on the mandate of 'thou must not split-tunnel', simply because performance was unpredictable at best due to latency and jitter. (Unfortunately we still haven't figured out a way to speed up the speed of light!)

Ultimately, in order for the ‘Defend the Perimeter’ approach to be successful we need to:

  1. 'Own our own infrastructure and domain.' Clearly we don't own or control the web-based SaaS services outlined above.
  2. 'Understand clearly our borders, perimeter and topology.' Our clarity is undermined here due to the 'softening' of the split tunnel at the edge and our lack of true understanding of what is happening on the internet, where our web-based services reside. Even within our DC, the topology is becoming much more complicated and the data flows much more difficult to manage and understand: the proliferation of east-west flows, VM sprawl, shadow IT and development, etc. If an attack breached our defenses, it is difficult to identify just how deep it may have gotten or where the malware is hiding.
  3. 'Implement and enforce our security policy within our domain and at our perimeter.' Really this is dependent on 1 and 2, so clearly it is now much more of a challenge.

The industry began to recognise the failings of the traditional approach; clearly something different was needed. Zero Trust Architectures (ZTA) began to mature and emerge, both in theory and in practice.

  1. Forrester Research:
    • 2010: John Kindervag coined the phrase 'Zero Trust' to describe a security model in which you do not implicitly trust anything outside or inside your perimeter; instead, you must verify anything and everything before connecting it to the network or granting access to your systems.
    • 2018: Dr. Chase Cunningham led the evolution into the Zero Trust eXtended (ZTX) framework. 'Never trust, always verify.'
  2. Google BeyondCorp:
    • 2014: BeyondCorp is Google's implementation of the Zero Trust model. By shifting access controls from the network perimeter to individual users and devices, BeyondCorp enables secure work from any location without the need for a traditional VPN.
  3. Gartner:
    • 2017 onwards: Gartner introduced CARTA (Continuous Adaptive Risk and Trust Assessment) and later popularised the term ZTNA (Zero Trust Network Access), helping bring Zero Trust principles into mainstream enterprise planning.

And so to the current decade:

Because the perimeter is everywhere, the perimeter is in essence dead…….

I refrained from the red marker on this occasion, because I would be drawing in perpetuity. The level of transformation that has taken place over the last 4-5 years in particular has been truly remarkable, and it has placed an immense and indelible strain on IT security frameworks and the network perimeter as we know them. It is no longer necessary to regurgitate the almost daily stream of negative news pertaining to cyber-related attacks on government, enterprise and small business globally in order to copper-fasten the argument that we need to accelerate the adoption of a new, fit-for-purpose approach.

In today’s landscape:

  • Microservice-based applications now sit everywhere in the enterprise, and modern application development techniques leveraging CI/CD pipelines are becoming increasingly distributed. Pipelines may span multiple on-premises and cloud locations and change dynamically based on resourcing and budgetary needs.
  • Emerging enterprises may not need a traditional DC as we know it, or any DC at all; they may leverage the public cloud, edge, colo and home office exclusively.
  • The rise of the Edge and enabling technologies such as 5G and Private Wireless has opened up new use cases and product offerings where applications must reside close to the end-user due to latency sensitivity.
  • The continued and increasing adoption of 'multi-cloud' architectures by existing, established enterprises.
  • The emergence of multi-cloud data mobility. User and application data is moving more and more across physical and administrative boundaries based on business and operational needs.
  • The exponential growth of remote work, and the 'internet first' nature of that work. More often than not, remote users are leveraging internet-based SaaS applications and not touching traditional data center applications at all. Increasingly, users demand a VPN-less experience.
  • Ownership is shifting rapidly from Capex to dynamic, 'pay as you use'/on-demand, Opex-based, on-premises, cloud-like consumption models, such as Dell APEX.

So, if you recall, the three key controls required to implement a ‘Perimeter’ based security model include:

  1. Do we own the infrastructure? Rarely at best; more than likely some of it, and increasingly none at all. Indeed, many customers want to shift the burden of ownership completely to the service provider (SP).
  2. Do we understand clearly our border, perimeter and topology? No. In a multi-cloud world with dynamic modern application flows our perimeter is constantly changing and in flux, and in some cases disappearing.
  3. Can we implement security policy at the perimeter? Even if we had administrative ownership, this task would be massively onerous, given that our perimeter is now dynamic at best and possibly non-existent.

So where does that leave us? Is it a case of 'out with the old, in with the new'? Absolutely not! More and more security tooling and systems will emerge to support the new Zero Trust architectures, but in reality we will use much of what already exists. Will we still leverage existing tools in our armoury such as firewalls, AV, IPS/IDS and micro-segmentation? Of course we will. Remember, ZTA is a framework, not a single product. There is no single magic bullet. It will be a structured coming together of people, process and technology; no one product or piece of software will implement Zero Trust on its own.

What we will see emerge, though, is a concentration of systems, processes and tooling that allows us to deliver on the second half of the first statement in the DoD Reference Architecture guide.

“Zero Trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the Internet) or based on asset ownership (enterprise or personally owned)”

If we can’t ‘grant trust’ based on where something resides or who owns it, then how can we ‘grant trust’ and to what level?

The answer to that lies in a systematic and robust ability to continuously authenticate and conditionally authorize every asset on the network, and to allocate access on the principle of 'least privilege'. To that end, Identity and Access Management (IAM) systems and processes will step forward, front and center, in a Zero Trust world (and into the next post in this Zero Trust series…).
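
To make that a little more concrete, here is a minimal, purely illustrative sketch (in Python) of what a Zero Trust style authorisation decision looks like. The user ID, posture signals and entitlements table are all hypothetical, and this is not the DoD's or any vendor's implementation; the point is simply that the decision hinges on identity, device posture and explicit least-privilege entitlements, and never on network location.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_passed: bool          # identity signal: did the user pass MFA?
    device_compliant: bool    # posture signal: managed, patched, encrypted device?
    resource: str
    action: str

# Hypothetical least-privilege entitlements: user -> resource -> allowed actions
ENTITLEMENTS = {
    "jdoe": {"payroll-db": {"read"}},
}

def authorize(req: AccessRequest) -> bool:
    """Grant access only if identity and device posture check out AND the
    requested action is explicitly entitled. Network location is deliberately
    absent from the decision."""
    if not (req.mfa_passed and req.device_compliant):
        return False
    allowed = ENTITLEMENTS.get(req.user_id, {}).get(req.resource, set())
    return req.action in allowed

print(authorize(AccessRequest("jdoe", True, True, "payroll-db", "read")))    # True
print(authorize(AccessRequest("jdoe", True, True, "payroll-db", "write")))   # False: not entitled
print(authorize(AccessRequest("jdoe", True, False, "payroll-db", "read")))   # False: bad posture
```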

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL

Software Bill of Materials (SBOM)

A guest contribution by Shakita DennisChain and Marshal Savage. (They did all the heavy lifting with the great video content below).

This post appeared originally on one of the other blogs I contribute to: Engineeringtechnologists.com. I strongly recommend you head over there for some great content by my fellow technologist colleagues.

What is an SBOM (Software Bill of Materials)?

Executive Order (EO) 14028, Improving the Nation’s Cybersecurity, references heavily the NIST Secure Software Development Framework (SSDF) – SP 800-218. Bottom line, this is a mechanism for helping organisations develop and deliver secure software throughout its lifecycle. Following on, last September, White House Memorandum M-22-18 officially required federal agencies to comply with the NIST guidance and any subsequent updates thereafter. A key component of this is the requirement, as a supplier, to 'self-attest' that software is built based on secure software development methodologies and to provide an SBOM (Software Bill of Materials).

In truth, this is common sense and critical for all organisations, federal or otherwise. Bottom line, we all need to know what is in our applications and the software that we use. I think we all want to avoid another Log4j scramble.

Modern cloud-native and embedded firmware-based systems are architected using a compendium of open source, third-party commercial and in-house developed software and processes. A Software Bill of Materials (SBOM) shines a light on just that: what ingredients, what versions, what underlying packages and software are going into our applications?
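
As a rough illustration of what that means in practice, the Python sketch below walks a tiny, CycloneDX-style SBOM fragment and flags any component that appears on a watch list. Both the SBOM fragment and the watch list are made up for the example; the point is that once you have a machine-readable bill of materials, answering "are we shipping the vulnerable Log4j?" becomes a simple lookup rather than a scramble.

```python
import json

# A tiny, illustrative CycloneDX-style SBOM fragment (not a complete document).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "openssl",    "version": "3.0.8"}
  ]
}
"""

# Hypothetical watch list of component versions we care about.
WATCH_LIST = {"log4j-core": {"2.14.1", "2.15.0", "2.16.0"}}

def flag_components(sbom: dict) -> list:
    """Return SBOM components that match the watch list."""
    hits = []
    for component in sbom.get("components", []):
        name, version = component.get("name"), component.get("version")
        if version in WATCH_LIST.get(name, set()):
            hits.append(f"{name} {version}")
    return hits

print(flag_components(json.loads(sbom_json)))  # ['log4j-core 2.14.1']
```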

In this episode, join Dell’s Shakita DennisChain and Marshal Savage, as they discuss the importance of SBOM and how to develop frameworks and procedures to deliver SBOM in practice. Well worth the listen!

#IWORK4DELL

As ever all the opinions expressed in the above post are my own, and do not necessarily represent the views of Dell Technologies.

PowerProtect Cyber Recovery Release 19.12 + CyberSense 8.0 : AWS GovCloud + CyberSense on AWS

Last week Dell released the much-anticipated version 19.12 of the Cyber Recovery solution. Obviously, one of the clear highlights was the ability to deploy the Cyber Recovery solution on Google Cloud Platform. The solution leverages a PowerProtect DD Virtual Edition (DDVE) storage appliance in a GCP VPC to store replicated data from a production DD system in a secure vault environment. This data can then be recovered to the production DD system. My colleague Ben Mayer gives an excellent high-level overview in his blog, which can be found at https://www.cloudsquared.blog/2022/11/powerprotect-cyber-recovery-for-google.html.

This of course rounds out support for vault capability across all three major public clouds (AWS, Azure and now GCP). This is a really exciting development, and I look forward to digging deeper into what this means technically over the coming weeks and months as part of my ongoing Dell Cyber Recovery series.

But there are many other highlights to the release, as follows. (Clearly my list isn't exhaustive; I'm picking out the bits that have captured my attention. As ever, please refer to the official Dell release note documentation for all the underlying detail.)

  • Support for new Software Releases
    • DD OS 7.10
    • PowerProtect Data Manager 19.12
    • NetWorker 19.7
    • Avamar 19.7
    • CyberSense 7.12 and CyberSense 8.0

Cyber Recovery Solution support in AWS GovCloud (US)

For those not familiar, AWS GovCloud gives government customers and their partners the flexibility to architect secure cloud solutions that comply with the FedRAMP High baseline; the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy; U.S. International Traffic in Arms Regulations (ITAR); Export Administration Regulations (EAR); the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5; FIPS 140-2; IRS-1075; and other compliance regimes.

AWS GovCloud Regions are operated by employees who are U.S. citizens on U.S. soil. AWS GovCloud is only accessible to U.S. entities and root account holders who pass a screening process.

https://aws.amazon.com/govcloud-us/faqs/

A little under the radar, but for obvious reasons, likely to be a very important feature enhancement for customers.

CyberSense on AWS & Platform Extension

Beginning with CR version 19.12 (this release), the CR vault on AWS supports the CyberSense software. This is a very significant feature addition, as it adds the ability to analyse file and data integrity after data is replicated to the Cyber Recovery vault and a retention lock is applied.

CyberSense automatically scans the backup data, creating point-in-time observations of files and data. These observations enable CyberSense to track how files change over time and uncover even the most advanced type of attack. Analytics are generated that detect encryption/corruption of files or database pages, known malware extensions, mass deletions/creations of files, and more.

Machine learning algorithms then use analytics to make a deterministic decision on data corruption that is indicative of a cyberattack. The machine learning algorithms have been trained with the latest trojans and ransomware to detect suspicious behavior. If an attack occurs, a critical alert is displayed in the Cyber Recovery dashboard. CyberSense post-attack forensic reports are available to diagnose and recover from the ransomware attack quickly.

In truth, this is a key capability of the CyberSense solution. Even with the best of intentions, once we initiate the MTree replication between DD appliances and copy data from the production side to the vault, we can never be 100% sure that the replicated data is clean. The ML/AI capability of CyberSense helps mitigate this risk.
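
CyberSense's actual analytics are of course proprietary and far more sophisticated, but to give a feel for one class of signal this kind of tooling can draw on, the toy Python sketch below compares the Shannon entropy of a 'normal' file against pseudo-random data. Mass encryption by ransomware tends to push file entropy towards the maximum, so a sudden jump between point-in-time copies is one simplistic indicator of trouble. This is purely an illustration of the concept, not how the product works internally.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (0 to 8). Encrypted or compressed data tends
    to sit close to 8; typical documents sit much lower."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

# Compare a plain-text sample with a pseudo-random ('encrypted-looking') one.
plain = b"quarterly payroll report " * 400
random_like = os.urandom(len(plain))

print(f"plain text entropy : {shannon_entropy(plain):.2f} bits/byte")
print(f"random-like entropy: {shannon_entropy(random_like):.2f} bits/byte")

# A large entropy jump between two point-in-time copies of the same file is
# one (very simplistic) indicator that the file may have been encrypted.
```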

Finally, and with more to follow on this topic in future posts: the expansion of the platform footprint of the CyberSense 8.0 software to support an SLES 12 SP5-based virtual appliance, ideal for small and medium-sized deployments and environments.

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL

Introducing Dell PowerProtect Cyber Recovery Part 3 – Setting up the Production Side – Protecting VM Workload with Dell PowerProtect Data Manager

As discussed in the previous post, the Dell Cyber Recovery solution is underpinned by Data Domain. I need a Data Domain appliance on the production side that will replicate to a Data Domain appliance in the secure vault using MTree replication. The question is: how do I get data into the production-side Data Domain appliance in the first place? We could write data manually perhaps, but in the real world Data Domain will likely be paired with some type of backup system (Dell Avamar, Dell NetWorker or Dell PowerProtect Data Manager).

This post will cover the basic stand-up of Dell PowerProtect Data Manager (PPDM). This really is a powerful product that is multi-cloud optimised, allowing you to discover, manage, protect and restore workloads from multiple sources (Windows, Linux, K8s, VMware, Hyper-V, Oracle, SAP HANA, etc.). Clearly, I won't do it complete justice in this post, so the somewhat humble goal, as outlined above, is to populate the MTree with backup data. The fact that I got it working is of course testament to how easy a product it is to deploy and use.

Step 1: Deploy PowerProtect Data Manager (PPDM)

Referring back to our previous diagram from the last post, we will concentrate on the 'Green Box'.

Again, the assumption here is that we know how to deploy an OVA/OVF. The PowerProtect Data Manager Deployment Guide provides all the low-level detail you need. The only real step to watch is Step 5, where the OVF template asks where you wish to deploy the software; this is an 'on-premise/hybrid' configuration.

Next, power up the virtual machine in the vCenter console. Once booted, browse using HTTPS to the FQDN of the appliance. (We already set up NTP and DNS for everything in the last post.) You will be presented with the following initial setup workflow.

Run through the install workflow.

  • Select the 90 Day Eval option.
  • Add your password for the admin user (again, I used the uber-secure Password123!).
  • Set up your timezone and NTP servers. I used UTC+. Again, NTP is your friend, so it should be set up properly.
  • Untoggle the Mail Server option, as we won't be sending alerts, etc.

The setup will take a little time, but you can watch the progress here. Exciting!

All going well, the install will complete successfully and your browser will redirect to the following screen. Skip the workflows and go directly to 'Launch'. Log on as 'admin' with the password you created during the setup process.

Step 2: Configure PPDM Storage

Of course, you may rightly ask why I didn't do this in the wizard. Well, my excuse is that it helps to understand how the GUI is laid out from the main menu. In this step, we are presenting PPDM with its target storage device, the DDVE we configured in the last blog post. This is really very straightforward.

From the main menu, navigate to Infrastructure > Storage > Add

Follow the 'Add Storage' dialogue and select PowerProtect DD System as the storage type. Don't worry about HA for now. Make sure you add the device using the FQDN and use sysadmin as the DD user. Accept the rest of the defaults and verify the certificate.

Verify that you can see the newly presented storage. You may need to refresh the browser, or navigate to another area of the GUI and back to Storage, in order to see our DDVE storage resource populate.

Step 3: Add vCenter Assets

The next step in the process is to create the 'link' between PPDM and vSphere vCenter. This is how PPDM discovers 'assets' that are eligible for protection and recovery. Firstly, we will add details to PPDM regarding the vCenter server that hosts it.

Now add the same vCenter resource so that we can automatically discover resources. When stepping through the workflow, make sure you check the vSphere Plugin tickbox. Of course, you are given the option of IP versus FQDN; be safe, not sorry, and pick FQDN.

Once vCenter is added, it will automatically discover the 'assets' under its control, in other words the vCenter inventory. In the next section we will create and run a demo protection policy. Truthfully, this will look better via a live video; as promised, at the end of the series we will do an end-to-end video blog… I think they are called vlogs?

Step 4: Create Protection Policy

We now have a very basic PPDM system set up, with access to Data Domain storage as a backup target and a discovered vCenter inventory of 'assets' to which it may apply protection and recovery policies.

Again, we will step through this in the upcoming video. In the interim, flick through the gallery attached; it should be fairly intuitive.

Step 5: Run Protection Policy

We have two options: a) direct from the vSphere console and the PPDM plugin, or b) manually via the PPDM interface. I'm going to take the latter approach for now. Of course, we have scheduled the policy to run every day at a certain time, but in this instance we will initiate the process manually.

It really couldn't be simpler. Highlight the Protection Policy and click 'Protect Now'.

Select ‘All Assets’ – Remember our Policy is only backing up one VM.

In this instance we will select the 'Full' backup. You also have the option of a 'Synthetic Full' backup, which backs up only the deltas from the original.

Click ‘Protect Now’

The 'Protection Job' kicks off; we can monitor its progress in the jobs panel in the GUI.

Once complete, we can review it in the same window under the successful jobs menu. Note also that not only has our manual job completed successfully, but so have the automated 'Synthetic Full' policy jobs, configured to kick off each evening at 8pm.

Review

Ultimately the purpose of this exercise was to populate the production side DDVE appliance with data. Once we have data on the production side, we can then set up the MTREE replication into the Cyber Recovery Vault (and automate/control/secure this process via the CR appliance). Logging back into the Data Domain System Manager, we can see the result:

We have data written to Data Domain in the last 24 hours…

Digging a little deeper into the file system itself, we can see the activity and the Data Domain compression engine in action. Finally, we see this presented in the MTree format. This is where we will create the link to the vault Data Domain Virtual Edition in the next post.

Hopefully, you found this post useful. We are now set to start standing up the CR Vault in the next post. As ever any questions/comments, please reach out directly or leave a comment below and I will respond.

As ever for best practice always refer to Dell official documentation

Cheers

Martin

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL

Introducing Dell PowerProtect Cyber Recovery Part 2 – Setting up the Production Side (DDVE)

Last week we overviewed the big picture (diagram below) and very briefly discussed the end-to-end flow (Steps 1 through 6). In this post we will start to break this up into consumable chunks and digest it in a little more detail. Whether you are deploying the Cyber Recovery solution in one fell swoop, or you already have a data protection architecture leveraging Dell PowerProtect Data Manager with Data Domain and are investigating attaching the vault as a Day 2 activity, hopefully you will find this post of interest.

Production Side

This post will concentrate on part of the 'Big Curvy Green Box', or the left side of the diagram. I am leveraging a VxRail with an embedded vCenter for a couple of reasons: a) I'm lucky to have one in my lab, and b) it's incredibly easy. This has been pre-deployed in my environment. Obviously, if you are following this blog, you can use any host/vCenter combination of your choosing.

This post will focus on how we stand up the Data Domain Virtual Edition appliance, with a view to leveraging it for the Cyber Recovery use case only. Health warning: this is for demo purposes only, and we will absolutely not be making any claims with regard to best practices or the suitability of this setup for other use cases. In the spirit of blogging, the goal here is to build our understanding of the concepts.

We will follow up next week with an overview of the basic setup of PPDM on the production side and how it integrates with vSphere vCenter and the PowerProtect DDVE appliance.

Sample Bill of Materials Production Side

I've been careful here to call out the word sample. This is what I have used for this blog post; of course, in production we need to revert to the official interoperability documentation. Just stating the obvious… :) That being said, the following is what I have used in my setup.

  • VMware ESXi Version 7.0.3 (Build 19898904)
  • VMware vCenter Server 7.0.3 00500
  • Dell PowerProtect Data Manager 19.11.0-14
  • Dell PowerProtect DD VE 7.9.0.10-1016575

Prerequisites

As per the diagram, I'm running this on a 4-node VxRail cluster, so my TOR switches are set up properly, everything is routing nicely, etc. The VxRail setup also configures my cluster fully, with a vSAN datastore deployed, vMotion, DRS, HA, and a production VDS.

This won't come as a surprise, but the following are critical:

  • Synchronised Time everywhere leveraging an NTP server
  • DNS Forward and Reverse lookup everywhere.

In some instances during installation you may be given the option to deploy devices, objects, etc. leveraging IP addresses only. My experience with that approach isn't great, so DNS and NTP everywhere are your friends.
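
As a small sanity check before deploying anything, a script like the sketch below can confirm forward and reverse lookup for every component in the lab. The FQDNs are placeholders; substitute whatever names you have registered in your own DNS.

```python
import socket

# Placeholder FQDNs for the lab components; substitute your own.
HOSTS = [
    "vcenter.lab.local",
    "ddve-prod.lab.local",
    "ppdm-prod.lab.local",
]

def check_dns(fqdn: str) -> None:
    """Verify forward (name -> IP) and reverse (IP -> name) resolution."""
    try:
        ip = socket.gethostbyname(fqdn)                # forward lookup
        reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
        print(f"{fqdn:<22} -> {ip:<15} -> {reverse_name}")
    except OSError as exc:
        print(f"{fqdn:<22} FAILED: {exc}")

for host in HOSTS:
    check_dns(host)
```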

Assumptions

As per my previous post, I'm going to attempt brevity and be as concise as possible. Partners and Dell employees reading this will have access to more of the in-depth guidance; I urge everybody to familiarise themselves with the documentation if possible.

I’ll publish an ‘end to end’ configuration/demo video at the end of this series. In the interim I like using the ‘Gallery’ and ‘Images’ so readers can pause and review in their own time.

Some Lower Level Detail

The following is the low-level setup, which should help guide you through the screengrabs.

This is all very straightforward. We have:

  • Our 4 VxRail Nodes with a vSAN Datastore pre-built.
  • Embedded VxRail vCenter server pre-deployed on the first host.
  • VMware Virtual Distributed Switch (VDS) with ESXi Management, vMotion and a couple of other networks provisioned.
  • Routing pre-configured on two Dell TOR switches. Some very basic routing between:
    • The internal VxRail networks (Management, vMotion, and some other management networks we have provisioned)
    • Reachability to the Vault network via a Replication interface (More on that in a while)
    • Reachability to the IP services layer (DNS & Redundant NTP servers)
  • DNS forward and reverse lookup configured and verified for all components.

Step 1: Deploy PowerProtect DDVE

The first step is to download the PowerProtect DDVE OVA from the Dell Data Domain Virtual Edition support site (you will need to register). Here you will also have access to all the official implementation documentation. As ever, I urge you to refer to this, as I will skip through much of the detail here. I'm making the bold assumption that we know how to deploy OVFs, etc. We will capture the process in the wrap-up video mentioned earlier.

During the OVA setup you will be asked which configuration size you wish to deploy. This is a demo, so go for the smallest: 8 TB, 2 CPUs, 8 GB memory.

The OVA setup will also ask you to select the destination networks for each source network or NIC. This is important as we will leverage the first for the ‘Management network’ and the second as the ‘Replication Network’ as per the previous diagram. In my setup I am using VLAN 708 for Management and VLAN 712 for the DD Replication Network.

Skip through the rest of the OVA deployment. We will deploy on the default vSAN datastore and inherit that storage policy. Of course, we have everything else deployed here also, which clearly isn't best practice, but this is of course a demo!

Once the OVA has deployed successfully, do not power it on just yet. We need to add the storage that will back the DD file system. You can get by with circa 250 GB, but I'm going to add 500 GB as the third hard disk. Right-click the VM, select Edit Settings and 'Add New Device'.

At this point you can power on the VM, open the web console and wait; it will take some time for the VM to initialise and boot. Once booted, you will be prompted to log on. Use the default combination of sysadmin/changeme (you will be immediately prompted to change the password).

By default, the management NIC will look for an IP address via DHCP. If you have a DHCP service running, then you can browse to the IP address and run the setup from there. Of course in most instances, this won’t be the case and we will assign IP addresses manually. I’m going to be a little ‘old skool’ in any regard, I like the CLI.

  • Tab through the EULA and enter your new password combination; my demo will use Password123!. Incredibly secure, I know.
  • Answer 'yes' when asked to create a security officer. Pick a username; I am using 'crso'. The password needs to be different from your newly created sysadmin password.
  • Answer ‘no’ when prompted to use the GUI.
  • Answer ‘yes’ when asked to configure the network.
  • Answer ‘no’ when asked to use DHCP.
  • Follow the rest as prompted:
    • Hostname – your full FQDN
    • Domainname
    • ethV0 (used for Management)
    • ethV1 (we will use this for replication to the vault)
    • Default Gateway (will be the gateway of ethV0)
    • IPv6 – Skip this by hitting return
    • DNS Servers
  • You will be presented with the summary configuration, if all good then ‘Save’.
  • When prompted to configure e-licenses, type 'no'. We will be using the fully functioning 90-day trial.
  • When prompted to 'Configure System at this time', type 'no'.
  • You will then be presented with a 'configuration complete' message.

Step 2: Initial Configuration of DDVE

Now browse to the DDVE appliance via the FQDN you have assigned. This should work if everything is set up correctly.

Log on using sysadmin and the password you created earlier.

You will be presented with a screen similar to the following. At this point we have no file system configured.

Note: there is a 6-step wizard we could have initiated earlier, but for the purposes of the Cyber Recovery demo, it is helpful to get a 'look and feel' of the DDVE interface from the start. This is just my preference.

Follow the wizard on screen to create the file system. When presented with the 'cloud tier' warning, click Next and ignore it. Click 'SKIP ASSESSMENT' in step 4, and then click 'Finish'. Step 6 will take some time to process.

Enable DD Boost and Add User

We need to enable the DD Boost protocol to make the deduplication process as efficient as possible and to implement client-side offload capability. We will see where that fits in during a future post.
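
Before we click through the GUI, a quick aside for anyone new to deduplication. The toy Python sketch below shows the general idea: split data into blocks, fingerprint each block, and only store blocks you have never seen before. The real DD Boost implementation is far more sophisticated (variable-length segmentation, with much of the work pushed out to the backup client so unchanged segments never cross the wire), so treat this purely as a concept illustration.

```python
import hashlib

BLOCK_SIZE = 4096  # toy fixed-size blocks; real appliances use variable-length segments

def dedupe(data: bytes, store: dict) -> list:
    """Store each unique block once, keyed by its SHA-256 fingerprint, and
    return the list of fingerprints (the 'recipe') that reconstructs the data."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)   # only previously unseen blocks are written
        recipe.append(fingerprint)
    return recipe

store = {}
backup_1 = b"A" * (8 * BLOCK_SIZE)                              # first full backup
backup_2 = backup_1[:4 * BLOCK_SIZE] + b"B" * (4 * BLOCK_SIZE)  # half the data changed

dedupe(backup_1, store)
dedupe(backup_2, store)
logical = len(backup_1) + len(backup_2)
physical = sum(len(block) for block in store.values())
print(f"logical data: {logical // 1024} KiB, physically stored: {physical // 1024} KiB")
```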

Navigate to Protocols -> DD Boost and Click Enable

We want to add a DD Boost user with admin rights. First, create the user by navigating to Administration -> Access -> Local Users -> Create.

Add this newly created user as a user with DD Boost access. Follow the workflow and ignore the warning that this user has admin access.

Wrap Up

So there you have it: a quick overview of our demo environment, and we have stood up the production-side DDVE appliance with a very basic configuration. In the next post we will stand up the production-side PowerProtect Data Manager and knit these two components together with vCenter.

As mentioned earlier, I have skimmed through quite a bit of detail here in terms of the setup. The end goal is for us to dig deeper into our understanding of the Cyber Recovery solution proper, so the above is in no way representative of best practice as regards DDVE design (the DD storage sits on the same vSAN datastore that the DDVE VM and the machines it protects reside upon, for instance! Definitely not best practice).

For best practice, always refer to the official Dell documentation.

Thanks for taking the time to read this, and if you have any questions/comments, then please let me know

Cheers

Martin

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL

Introducing Dell PowerProtect Cyber Recovery – Architecture Basics – A Practical Example

Vault 101 – Simple Questions, Simple Answers

We all suffer from terminology/lingo/jargon overload when discussing something new and multi-faceted, especially in the information security space. I am all too often guilty of venturing far too easily into the verbose depths…… In this instance however, I’m going to try and consciously keep this introductory post as high level as possible and to stick to the fundamentals. For sure I will likely miss something along the way, but we can fill in the blanks over time.

Brevity is beautiful….

To that end, this post will concentrate on providing simple concise answers to the following questions.

  1. What do we need in order to create an operational Vault?
  2. How do we close and lock the door in the ‘vault’?
  3. How do we move data into the vault, and use that data to re-instantiate critical applications?

This implies we will not discuss some very key concepts, such as the following:

  • Are we sure the Data hasn’t changed during the process of placing the ‘Data’ into the Vault? (Immutability)
  • Tools and processes to guarantee immutability?
  • Who moved the Data and were they permitted to do so, what happened? (AAA, RBAC, IAM)
  • How fast and efficiently we moved the ‘Data’ to make sure the ‘Vault’ door isn’t open for too long (Deduplication, Throughput)
  • Where is the ‘Source’ and where is the ‘Vault’? (Cloud, On-Premise, Remote, Local). How many vaults do we have?

Of course, in the real world these are absolutely paramount and top of mind when discussing technical and architectural capability. Rest assured we will revisit these topics in detail along with where everything fits within the NIST and COBIT frameworks in later posts.

What do we need in order to create an operational 'Vault'?

Let’s start with a pretty common real-world example. A customer running mixed workloads on a VMware infrastructure. Of course, they have a Dell VxRail cluster deployed!

In the spirit of keeping this as simple as possible, the following represents the logical setup flow:

  1. We need some mechanism to back up our virtual machines (VMs) that are deployed on the vSphere cluster. We have a couple of choices; in this instance we will leverage Dell PowerProtect Data Manager. There are others, such as Avamar and NetWorker, that we will explore in a later post, but PPDM is a great fit for protecting VMware-based workloads.
  2. PowerProtect Data Manager (PPDM) does the backup orchestration, but it needs to store the data somewhere. This is where Dell PowerProtect Data Domain enters the fray. The platform comes in all shapes and sizes, but again, for this VMware use case, the virtual edition, Dell PowerProtect DD Virtual Edition (DDVE), is a good option.
  3. We need to get the Data into the ‘Vault’. We do this by pairing the Production DDVE with a DDVE that physically sits on a server in the Vault. The vault could of course be anywhere, in the next aisle, in the cloud. At this point, there is no need to get into too much detail around how they are connected, other than to say there is a ‘network’ that connects them. What we do with this network is a key component of the vaulting process. More on that in a while.
  4. Once we pair the DDVE appliances across the network, we create an MTree replication pair using the DD OS software. We'll see this in action in a future post. The replication software copies the data from the source DDVE appliance to the vault DDVE appliance. PowerProtect Cyber Recovery will leverage these MTree pairs to initiate replication between the production side and the vault.
  5. We will deploy another PowerProtect Data Manager in the vault; this will be available on the vault network but left in an unconfigured state. It will be added as an 'application asset' to the Cyber Recovery appliance. PowerProtect Cyber Recovery will leverage an automated workflow to configure the vault PPDM when a data recovery workflow is initiated.
  6. Once we have the basic infrastructure set up as above, we deploy the PowerProtect Cyber Recovery software in the vault. We will deploy this on the VxRail appliance. During setup, the Cyber Recovery appliance is allocated storage 'assets'; a mandatory asset is the DDVE.

So, there you go, a fully functional Cyber Recovery Vault leveraging software only. Of course, when we talk about scale and performance, then the benefits of the physical Data Domain appliances will begin to resonate more. But for now, we have an answer to the first question.

Of course, the answer to the second question we posed is key…….

How do we close the vault and lock the door?

This part is fairly straightforward as the Cyber Recovery software automates the process. Once the storage asset is added to Cyber Recovery and a replication policy is enabled then the vault will automatically lock. Don’t worry we will examine what the replication policy looks like and how we add a storage asset in a future post.

Of course, I still didn’t answer the question. In short, the process is fairly straightforward. As mentioned earlier, I skipped over the importance of ‘network’ connectivity between the ‘Production’ side DDVE and the ‘Vault Side’ DDVE above.

Remembering that the Cyber Recovery software now controls the Vault side DDVE appliance (asset) then:

  1. When a Policy action (such as SYNC) is initiated by Cyber Recovery, then the software administratively opens the replication interface on the DDVE appliance. This allows the Data Domain software to perform the MTree replication between the Production Side and Vault.
  2. When the Policy action is complete, then the Cyber Recovery software closes the vault by administratively shutting down the replication interface on the vault side DDVE appliance.
  3. The default state is admin down, or locked.

This is in essence the logic behind the ‘Operational Airgap’. Again, we will dig into this in more depth in a future post, but for now I’m going to move on to the third question. Brevity is beautiful!
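
To summarise the sequence, here is a short, purely conceptual Python sketch of that logic. It is not the Cyber Recovery software or its API, just the shape of the behaviour described above: the vault's resting state is locked, the replication interface is brought up only for the duration of a policy action, and it is closed again even if the job fails.

```python
import contextlib
import time

class ReplicationInterface:
    """Toy stand-in for the vault-side DDVE replication interface."""
    def __init__(self) -> None:
        self.admin_up = False   # resting state: admin down, i.e. locked

    def open(self) -> None:
        self.admin_up = True
        print("vault: replication interface admin up (vault unlocked)")

    def close(self) -> None:
        self.admin_up = False
        print("vault: replication interface admin down (vault locked)")

@contextlib.contextmanager
def operational_airgap(interface: ReplicationInterface):
    """Open the 'air gap' only for the duration of a policy action."""
    interface.open()
    try:
        yield
    finally:
        interface.close()       # always re-lock, even if the sync fails

def run_sync_policy(interface: ReplicationInterface) -> None:
    with operational_airgap(interface):
        print("sync: MTree replication from production DDVE to vault DDVE...")
        time.sleep(1)           # stand-in for the actual replication job
        print("sync: complete")

run_sync_policy(ReplicationInterface())
```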

How do we use a copy of the ‘Data’ in the vault if required to re-instantiate a business function and/or application?

The Cyber Recovery software provides a policy-driven UI, which includes:

  • Policy creation wizards allowing for point-in-time and scheduled execution of replication and copy tasks.
  • Recovery assistance with the ability to easily recover data to the recovery host(s). e.g., VxRail cluster in our example.
  • Automated recovery capability for products such as NetWorker, Avamar and PowerProtect Data Manager. For example, using point-in-time (PIT) copies to rehydrate PPDM data in the Cyber Recovery vault.

We have skipped over this last question to an extent, but I think it is deserving of its own post. For example, we will cover in depth how we leverage PPDM in the vault to rehydrate an application or set of VMs.

Up next

Hopefully you will find this useful. Clearly the subject is much more extensive, broader and deeper than what we have described thus far. The intent though was to start off with a practical example of how we can make the subject ‘real’. How does this work at a very basic architectural level using a common real-world example? Keeping it brief(ish) and keeping it simple…. we will add much more detail as we go.

Stay tuned for my next post in the series, which will cover how we stand up the production side.

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL

Blog Post Zero: A Framework for Cyber Resilience 101

I’m sure at this stage that everybody is very much aware of the increased threat of ransomware based cyber-attack, and the importance of cyber security. To that end, and to the relief of all, I’m going to pleasantly surprise everybody up front, by not quoting Gartner or IDC. I think we are past having to have the industry analysts reaffirm what we already know. This is the here and now.

That said, I think it is important to call out one important emerging trend. Organisations in every industry are moving from a 'threat prevention strategy' to a more rounded 'cyber resilience model' for a holistic approach to cyber security. Bottom line: your organisation will be the subject of an attack. Hopefully, your threat prevention controls will be enough; alas, I suspect not, and increasingly there is a tacit acceptance that prevention will never be 100% successful. This creates a problem.

More and more, the question is not 'how did you let it happen?' but rather 'what did you do about it?' All too often, even the largest organisations have struggled to answer the latter and have panicked in the eye of the cyber storm… too late, of course, at that point. Damage done, or worse, damage still being done whilst we look on like helpless bystanders, desperately seeking coping strategies to manage our reputation and minimise loss.

Damage limitation whilst the damage is still happening, is not a good place to be.

We are in ‘coping’ mode and certainly not in control. Again, we all know of high visibility examples of ransomware cyber-attacks, where ‘hoping for the best but expecting the worst’ are the order of the day. Fingers crossed or more accurately in the dam…

How do we shift the dial from ‘Cope and Hope’ to ‘Resilience and Control’?

Thankfully we have some very mature methodologies/frameworks that can help us develop a cohesive plan and strategy to take back control.  The ‘Five Functions’ as defined by the NIST Cybersecurity Framework is an example of a methodology which helps us both frame the problem and define a resilient solution. Perhaps a cohesive response to ‘what did you do about it?’……

Organisations need the tools and capability to ‘Detect’, ‘Respond’ and ‘Recover’ from an attack, mitigating the damage and assure data integrity to restore business function and reputation. 

NIST focuses on restorative outcomes. The implication is that cybersecurity incidents will happen; it's what you do about it that matters most. For example:

“Ensuring the organization implements Recovery Planning processes and procedures to restore systems and/or assets affected by cybersecurity incidents.”

Practical steps towards NIST-like outcomes

Dell PowerProtect Cyber Recovery is one such solution that aids in the implementation of not only the ‘Respond’ pillar but also of course ‘Detect’ and ‘Recover’.  Over the coming weeks, we will delve into what this means in practical terms.

Properly implemented, the adoption of a cohesive framework such as NIST, together with well-structured policies and controls, help to shift the dial towards us taking back resilient control and away from the chaos of ‘cope and hope’.

However, as somebody very famous once said, “there is nothing known as ‘perfect’. It’s only those imperfections which we choose not to see”. Or, more accurately, that we can't see yet. So clearly an effective cyber resilient architecture must constantly evolve and be flexible enough to respond to future threats not yet defined. This is why the fluidity offered by a framework such as NIST is so useful.

There are other exciting developments on the way that will further shift the balance away from the bad actors, such as Zero Trust and Zero Trust Architectures (these fit nicely into the Identify and Protect pillars). This blog series will look to deep dive into these areas in the coming months also.

This will not be a marketing blog, however; there are way better people at that than I. I'll happily leverage their official work where necessary (citations via hyperlinks are my friend!). The intent is that this will be a practical and technical series, with the goal of peeling back the layers, removing the jargon where possible and providing practical examples of how Dell Technologies products and services, amongst others and together with our partners, can help meet the challenges outlined above. (Disclosure & Disclaimer: even though I work for Dell, all opinions here are my own and do not necessarily represent those of Dell; you'll see me repeat that quite a bit!)

What is a Resilient Architecture?

To conclude, we should think of a resilient architecture as an entity that is adaptive to its surroundings: it can withstand the natural, accidental or intentional disasters it may have to face in its locale and environs.

Resilient architectures are not new; we have been building data centers for decades in high-risk environments such as earthquake zones and flood plains, where we expect failure and disaster. It will happen. Death and taxes and all that…

Our DC storage, compute and network architectures have been resilient to such challenges for years, almost to the point where it is taken for granted. This tree certainly is under stress, but it hasn't blown down…

Unfortunately, the security domain hasn't quite followed in lockstep. It is only relatively recently that it has begun to play catch-up, previously wedded to the belief that we could prevent everything by building singular monolithic perimeters around the organisation. Anything that got through the perimeter we could fix. Clearly, this is no longer the case.

The mandates around Zero Trust and Zero Trust Architectures are an acknowledgement that this approach must change, in light of the proliferation of multi-cloud, the ever more mobile workforce, and the failure of organisations to deal with cybersecurity attacks in a resilient, controlled fashion that protects their assets, revenue, reputation and IP.

One thing is for sure: these challenges are not going away, and the security threat landscape is becoming infinitely more complex and markedly more unforgiving. Thankfully, flexible, modular frameworks such as NIST and ZTA, in addition to emerging technical tools, controls and processes, will allow us to deliver architectures that are both secure and, ultimately and more importantly, resilient.

DISCLAIMER
The views expressed on this site are strictly my own and do not necessarily reflect the opinions or views of Dell Technologies. Please always check official documentation to verify technical information.

#IWORK4DELL
