Module 4 Designing a Virtual Compute Environment Flashcards

(342 cards)

0
Q

Isolates the virtual servers from the underlying hardware, while simultaneously isolating the applications from one another

A

Hypervisor

1
Q

What provides the ability to virtualize a number of servers on the same hardware?

A

A hypervisor

2
Q

5 Characteristics of an x86 Classic Host BEFORE Virtualization

A
  1. Runs a single OS per machine at a time.
  2. Couples software and hardware TIGHTLY.
  3. May create conflicts when multiple applications run on the same system.
  4. Underutilizes resources.
  5. Is inflexible and expensive.
3
Q

5 Characteristics of an x86 Host After Virtualization

A
  1. Runs MULTIPLE OSs per machine concurrently.
  2. Makes OS and applications hardware independent.
  3. Isolates VMs from each other, hence no conflict.
  4. Improves resource utilization.
  5. Offers flexible infrastructure at low cost.
4
Q

3 Characteristics of a Type 1 Hypervisor

A
  1. Runs as an OS
  2. Installs and runs on x86 bare-metal hardware
  3. Requires certified hardware
5
Q

2 Characteristics of a Type 2 Hypervisor

A
  1. Installs and runs as an APPLICATION.
  2. Relies on underlying OS on the physical machine for device support and physical resource management.

6
Q

Also known as a Type 1 Hypervisor

A

Bare-Metal Hypervisor

7
Q

Also known as a Type 2 Hypervisor

A

Hosted Hypervisor

8
Q

Also known as a Hosted Hypervisor

A

Type 2 Hypervisor

9
Q

Also known as a Bare-Metal Hypervisor

A

Type 1 Hypervisor

10
Q

Type of hypervisor which supports the broadest range of hardware configurations

A

Type 2 - Hosted Hypervisor

11
Q

Why does a Type 2 Hypervisor support the broadest range of hardware configurations?

A

Because the Type 2 Hypervisor is running on an OS.

12
Q

4 Characteristics of Full Virtualization

A
  1. Virtual Machine Monitor (VMM) runs in the privileged Ring 0.
  2. VMM decouples Guest OS from the underlying physical HW.
  3. Each VM is assigned a VMM.
  4. Guest OS is NOT aware of being virtualized.
13
Q

Two product examples of hypervisors which implement the full virtualization technique

A
  1. VMware ESX/ESXi
  2. Microsoft Hyper-V

14
Q

Essential in a Full Virtualization

A

Binary Translation (BT) of OS instructions

15
Q

Replacing the Guest OS instructions that cannot be virtualized, with new instructions that have the same effect on the virtual hardware

A

Binary Translation (BT)

16
Q

Definition of Binary Translation (BT)

A

Replacing the Guest OS instructions that cannot be virtualized, with new instructions that have the same effect on the virtual hardware

17
Q

Provides virtual components to each VM

A

Virtual Machine Monitor (VMM)

18
Q

Performs Binary Translation (BT) of non-virtualizable OS instructions

A

Virtual Machine Monitor (VMM)

19
Q

Two functions of a Virtual Machine Monitor (VMM)

A
  1. Provides virtual components to each VM.
  2. Performs Binary Translation (BT) of non-virtualizable OS instructions.

20
Q

True or False: The VMM provides each VM all the services similar to a physical computer, including a virtual BIOS and virtual devices.

A

True

21
Q

Decouples the Guest OS from the underlying physical hardware

A

Virtual Machine Monitor (VMM)

22
Q

Why is Binary Translation (BT) said to provide Full Virtualization?

A

Because the hypervisor completely decouples the Guest OS from the underlying hardware.

23
Q

4 Characteristics of Paravirtualization

A
  1. Guest OS KNOWS that it is virtualized.
  2. Guest OS runs in Ring 0.
  3. A modified Guest OS kernel is used, as in Linux and OpenBSD.
  4. Unmodified Guest OSs, such as MS Windows, are not supported.
24
In this approach, the Guest OS kernel is modified to eliminate the need for Binary Translation (BT).
Paravirtualization
25
Possible in open source OSs
Paravirtualization
26
In which Ring does a paravirtualized Guest OS run?
Ring 0
27
User applications run in which Ring?
Ring 3
28
Two product examples of paravirtualization
1. Xen | 2. KVM
29
Virtualization approach adopted for unmodified Guest OSs, such as MS Windows
Full Virtualization
30
Determines the level of consolidation that can be achieved in a hypervisor environment, as well as the underlying design of the compute layer of a VDC.
Scalability
31
Limits capacity of each host
Number and size of storage devices
32
Restricts the amount of bandwidth available to the host
Number of I/O cards (network and/or storage)
33
Influences the choice of server hardware
Number of physical CPUs (cores) and amount of memory
34
Impacts how many physical CPUs are required to support your VMs
Number of virtual CPUs per physical CPU (core)
35
3 VM scalability characteristics
1. # of Virtual CPUs 2. Virtual Memory 3. Virtual Disk Size
36
2 Cluster Scalability Characteristics
1. # of hosts | 2. # of VMs
37
7 Host Scalability Factors
1. # of Physical CPUs 2. Physical Memory 3. # of I/O Cards 4. # of Storage Devices 5. Size of Storage Devices 6. # of Virtual CPUs per Physical CPU 7. # of VMs
38
Scalability in a hypervisor environment determines what two things?
1. Level of consolidation that can be achieved. | 2. Underlying design of the compute layer of a VDC.
39
Clusters are important to what?
High Availability
40
Can comprise a significant portion of the cost of a technology implementation
Licensing
41
How are VM OSs licensed?
1. Per VM | 2. Per Hypervisor
42
True or False: In a VDC, the hypervisor is licensed, as is the OS within EACH VM, and any applicable application licenses.
True
43
True or False: For a one-to-one migration of physical servers to VMs, licensing costs will increase.
True
44
Two things that can lead to a decrease in licensing costs
1. Consolidating multiple VMs onto a single hypervisor (server). 2. Consolidating redundant VMs that existed for fault tolerance / high availability.
45
A hypervisor license may vary based on what three factors?
1. Number of CPUs 2. Amount of Memory 3. Number of VMs Supported
46
True or False: VMware vSphere 5.0 is licensed by physical host.
False
47
How is VMware vSphere 5.0 licensed?
Licensed based on the number of physical CPUs managed by vCenter. Each physical CPU license covers any number of processor cores and any amount of physical memory. The amount of virtual memory (vRAM) available is limited to a specific entitlement per CPU license, but the pooled entitlement can be allocated in any way across all licensed CPUs.
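The per-CPU model above can be sketched as a small calculator. This is a rough illustration, not VMware's actual pricing logic; the 32 GB vRAM entitlement per license is an assumed example value (actual entitlements varied by vSphere 5.0 edition).

```python
import math

def licensing(physical_cpus: int, vram_needed_gb: int,
              vram_per_license_gb: int = 32) -> int:
    """Licenses needed: one per physical CPU, plus extras if the
    pooled vRAM entitlement falls short of what the VMs require.
    The 32 GB default is an assumed example entitlement."""
    licenses = physical_cpus                       # one license per CPU
    pooled_vram = licenses * vram_per_license_gb   # entitlement pools across CPUs
    shortfall = max(0, vram_needed_gb - pooled_vram)
    extra = math.ceil(shortfall / vram_per_license_gb)
    return licenses + extra

# 4 CPUs pool 128 GB of vRAM; 160 GB of allocated vRAM needs one more license.
assert licensing(4, 100) == 4
assert licensing(4, 160) == 5
```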
48
True or False: In some cases, VM licensing may be identical to the licensing for physical servers.
True
49
True or False: In some cases, it may be possible to configure multiple VMs with a single license.
True
50
A single Windows 2008 R2 Enterprise license allows how many instances to run on a single physical server?
4, regardless of hypervisor brand
51
A single Windows 2008 R2 Datacenter license allows how many instances to run on a single physical server? What is the associated caveat?
Unlimited. | Each physical server must be licensed separately.
52
3 management options for a virtual compute environment
1. Web 2. Client / Server 3. CLI
53
3 Virtual Compute Environment Management Tool Considerations
1. Options (Web, Client/Server, CLI) 2. Usability of Interface 3. Support for common tasks in EACH interface
54
3 Product Examples of Virtual Compute Environment Management Tools
1. XenCenter 2. VMware vCenter Server 3. MS System Center 2012 Infrastructure Management
55
3 Integration Considerations for Storage
1. Exposed APIs 2. I/O Offload, Array Information 3. Multipathing Software
56
2 Integration Considerations for Network
1. 3rd Party Distributed Switches | 2. Separation of roles between network and server admins
57
Native multipathing software included with VMware vSphere
NMP
58
Native multipathing software included with Microsoft Hyper-V
MPIO
59
Three NMP options for distributing load
1. Fixed 2. Most Recently Used (MRU) 3. Round Robin (RR)
60
NMP Fixed Load Distribution
NMP Fixed uses the same path at all times
61
NMP MRU Load Distribution
NMP MRU uses the same path until it becomes unavailable.
62
NMP RR Load Distribution
NMP RR cycles through all available paths - regardless of load.
63
Four MPIO options for distributing load
1. Round Robin (RR) 2. Round Robin with Subsets 3. Dynamic Least Queue Depth 4. Weighted Path
64
MPIO Round Robin (RR) Load Distribution
MPIO RR cycles through all available paths - regardless of load. Identical to NMP RR.
65
MPIO Round Robin with Subsets Load Distribution
Used for Active / Passive arrays so that the passive paths are not used unless a failure occurs with the primary storage controller.
66
MPIO Dynamic Least Queue Depth Load Distribution
MPIO Dynamic Least Queue Depth directs I/O to the path with the least number of outstanding requests.
67
MPIO Weighted Path Load Distribution
MPIO Weighted Path allows you to set priority on the paths.
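Two of the load-distribution policies above can be sketched in a few lines. This is an illustrative model, not vendor multipathing code: Round Robin cycles through paths regardless of load, while Dynamic Least Queue Depth picks the path with the fewest outstanding requests.

```python
from itertools import cycle

class RoundRobinSelector:
    """Cycles through all available paths regardless of load (NMP/MPIO RR)."""
    def __init__(self, paths):
        self._cycle = cycle(paths)

    def next_path(self):
        return next(self._cycle)

class LeastQueueDepthSelector:
    """Directs I/O to the path with the fewest outstanding requests
    (MPIO Dynamic Least Queue Depth)."""
    def __init__(self, paths):
        self.outstanding = {p: 0 for p in paths}

    def next_path(self):
        path = min(self.outstanding, key=self.outstanding.get)
        self.outstanding[path] += 1   # one more request now in flight
        return path

    def complete(self, path):
        self.outstanding[path] -= 1   # request finished on this path
```

A Weighted Path policy would differ only in that `next_path` would choose by an administrator-assigned priority rather than by queue depth.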
68
Four multipathing challenges
1. Multiple paths are not dynamically load balanced. 2. Native multipathing is not optimized for specific storage arrays. 3. Different native multipathing across hypervisors, with differing features. 4. Must support different storage protocols.
69
True or False: Multipathing algorithms are not optimized for any specific storage array.
True
70
Multipathing algorithms are provided by what?
The OS itself
71
True or False: Multipathing software are designed to work with a number of storage systems.
True
72
Provides options for load balancing and tighter integration with different types of arrays
EMC PowerPath/VE
73
4 criteria evaluated by EMC PowerPath/VE when choosing a path
1. Pending I/Os 2. Size of I/Os 3. Types of I/Os 4. Most Recently Used Paths
74
PowerPath/VE has optimized algorithms for what two storage platforms?
1. EMC VNX | 2. EMC VMAX
75
True or False: Multipathing is automatically configured for Active/Active (VMAX) arrays and Active/Passive/ALUA (Asymmetrical Logical Unit Access) (VNX) arrays.
True
76
PowerPath/VE supports which three connectivity technologies?
1. FC 2. FCoE 3. iSCSI
77
Connectivity options supported by PowerPath/VE
1. HBA 2. CNA 3. iSCSI HBA 4. Software iSCSI Initiator
78
4 Characteristics of EMC PowerPath/VE
1. Different algorithms for supported storage. 2. Integrates with Active/Passive or Active/Active storage. 3. Supports FC, FCoE and iSCSI. 4. Same base feature set across Hyper-V and vSphere.
79
3 Virtual Compute Security Threats
1. Attacks in the hypervisor risk all VMs running on it. 2. DoS attacks can impact numerous VMs across servers. 3. VM infection can spread quickly.
80
Four Virtual Compute Security Measures
1. Automated (or partially automated) Patching System 2. Integration with Antivirus / Anti-malware Products 3. Resource Limitations 4. VM Isolation
81
Minimum requirement for an automated patching system
At a minimum, should provide a mechanism to patch and upgrade the hypervisors in the environment.
82
Significant advantage of an automated patching system
Ability to support the VM operating systems and / or applications
83
Two considerations with antivirus/anti-malware products
1. Automated deployment | 2. External appliance
84
Can reduce the impact of a DoS attack by throttling the virtual CPU or network
Resource Limitations
85
Prevents an infected VM from starving the other VMs and avoids a cascading disruption.
Resource Limitations
86
Two considerations for VM isolation
1. Mechanism to prevent VM to VM communication. | 2. Mechanism to restrict shared resources.
87
Three established best practices for computing platforms
1. Disable unnecessary services. 2. Establish a logging infrastructure. 3. Perform vulnerability assessments.
88
Two forms of hypervisor encryption
1. Secure Hypervisor Configuration (Hash) | 2. Data Encryption
90
Secure construct where a hypervisor's hash is stored
Trusted Platform Module (TPM)
91
Explain hypervisor hashing
Portions of the hypervisor's hardware and software can be examined and a hash created. The hash is stored in the Trusted Platform Module (TPM). Each time the hypervisor boots, a new hash is computed and compared against the stored hash; if an anomaly is detected, the hypervisor is prevented from booting.
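The compare-before-boot logic above can be sketched as follows. This is a minimal illustration only: a real TPM measurement chain uses PCRs and firmware support, whereas here the trusted hash is simply a stored string.

```python
import hashlib

def measure(hypervisor_image: bytes) -> str:
    """Hash the hypervisor's code/config (the 'measurement')."""
    return hashlib.sha256(hypervisor_image).hexdigest()

def allow_boot(hypervisor_image: bytes, trusted_hash: str) -> bool:
    # Boot proceeds only if the current measurement matches the stored one.
    return measure(hypervisor_image) == trusted_hash

image = b"hypervisor code and config"      # stand-in for the real image
trusted = measure(image)                   # taken at install time

assert allow_boot(image, trusted)              # unmodified: boot allowed
assert not allow_boot(image + b"!", trusted)   # tampered: boot blocked
```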
92
Two transport layer encryption mechanisms for data that is being sent to / from an application (LAN data in transit)
1. Secure Sockets Layer (SSL) | 2. Transport Layer Security (TLS)
93
For data that is being written to storage, encryption solutions can be implemented at what two levels?
1. Hypervisor Level | 2. VM Level
94
Hypervisor storage encryption mechanisms can be integrated into what three places?
1. Into the hypervisor OS 2. A 3rd party offering 3. Part of the storage solution
95
Encryption solution for storage
EMC PowerPath/VE with RSA Encryption
96
True or False: The native Hyper-V multipathing, MPIO, is not optimized for any array and does not provide a robust framework for multipathing.
True
97
Four characteristics of an EMC PowerPath/VE with RSA Encryption solution
1. Data encrypted at host before being transmitted onto SAN. 2. Data encrypted on a per-LUN basis. 3. Supports FC, FCoE, and iSCSI. 4. Different algorithms supported for storage.
98
Significance of data encryption on a per-LUN basis
You can choose which LUNs to encrypt and which to leave in the clear.
99
Protocols supported by EMC PowerPath/VE with RSA Encryption
All protocols that PowerPath/VE supports, including: 1. Fibre Channel 2. Fibre Channel over Ethernet 3. iSCSI
100
Five considerations with regards to BC/DR
1. HA Capabilities 2. Fault Tolerance 3. Local / Remote Recovery 4. Remote HA Capabilities / Online Remote Mobility 5. Integration with Other Resources (storage, network, etc.)
101
Why use a distributed cluster?
To get benefits of high availability and workload management combined with the security of a remote site.
102
Two remote HA / online remote mobility capabilities
1. Online migration | 2. Stretched clusters
103
Two BC/DR Local / Remote Recovery Considerations
1. Ability to restart processing on alternate compute resources | 2. Granularity - entire cluster, entire environment
104
Advantage of Standby VM Mirror
No downtime for server or VM failure
105
Allows processing to restart on the copy without any downtime
Option for a fully redundant copy of a VM
106
Two basic HA capabilities desirable in a hypervisor
1. Ability to automatically restart a failed VM in the event that the VM or the hosting server fails. 2. Ability to automatically redistribute workloads to prevent performance degradation during peak times or processing spikes.
107
Ability to manually move a workload from one location to another
Online migration
108
Two compute requirements for online migration
1. Hypervisor at both locations | 2. Sufficient capacity to host the migrated VM
109
Online migration requirement for storage
Access storage across data centers
110
Two online migration requirements for network
1. Layer 2 adjacency | 2. Bandwidth and latency
111
Refers to moving a VM from one location to another without downtime - typically from one data center to another
Online Migration over Distance
112
Refers to a VLAN spanning data centers
Layer 2 Adjacency (a.k.a. Stretched VLANs)
113
Online migration solution for VMware vSphere environments
vMotion
114
Online migration solution for Hyper-V environments
SAN Migration
115
vMotion Requirement
Both source and destination hypervisor must have access to the storage where the VM's virtual disks and configuration are stored.
116
How does vMotion work?
vMotion mirrors the VM state to the destination. When it is synchronized across servers, the VM is quiesced momentarily while the ownership is transferred to the destination.
117
How does SAN Migration for Hyper-V environments work?
SAN Migration quiesces the VM and then changes the storage masking to present the appropriate LUNs to the destination server and then restarts the VM on the destination.
118
SAN Migration requirement
SAN Migration requires integration with the storage system
119
SAN Migration limitation
SAN Migration only supports a single VM per LUN.
120
Geographically dispersed clusters of servers
Stretched clusters
121
Two compute requirements for stretched clusters
1. Cluster members at both locations. | 2. Sufficient capacity to host the migrated VM.
122
Storage requirement for stretched clusters
Mirrored storage
123
Two network requirements for stretched clusters
1. Layer 2 adjacency | 2. Bandwidth & latency
124
The requirements for stretched clusters are identical to those for Online Migration over Distance, with what exception?
With the exception of storage, which needs to be a mirrored configuration and not cross-site access.
125
True or False: Online migration of VMs within a cluster is widely supported by hypervisors.
True
126
Resource group that allows HA and load balancing between nodes
Cluster
127
Allows automated migration of VMs from one node to another
Stretched cluster
128
How do Stretched Clusters differ from Online Migration over Distance?
In a stretched cluster, the servers are part of the same cluster.
129
Common requirement for features like High Availability and Online Migration
Shared Storage
130
Traditional mechanism to provide storage to a server
Block, or LUN-level, storage
131
Has become more prevalent in use with hypervisors, primarily due to lower cost
File-level storage
132
Block storage environments
DAS or SAN
133
File storage environment
NAS
134
Four hypervisor storage considerations
1. Shared storage required? (Clustering, FT, etc.) 2. Block or file support, or both? 3. Protocol support 4. BC/DR requirements and options
135
Storage protocol questions for hypervisors
1. Does the hypervisor provide native drivers for FC or iSCSI HBAs or FC over Ethernet CNAs? 2. If not, do the HBA/CNA vendors provide support for the hypervisors? 3. Does the hypervisor support an integrated software initiator option for the FCoE or iSCSI? 4. Is there a 3rd party option, such as the Intel FCoE stack? 5. Can the hypervisor act as a NFS or CIFS client, allowing it to integrate with NAS technologies?
136
Hypervisor storage questions for BC/DR
1. Does the hypervisor have integrated recovery features that can be integrated with your storage infrastructure? 2. Does the hypervisor require certain storage types or configurations in order to utilize the integrated recovery options?
137
TPM
Trusted Platform Module (TPM)
138
Four Network Requirements
1. Support for 10GigE and FCoE 2. NIC Aggregation 3. Latency & Bandwidth Requirements 4. Impact to BC/DR
139
Why can it be easy to exceed the bandwidth capabilities of one or two physical GigE interfaces?
Because a number of physical hosts can be consolidated onto a single hypervisor.
140
Number of I/O slots in servers with smaller form factors
Generally limited to 2 or 3.
141
Two things promoted by I/O slot limitations in small form factor servers
1. 10 GigE Interfaces | 2. Converged Network Adapters
142
Two things which allow the consolidation of multiple interfaces into a single (or two for redundancy) larger bandwidth interface
1. 10GigE Interfaces | 2. Converged Network Adapters
143
Checkpoint if you are planning to utilize either 10GigE or a CNA
Ensure that the hypervisor supports the option you want to deploy
144
True or False: Regardless of the type of NIC/HBA/CNA used, two ports of each type are required for redundancy.
True
145
True or False: For some types of connectivity, such as Fibre Channel or FCoE, each interface is treated as a separate entity, and load balancing is typically done using multipathing software.
True
146
Three hypervisor I/O questions
1. Does the hypervisor include integrated multipathing software? 2. If so, what options does it have for distributing I/O across the various interfaces? 3. What types of storage does it interoperate with?
147
From a NIC perspective, how is load balancing done?
Via software
148
Two NIC configuration modes
1. Active / Active Mode | 2. Active / Passive Mode
149
NIC mode where the interfaces can be aggregated into a port channel or used as separate entities.
Active / Active Mode
150
NIC mode where some NICs wait for another NIC to fail before becoming active and transmitting or receiving data.
Active / Passive Mode
151
Four NIC mode questions
1. What are the available options for managing how data is transmitted using multiple active NICs? 2. Is each VM pinned to a specific NIC? 3. Is data spread out using a round-robin approach? 4. Is it based on the MAC or IP address?
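One common answer to the questions above is MAC-based pinning in Active/Active mode: each VM's virtual MAC hashes to one physical NIC, so its traffic stays on a stable uplink. A minimal sketch, with illustrative MACs and NIC names:

```python
def nic_for_vm(vm_mac: str, nics: list) -> str:
    """Pin a VM to a physical NIC by hashing its virtual MAC address.
    A stable hash means the same VM always uses the same uplink."""
    mac_bytes = bytes(int(octet, 16) for octet in vm_mac.split(":"))
    return nics[sum(mac_bytes) % len(nics)]

nics = ["vmnic0", "vmnic1"]          # two uplinks in Active/Active mode
chosen = nic_for_vm("00:50:56:00:00:01", nics)
assert chosen == nic_for_vm("00:50:56:00:00:01", nics)  # deterministic pinning
```

A round-robin or IP-hash policy would replace only the hashing step; the upstream switch configuration must be compatible with whichever policy is chosen.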
152
Two reasons why understanding NIC mode options is important
1. To ensure compatibility between the hypervisor and upstream switches. | 2. To understand bandwidth requirements.
153
Examples of system level traffic
1. Mobility | 2. Cluster Heartbeats
154
Five requirements / questions for system level traffic
1. Maximum latency allowed to perform an online migration of a VM from one server to another. 2. Is an isolated VLAN required for management traffic? 3. Is a dedicated management interface needed or desired? 4. If using the same interfaces, do you want to separate the types of data using VLANs? 5. How many VLANs can the hypervisor support?
155
Network requirements' impact to BC/DR
1. Requirements to use integrated recovery mechanisms. 2. Data replicated or copied for the hypervisor? 3. Sufficient bandwidth (LAN or SAN) to accommodate backup data?
156
Benefits of stateless hardware
1. Hardware identity stored externally. | 2. Swap failed hardware without reconfiguration.
157
Disadvantage of stateless hardware
More complex to manage
158
Benefit of traditional hardware
No external configuration required
159
Disadvantages of traditional hardware
Failed hardware requires reconfiguration (zoning, ACLs, etc.)
160
Benefits of commodity hardware
1. Inexpensive | 2. Simplified management
161
Disadvantage of commodity hardware
Not optimized
162
Benefit of proprietary hardware
Improved performance
163
Disadvantages of proprietary hardware
1. Cost | 2. Managing multiple platforms
164
Stateless hardware virtualizes the identity of what kinds of server components?
WWN, MAC Address, UUID, etc.
165
Allows the administrator to define pools of identities (WWN, MAC, UUID, etc.) within the management software.
Cisco Unified Computing System (UCS)
166
From what are identities allocated to physical servers when a profile is applied?
Pools of identities (WWN, MAC, UUID, etc.)
167
Simplifies the process of replacing failed hardware
Having virtual identities assigned to a server
168
Why does assigning virtual identities simplify the process of replacing failed hardware?
Since the new server will be assigned the same properties as the original, no reconfiguration (zoning, ACLs, etc.) needs to be performed to allow the new server to access the resources.
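The identity-pool mechanism in the cards above can be sketched briefly. This is an illustrative model, not UCS Manager code; the pool values and profile fields are made up, and the `RuntimeError` shows why pool capacities must be monitored.

```python
class IdentityPool:
    """Administrator-defined pool of identities (WWNs, MACs, UUIDs)."""
    def __init__(self, identities):
        self.free = list(identities)

    def allocate(self):
        if not self.free:
            # Exhausted pools are the risk that monitoring guards against.
            raise RuntimeError("identity pool exhausted")
        return self.free.pop(0)

mac_pool = IdentityPool(["00:25:B5:00:00:01", "00:25:B5:00:00:02"])
wwn_pool = IdentityPool(["20:00:00:25:B5:00:00:01"])

# Applying a profile binds pooled identities to whatever blade runs it;
# a replacement blade given the same profile inherits the same identities,
# so zoning and ACLs need no reconfiguration.
profile = {"mac": mac_pool.allocate(), "wwn": wwn_pool.allocate()}
```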
169
Why do pool capacities have to be monitored?
To ensure that the pools do not run out of resources
170
Risk if multiple pools are used
Overlapping identities, causing service interruptions
171
In the event of a failure in traditional hardware, what has to happen?
Either 1) hardware components must be swapped to maintain access, or 2) aspects of the environment must be reconfigured.
172
Why is traditional hardware simpler to manage?
The identity is physically imprinted on the hardware component.
173
Trade-offs to consider when deciding between commodity or proprietary hardware
1. Improved performance with a specific application or hypervisor versus cost of the hardware. 2. Associated cost of managing multiple hardware platforms.
174
Provides the greatest flexibility in deploying services across all resources in a Cloud environment
Commodity hardware
175
Can be managed with a single tool, giving a global view of the entire environment
Single hypervisor environment
176
May require multiple tools to manage the environment, providing a more limited view, as well as requiring administrators to understand multiple systems, configurations, etc.
Multi-hypervisor environment
177
Allows administrators to reallocate resources anywhere within the environment
Single hypervisor environment
178
Environment where unused or underused servers can be moved to a cluster that is approaching capacity or is experiencing performance issues.
Single hypervisor environment
179
Provides greatest flexibility in allocating resources
Single hypervisor environment
180
Impacts longer-term capacity planning, as multiple environments must be analyzed independently.
Multiple hypervisor environment
181
Runs a greater relative risk of having idle systems
Multi-hypervisor environment
182
Risks of a multi-hypervisor environment
Underlying hardware may not be compatible, and minimally, the hypervisor would need to be reinstalled or changed to match the target destination.
183
True or False: A single hypervisor is always able to run all of the required OSs within the data center.
False. | A single hypervisor may not be able to run all of the required OSs within the data center.
184
Why must each hypervisor platform be treated independently from a recovery perspective?
Not all hypervisors have the same options available.
185
True or False: Optimal performance might only be possible for a particular application if it is paired with a specific hypervisor.
True
186
Possible valid reason for deploying a second hypervisor
To achieve optimal performance for a specific application
187
True or False: If the eventual plan is to move to a hybrid or cloud environment, ensure that the hypervisors are compatible with the provider's systems.
True
188
Four top-level design considerations in determining whether to use a single hypervisor or multi-hypervisor environment
1. Management 2. Flexibility 3. BC/DR 4. Performance & Compatibility
189
Four performance and compatibility considerations for multi-hypervisor environment
1. Optimal performance for certain applications. 2. More complex capacity planning (multiple pools of resources). 3. Support for more OSs. 4. Interoperability with Cloud Provider.
190
Will generally cost less in terms of operational expenses
Single hypervisor environment
191
Is capital expense for a multi-hypervisor environment higher than that of a single hypervisor environment?
May or may not be higher, depending upon factors such as licensing, hardware costs, etc.
192
Limitations on flexibility in a multi-hypervisor environment
1. Resources cannot be allocated anywhere in the environment. 2. May inhibit migration to cloud
193
Predominantly deployed hypervisor for server virtualization
vSphere
194
Deployed on a vSphere platform
Linux-based applications, such as those based on Apache and Oracle
195
Deployed on a Hyper-V infrastructure to take advantage of integration between Microsoft products
Microsoft applications, such as Exchange, SQL Server, and SharePoint
196
Used by many companies for their VDI deployments
Microsoft Hyper-V or Citrix XenServer, along with XenDesktop and/or XenApp
197
True or False: XenDesktop and XenApp products have integration with both XenServer and Hyper-V System Center.
True
198
Two Maintenance Design Considerations when Sizing Servers
1. Software upgrades and patches | 2. Firmware and BIOS
199
Two Hardware Cost Design Considerations when Sizing Servers
1. Base server (chassis, CPUs, memory) | 2. I/O cards and switch ports
200
Three Licensing Design Considerations when Sizing Servers
1. Per server? 2. Per CPU? 3. Memory
201
Consolidation Design Considerations when Sizing Servers
Size of physical footprint, cooling, power
202
Simpler with fewer servers
1. Upgrading and patching hypervisors. | 2. Maintaining firmware and BIOS on servers and associated components.
203
True or False: If a hypervisor is licensed per physical host, then larger servers may be more cost effective.
True
204
Typically allow you to consolidate more physical nodes onto a single hypervisor.
Larger servers
205
Often less expensive than a comparable larger server when looking at base hardware cost (i.e., chassis, CPU, memory, etc.)
Two smaller servers
206
Also considered in server sizing hardware costs
Number of I/O cards
207
Each I/O card represents what?
An associated switch port, cable, optic, etc.
208
True or False: Each server requires a minimum of two of each I/O card (NIC, CNA, HBA).
True
209
Two scenarios where you might deploy multiple clusters
1. Environment where VDI has been deployed. | 2. Environment with mission critical applications.
210
Has a very different resource profile than typical server applications.
VDI
211
Can cause apps to become resource starved in a VDI environment
Co-locating virtual desktop instances with other applications
212
Advantage of separating the virtual desktops onto a separate cluster
Events, such as boot storms or antivirus scans, will not impact critical systems.
213
Factors determining whether or not multiple applications are placed into a single cluster or only one application is placed into a single cluster
1. Criticality of the application | 2. Resource profile of the application
214
Four design considerations for Hypervisor HA
1. Simpler to deploy. 2. Time to restart entire VM on same hypervisor or alternate hypervisor. 3. Option to configure hot standby. 4. Can it detect service failure?
215
Design considerations for OS Clusters on Virtual Operating Systems
1. More complex configuration 2. Time to restart services on another cluster node 3. Limited to maximum cluster nodes 4. Failback configuration
216
True or False: In general, hypervisor HA provides a level of redundancy very similar to what an OS cluster provided in a classic data center.
True
217
Design Considerations - Application Clusters on Virtual OSs
1. Ensure redundant copies are stored on different hypervisors. 2. Use Hypervisor HA in conjunction?
218
Restarting a failed VM on the same hypervisor or another hypervisor in the cluster.
Hypervisor HA
219
Key Hypervisor HA question
Can the hypervisor detect that a service, such as Microsoft Exchange, has failed on a VM and trigger a restart of the VM?
220
True or False: While most OSs have internal processes to restart failed services, they will typically do so only for a finite number of times before stopping.
True
221
Time to restart a VM on another node will vary based on a number of factors, such as?
1. Number of devices | 2. Utilization of the Hypervisor
222
Minimizes restart time of a failed VM
Hypervisor hot standby feature
223
Why is deploying an OS cluster within a hypervisor environment generally more complex?
Certain disk configurations are required for clusters.
224
Deploying services into a cluster usually requires what elements?
1. Additional IP addresses 2. DNS names 3. Other network components
225
Basic function of an OS cluster
Allowing services to float between the cluster nodes
226
True or False: An OS cluster will natively be able to detect a service failure and trigger a restart.
True
227
Is there a maximum number of cluster nodes that an OS can support?
Yes
228
Can multiple clusters be configured for services?
Yes
229
True or False: If the cluster is configured to fail the resource back immediately, then when the VM is restarted, the service will experience another outage to return to normal operating status.
True
230
Design consideration for application clusters, or applications which are configured with redundant components
Be sure that the redundant copies are stored on different hypervisors so that a server failure does not impact the entire application.
231
If an application has an automated failback mechanism, why might you not want to use hypervisor HA?
Use of hypervisor HA in this scenario could also cause service disruption during the failback process.
232
True or False: If failback can be controlled, then you may want to use Hypervisor HA or some other process to restart the VM so that if the second node fails, you do not lose the service entirely.
True
233
For what purpose are clusters typically used?
1. Fault tolerance 2. Load balancing (could be both for a given scenario)
234
A portion of the cluster capacity is reserved for what?
A failure event
235
Used to determine the percentage of capacity that needs to be left unused in a server cluster for failure situations
1. Number of nodes in the cluster. | 2. How many nodes you are willing to allow to be unavailable at any time.
236
Maximum server load for N+1 redundancy in a 4-node cluster
100%/4 = 25% | 100% - 25% = 75% maximum load
237
Maximum server load for N+2 redundancy in a 5-node cluster
(100% / 5) * 2 = 40% | 100% - 40% = 60% maximum load
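The reserved-capacity arithmetic on the two cards above generalizes to any N+M cluster. A minimal sketch (the function name and structure are illustrative, not from the source material):

```python
def max_cluster_load(nodes, spares):
    """Maximum usable fraction of an N+M cluster.

    Reserve the capacity of `spares` nodes so the surviving
    nodes can absorb the load of up to `spares` failed nodes.
    """
    if spares >= nodes:
        raise ValueError("spares must be fewer than cluster nodes")
    reserved = spares / nodes  # fraction of capacity held back
    return 1.0 - reserved

# N+1 on a 4-node cluster -> 0.75 (75% maximum load)
# N+2 on a 5-node cluster -> 0.60 (60% maximum load)
```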
238
Concern for critical applications
High Availability (HA)
239
Utilized in classic data centers to provide a failover option in the event that an application or server failed.
Application clusters
240
Operates in conjunction with vSphere Clusters, and when enabled, provides the capability to monitor the cluster environment for failures and automatically restart one or more VMs
VMware HA
241
Function of a vSphere HA Cluster at its most basic configuration
Automatically monitor all cluster members via both a network and datastore heartbeat.
242
Used to verify that the slave node is operational in the event that the network heartbeat between a slave node and the master node in a cluster is lost
Datastore heartbeat
243
If the datastore heartbeat has stopped (as well as the network heartbeat), then what is the situation?
The slave node is determined to have failed.
244
If the slave node is determined to have failed, what does the master node do?
Master node begins restarting the appropriate VMs on other nodes in the cluster.
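The master node's two-heartbeat decision described on the preceding cards can be sketched as simple logic. This is a hypothetical illustration of the decision flow only; vSphere's actual implementation is considerably more involved:

```python
def slave_state(network_hb, datastore_hb):
    """Classify a slave node from its two heartbeat channels.

    - Network heartbeat alive: node is healthy.
    - Network heartbeat lost but datastore heartbeat alive:
      node is isolated/partitioned, not failed, so no restart.
    - Both lost: node is declared failed, and the master begins
      restarting its VMs on other nodes in the cluster.
    """
    if network_hb:
        return "healthy"
    if datastore_hb:
        return "isolated"
    return "failed"
```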
245
Automatically restarts VMs from a failed hypervisor on another cluster node
VMware HA
246
Monitors VMware Tools in VM to ensure OS is operational
VMware HA
247
Can be extended to monitor applications
VMware HA
248
Requires shared storage for mobility within a cluster
VMware HA
249
Two levels of monitoring by vSphere HA Cluster
1. Node-level monitoring | 2. Heartbeats from the VMware Tools installed on a VM
250
What happens if VMware Tools is not generating network and I/O heartbeats?
The VM is reset (rebooted)
251
How can VMware HA be extended to monitor applications?
By using the appropriate SDK to configure customized heartbeats for specific applications (and then monitoring them in the same way as is done for a VM).
252
What must be configured at the basic infrastructure level in order to create a vSphere HA cluster (or any cluster)?
One or more shared datastores must be configured on each cluster node.
253
What do shared datastores on a cluster house?
1. VM virtual disks | 2. HA heartbeats
254
True or False: Cluster datastores can be either NFS or VMFS.
True
255
Provides a massively scalable, high-performance block storage system that provides active / active access to any storage resource.
EMC Symmetrix VMAX
256
VM allocation metrics
1. Memory (GB) | 2. CPU cores (total number)
257
Guarantees resources for applications but idle resources cannot be used by other VMs
Non-Overprovisioned Physical Resources
258
Lesser degree of consolidation
Non-Overprovisioned Physical Resources
259
Works well for applications that require consistent performance or have consistent resource requirements
Non-Overprovisioned Physical Resources
260
Can result in variable performance
Overprovisioned Physical Resources
261
Storage I/O often overlooked in this physical resource design scenario
Overprovisioned Physical Resources
262
Allows for a higher degree of consolidation
Overprovisioned Physical Resources
263
Works well for applications that have varying periods of processing bursts
Overprovisioned Physical Resources
264
Physical resource design scenario where application starvation is possible if there are simultaneous bursts
Overprovisioned Physical Resources
265
Why can an overprovisioned physical resource design result in variable performance?
If resources become constrained, the hypervisor will have to swap memory or time-slice CPU among the VMs.
266
Risks of overprovisioning a hypervisor
1. Possibility of causing resource contention | 2. Poor performance for applications
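The overprovisioning trade-off on these cards can be quantified as an overcommit ratio. A minimal sketch, assuming memory as the resource (the function name is illustrative, not a vendor API):

```python
def overcommit_ratio(allocated_gb, physical_gb):
    """Ratio of resources promised to VMs vs. physically present.

    A ratio > 1.0 means the hypervisor is overprovisioned: fine
    while VMs burst at different times, but simultaneous demand
    forces swapping/scheduling and performance becomes variable.
    """
    return sum(allocated_gb) / physical_gb

# Eight VMs allocated 8 GB each on a 48 GB host:
# 64 / 48 ~= 1.33, i.e. memory is overcommitted by a third.
```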
267
How is overprovisioning examined most times?
In terms of memory and CPU resources
268
Can be a significant bottleneck in a hypervisor environment if it is not designed correctly
Storage I/O
269
Can cause storage latency
Placing numerous high volume I/O VMs on a single volume
270
Degrades performance for the VMs stored on that volume, as well as that of all VMs on the hypervisor
Excessive storage latency
271
Benefits of overprovisioning a hypervisor
Allows you to achieve a higher rate of consolidation of physical servers, reducing OpEx (i.e., space, power, cooling, etc.) as well as CapEx (i.e., servers, wiring, racks, etc.)
272
Potentially allows a large portion of resources to remain idle.
Non-Overprovisioned Hypervisor
273
Used in situations where performance is of the utmost importance or where an application's resource needs are fairly consistent
Non-Overprovisioned Hypervisor
274
Very closely mirrors a classic environment
Non-Overprovisioned Hypervisor
275
True or False: In an environment where you do not overprovision physical resources, you are able to guarantee resources for all VMs / applications at all times.
True
276
Not ideal for "bursty" applications
Non-Overprovisioned Hypervisor
277
Replication Options
1. Local | 2. Remote
278
4 Local Replication Characteristics
1. High-speed 2. Synchronous 3. Protects against device / system failure 4. Layer 2 adjacency possible
279
4 Remote Replication Considerations
1. Variable speed 2. Synchronous or asynchronous 3. Protects against site failure 4. Layer 2 adjacency not always possible
280
Does replication protect against data corruption or deletion?
No
281
Before deciding what type of replication to use, if any, what must be done?
Must determine what you are trying to protect against. | Once you determine the scope of your DR planning, you can determine what type of replication makes sense.
282
4 Things to Protect Against
1. Power outage to a storage array. 2. Entire hypervisor cluster going offline. 3. Data center disaster. 4. Geographic disaster.
283
Can protect against a failure localized within a data center, or an entire data center if the replica is located in another building (or close by).
Local Replication
284
What is an issue where Layer 2 adjacency is not available?
When the recovery site is activated, the VMs must have their IP addresses changed, DNS entries will need to be modified, etc.
285
Provides the ability to recover from small OR large disasters.
Using both types of replication (local and remote).
286
Usual goal of remote replication
To protect against a data center or site failure, or even a geographic catastrophe, if the recovery site is far enough away.
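The replication cards above amount to a mapping from failure scope to replication type. A sketch of that mapping (the scope labels are illustrative, not from the source):

```python
# Which protection mechanism addresses which failure scope.
# Note the last entry: replication alone does not protect
# against corruption or deletion, which replicates too.
REPLICATION_FOR_SCOPE = {
    "device_or_system_failure": "local replication",
    "site_failure": "remote replication",
    "geographic_disaster": "remote replication (sufficiently distant site)",
    "data_corruption_or_deletion": "backups (replication does not help)",
}

def protection_for(scope):
    return REPLICATION_FOR_SCOPE[scope]
```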
287
Sizing the Recovery Site: Considerations for determining what needs to be recovered
1. Do you need a 1:1 recovery? (recover all operating VMs) 2. Do you only recover specific VMs? (reduces amount of physical hardware required, but increases complexity, especially as VMs are added or retired)
288
Sizing the Recovery Site: Considerations for Overprovisioning
Do you overprovision hypervisors at the recovery site and accept degraded performance for a period of time?
289
Sizing the Recovery Site: Considerations for Online Migration over Distance
Online migration can recover from hypervisor failure (with downtime).
290
Sizing the Recovery Site: Considerations for Stretched Clusters
Stretched clusters address the infrastructure components. | Full cluster capacity unavailable at recovery site.
291
Traditional Backup Approaches
1. VM backed up like physical host OR 2. VMs backed up as flat files
292
Hypervisor Backup
VM backed up via hypervisor (image-based)
293
Storage Backup Approaches
Array backs up LUN or file system
294
4 Advantages of Traditional (Backup VM as a Physical Server (Install backup agent on VM))
1. Same management model as non-virtualized environment. 2. Best integration with application agents - application consistent backups. 3. Can restore specific file(s). 4. Can back up physical LUNs.
295
4 Disadvantages of Traditional (Backup VM as a Physical Server (Install backup agent on VM))
1. Requires agent on each VM. 2. Cumbersome to manage. 3. Does not back up VM configuration. 4. Multiple backups running simultaneously can degrade performance.
296
3 Advantages of Traditional (Backup VM as flat files)
1. Simple to implement. 2. Back up VM config files. 3. Only need to install agent on hypervisor.
297
4 Disadvantages of Traditional (Backup VM as flat files)
1. Backups not application or crash consistent. 2. Mobile VMs introduce complexity. 3. Can only restore an entire VM. 4. Cannot back up a physical LUN.
298
Allows the best granularity for restoring files, as well as providing a mechanism to back up any physical devices.
Backup VM as a physical server.
299
Most closely resembles a physical environment.
Backup VM as a physical server.
300
Backup method with the capability to integrate with applications, to provide application consistent backups.
Backup VM as a physical server.
301
Backup approach where an agent must be installed on each VM.
Backup VM as a physical server.
302
Can be difficult to manage in a virtual environment, where the number of VMs can grow quickly.
Backup VM as a physical server.
303
True or False: In the Backup VM as a physical server approach, since the VM is treated as a physical server, the VM configuration is not preserved as part of the backup.
True
304
Approach allows the hypervisor to back up the VMs as a set of flat files, essentially treating the hypervisor as the host and the VMs as simple data.
Backup VM as flat files
305
Requires a backup agent only on the hypervisor
Backup VM as flat files
306
No way to recover anything other than the entire VM.
Backup VM as flat files
307
Any physical devices that are presented directly to the VM cannot be backed up
Backup VM as flat files
308
Two advantages of Hypervisor-based (Backup VM using a snapshot) / Image Based
1. Tightly integrated with hypervisor. | 2. Offloads backup processing to backup server.
309
Disadvantage of Hypervisor-based (Backup VM using a snapshot) / Image Based
May not be possible to restore individual files.
310
Two disadvantages of Storage-based (Backup VM using a snapshot or clones)
1. Can only restore a LUN / Filesystem. | 2. Physical LUNs are backed up separately.
311
Advantage of Storage-based (Backup VM using a snapshot or clones)
No overhead on the hypervisor.
312
If you need to have application consistent backups, you should use what backup approach?
Traditional backup with an application agent installed on each VM.
313
If you need to have crash consistent backups, you should use what backup approach?
Hypervisor-based (image-based) backup
314
If you need to restore individual files, you should use what backup approach?
1. Traditional backup with an agent installed on each VM. | 2. Hypervisor-based (image-based) backup if the software supports it for the VM OS.
315
If you want a minimal backup environment, you should use what backup approach?
Array based snapshots or clones and recover the VM to an alternate hypervisor.
316
If you need to backup some VMs with application consistency and others with crash consistency, you should use what backup approach?
1. Traditional backup with an application agent installed on each VM that needs application consistency. 2. Hypervisor-based (image-based backup) for all other VMs.
317
If you need to back up VMs with static information, you should use what backup approach?
Use hypervisor tools (snapshots/clones) to recreate VMs instead of backing up.
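The backup-selection cards above amount to a small decision table mapping a requirement to an approach. A sketch, with illustrative requirement labels (not from the source):

```python
# Requirement -> backup approach, per the cards above.
BACKUP_CHOICES = {
    "application_consistent": "traditional (application agent in each VM)",
    "crash_consistent": "hypervisor-based (image-based)",
    "individual_file_restore": "traditional, or image-based if supported for the VM OS",
    "minimal_backup_environment": "array snapshots/clones, recover to alternate hypervisor",
    "static_vms": "recreate from hypervisor snapshots/clones instead of backing up",
}

def choose_backup(requirement):
    """Look up the recommended approach for a single requirement.

    Mixed environments combine entries, e.g. traditional agents
    on application-consistent VMs and image-based for the rest.
    """
    return BACKUP_CHOICES[requirement]
```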
318
Optimal hypervisor for Exchange Email environment
Microsoft Hyper-V
319
Optimal hypervisor for use with Oracle applications
VMware vSphere 5
320
Cluster Design Criteria
Use two clusters for the email application (Tier 1), and another cluster for the Tier 2 and 3 applications (A/R, A/P, & Expense Management).
321
Failure Capacity Design Criteria
Provide N+1 scale at the primary site (Site 1), by leaving 25% of the four-node cluster unused.
322
Design Criteria for Recovery Site Sizing
Because we are not recovering Tier 3 applications (Expense Management), we can reduce the number of hypervisors required at Site 2. Only deploying two hypervisors there reduces the CapEx investment, but also means that the Tier 2 applications may not run optimally while in a failed over state.
323
Example of taking advantage of licensing
Take advantage of Windows 2008 Datacenter licensing by purchasing only 4 licenses, and use them to support the 4 Hyper-V servers as well as all of the virtual Exchange Servers.
324
Design Criteria for Application Clusters
Cluster the Microsoft Exchange Servers using DAGs, which provide the mechanism to restore processing on a standby node in the event of a failure. The hypervisor will address VM failure.
325
Virtual OS for Tier 2 and Tier 3 apps
RHEL 6
326
DAG
Database Availability Group
327
Disaster recovery approach for Tier 1 applications
Application replication | Application failover
328
Cluster model for Tier 1 application disaster recovery
Active at both sites, 50% unused
329
Cluster model for Tier 2 application disaster recovery
Active at site 1; 25% unused
330
Cluster model for Tier 3 application disaster recovery
None
331
What is a disadvantage of backing up VMs as flat files? a. Requires an agent on each VM. b. Does not backup VM configuration. c. Not application or crash consistent. d. Can only restore an entire LUN
c. Not application or crash consistent
332
What type of security threat can resource limitations help mitigate? a. Viruses b. Denial of Service c. Data Corruption d. Black Hole
b. Denial of Service
333
What feature is used in a stretched cluster and not in an environment that uses online migration over distance? a. Cross-site storage access b. Layer 2 adjacency c. Bandwidth d. Mirrored storage
d. Mirrored storage
334
What is a drawback to deploying multiple hypervisors? a. Limited flexibility in allocating resources. b. Use of a single management tool. c. Simplified BC/DR plans. d. Decreased operational costs.
a. Limited flexibility in allocating resources
335
What is a consequence of deploying multiple clusters? a. Decreased hardware costs b. Reduced security c. More complex capacity planning d. Lower performance
c. More complex capacity planning
336
Enables evaluation of different hypervisor platforms and features
The application requirements established during the discovery phase
337
Required to make an educated decision about which hypervisor(s) to deploy in the VDC
Evaluation of different hypervisor platforms and features in the context of the applications to be supported
338
Inputs to the VDC's Compute Layer design
1. Application requirements | 2. Hypervisor capabilities
339
Provide a set of requirements for both the Storage and Network layers
1. Application requirements | 2. Hypervisor capabilities
340
What happens after the hypervisor platform is chosen?
You could proceed to design the Compute Layer of the VDC, and create a set of requirements for the Storage and Network Layers.
341
Three compute (hypervisor evaluation) criteria important to a successful VDC implementation
1. BC/DR 2. Licensing Cost 3. Separating Workloads
342
Advantage of separating workloads
Prevent resource contention