Wednesday, 1 October 2025

Desktop Virtualisation

Desktop Virtualization (DV)

Desktop Virtualization (DV) is a technique that presents the user with the illusion of a local desktop. It delivers either a single application or an entire desktop environment to a remote client device.

  • Function: It's a software technology used to separate the desktop and its connected application software from the physical device used by the client.

  • Mechanism: It works similarly to a client-server model: applications execute on a remote desktop (which may run a different operating system), and the user interacts with them via a remote display protocol.

  • Key Advantage: Unlike traditional desktop administration, which is often expensive and time-consuming, DV allows IT staff to alter, update, and organize the desktop elements independently of the hardware, leading to greater business agility and faster response times.

Components for Desktop Virtualization (VDI Architecture)

The DV architecture often follows the Virtual Desktop Infrastructure (VDI) model and consists of several components working together for an end-to-end solution:

  1. Endpoint Devices: The physical hardware (like laptops, thin clients, or tablets) used by the user to access the remote desktop.

  2. Connection Broker: The software component that authenticates the user and connects them to the appropriate available virtual desktop/VM Host.

  3. VM Hosting: The remote servers (in the data center/cloud) where the virtual desktops (Virtual Machines) are hosted and executed.
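
To make the broker's role concrete, here is a minimal Python sketch of the authenticate-then-assign flow. The class and method names are invented for illustration; a production broker (e.g., in Citrix or VMware Horizon) would also handle load balancing, session reconnection, and protocol negotiation.

    # Hypothetical connection-broker sketch (illustrative names only).
    class VirtualDesktop:
        def __init__(self, vm_id, host):
            self.vm_id = vm_id        # identifier of the VM in the data center
            self.host = host          # VM host the desktop runs on
            self.assigned_to = None   # user currently connected, if any

    class ConnectionBroker:
        def __init__(self, desktops, credentials):
            self.desktops = desktops        # pool of available virtual desktops
            self.credentials = credentials  # stand-in for a real directory service

        def connect(self, user, password):
            # Step 1: authenticate the user.
            if self.credentials.get(user) != password:
                raise PermissionError("authentication failed")
            # Step 2: hand out a free virtual desktop from the pool.
            for d in self.desktops:
                if d.assigned_to is None:
                    d.assigned_to = user
                    # A real endpoint would now open an RDP session to d.host.
                    return d
            raise RuntimeError("no virtual desktop available")

    broker = ConnectionBroker(
        [VirtualDesktop("vm-01", "host-a"), VirtualDesktop("vm-02", "host-a")],
        {"alice": "secret"},
    )
    desktop = broker.connect("alice", "secret")
    print(desktop.vm_id, "on", desktop.host)   # vm-01 on host-a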

Techniques Used for Desktop Virtualization

There are three typical techniques used to deliver a virtual desktop experience:

  1. Remote Desktop Services (RDS): Traditionally from Microsoft, this allows multiple users to share a single server's OS instance, each getting a separate session-based desktop.

  2. Virtual Desktop Infrastructure (VDI): This is where each user is provided with a dedicated virtual machine (VM) running its own operating system from the data center.

  3. Desktop-as-a-Service (DaaS): A cloud service model where a third-party provider hosts the entire VDI infrastructure and streams the desktop to the user over the internet.

1. Remote Desktop Services (RDS)

RDS, formerly known as Microsoft Terminal Server, delivers session-based desktops and applications to many users and is a cost-effective desktop virtualization option.

Key Characteristics:

  • It is traditionally known as Terminal Services.

  • It allows users to remotely access Windows applications and graphical desktops.

  • It is also known as Remote Desktop Session Host (RDSH).

  • Applications and desktop images are served via the Microsoft Remote Desktop Protocol (RDP).

  • Cost-Effectiveness: Since a single instance of Windows Server OS can support as many simultaneous users as the server hardware can handle, RDS is a more cost-effective choice than VDI (a rough sizing sketch follows below).
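
A rough back-of-envelope comparison makes this concrete. All figures below are illustrative assumptions, not vendor sizing guidance:

    # Assumed numbers: one 256 GB host, ~2 GB per RDS session, ~6 GB per VDI VM.
    host_ram_gb = 256

    # RDS: one shared Windows Server OS instance; sessions share the remainder.
    os_overhead_gb = 16
    ram_per_session_gb = 2
    rds_users = (host_ram_gb - os_overhead_gb) // ram_per_session_gb

    # VDI: one dedicated Windows 10 VM (with its own OS) per user.
    ram_per_vm_gb = 6
    vdi_users = host_ram_gb // ram_per_vm_gb

    print(f"RDS sessions per host: {rds_users}")   # 120
    print(f"VDI desktops per host: {vdi_users}")   # 42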

Advantages of RDS:

  • Data recovery after a disaster (Disaster Recovery).

  • Operation from anywhere (Remote Access).

  • Economical (Cost-effective).

Disadvantages of RDS:

  • Requirement of powerful RDS server hardware.

  • Requirement of RDS monitoring tools.

  • Requirement of a reliable network.


2. Virtual Desktop Infrastructure (VDI)

VDI refers to the hosting of a desktop OS running in a Virtual Machine (VM) on a server in the virtual data center (VDC).

Key Characteristics:

  • VDI allows a user to access a remote desktop environment from an endpoint device via a remote desktop delivery protocol.

  • The desktop image travels over the network to the end user's device, making it seem as if the user is interacting with a local machine.

  • Dedicated Resources: VDI gives each user their own dedicated VM running its own operating system.

  • Resource Management: A hypervisor layer manages the resource allocation (drivers, CPUs, memory, etc.) to multiple VMs and ensures they run side by side on the same server.

  • Windows 10 Delivery: A key benefit is its ability to deliver the standard Windows 10 desktop and operating system to the end user's devices. However, because VDI supports only one user per Windows 10 instance, it is generally less cost-effective than RDS.

Advantages of VDI:

  • Lower spending on desktop computers (since users can use cheaper thin clients/endpoint devices).

  • Centralized client operating system management.

  • Reduction in the costs of desktop and electricity.

  • Enhanced security of data and protected remote access.

  • Fewer application compatibility problems.

Disadvantages of VDI:

  • Printing normally requires third-party add-ons (extra steps or software).

  • Scanning is natively unsupported or difficult.

  • Bi-directional audio is natively unsupported (microphone/speaker issues).

  • Display protocols are unsuitable for graphic design or other graphics-intensive applications.

  • Needs a low-latency connection between the virtual infrastructure and the client (requires excellent network speed).

  • Needs enterprise-class server hardware and a storage area network for VMs permanently assigned to particular users.

  • Needs trained IT staff.

Comparison of RDS and VDI:

  1. RDS: Separate virtual machines are not provided to the user. VDI: Separate virtual machines are provided to the users (one VM per user).

  2. RDS: Multiple operating system instances need not be managed (all users share one server OS). VDI: Multiple operating system instances need to be managed (one OS per VM).

  3. RDS: Users share the same virtual machine and operating system. VDI: Resources need not be shared (each user gets dedicated VM resources).

  4. RDS: Full administration is not provided to the users, because many instances of the same resources are shared. VDI: The user gets full administration over their resources.

  5. RDS: Lower utilization of CPU, memory, etc. (more efficient per user). VDI: Higher resource utilization (less efficient per user due to dedicated resources).
Benefits of Desktop Virtualization

  • Cost Savings:

    • Allows IT budgets to shift from high Capital Expenditures (CapEx) (buying new hardware) to predictable Operating Expenditures (OpEx) (like a regular usage-based charge for DaaS).

    • Extends the life of older or less powerful end-user devices (thin clients, etc.) because the intensive processing is done remotely on the data center VMs.

  • Improved Productivity:

    • Makes it easier for employees to access enterprise computing resources.

    • Allows users to work anytime, anywhere, from any supported device with just an Internet connection.

  • Support for a Broad Variety of Device Types:

    • Supports remote desktop access from a wide range of devices, including laptops, thin clients, tablets, and even mobile phones.

    • Delivers a consistent desktop experience regardless of the operating system native to the end-user device.

  • Stronger Security:

    • The actual desktop image and data are separated and abstracted from the physical hardware used to access it.

    • The VM used to deliver the desktop is hosted in a data center, which is a tightly controlled environment managed by the IT department.

  • Agility and Scalability:

    • It is quick and easy to deploy new desktops or serve new applications whenever needed.

    • It is just as easy to delete them when they are no longer required, making the infrastructure highly responsive to business needs.

  • Better End-user Experiences:

    • Users can enjoy a feature-rich experience without sacrificing necessary functionality they rely on, such as printing or access to USB ports (though these features may require additional configuration, as noted in previous sections).

Types of Virtualization



1. Types of Virtualization
Two main virtualization techniques extend the base concept:

Para-Virtualization:

In this model, simulation overhead is reduced because the guest operating system (OS) is modified to communicate directly with the Hypervisor (the virtualization software).

This modification improves overall performance compared to Full Virtualization.

Full Virtualization:

This model fully emulates the underlying hardware. It is more complex than Para-Virtualization.

The guest operating system is unmodified and runs directly on top of the hypervisor.

The hypervisor intercepts and manages machine operations (like I/O) and returns the status codes, making it seem like the guest OS is running on physical hardware.
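
The contrast can be sketched in a few lines of Python. This is purely conceptual and the names are invented: the unmodified guest's privileged operation is trapped and emulated, while the para-virtualized guest invokes the hypervisor directly.

    # Conceptual sketch: trap-and-emulate vs. hypercall (illustrative only).
    class Hypervisor:
        def emulate_io(self, op):
            # Full virtualization: intercept the privileged instruction,
            # emulate the device, and return a status code, so the
            # unmodified guest believes it touched real hardware.
            print(f"[trap] emulating '{op}' on a virtual device")
            return "OK"

        def hypercall_io(self, op):
            # Para-virtualization: the modified guest calls the hypervisor
            # directly, skipping the costly trap/decode/emulate path.
            print(f"[hypercall] servicing '{op}' directly")
            return "OK"

    class UnmodifiedGuest:               # full virtualization
        def write_disk(self, hv):
            return hv.emulate_io("disk write")

    class ParavirtualizedGuest:          # para-virtualization (hypervisor-aware)
        def write_disk(self, hv):
            return hv.hypercall_io("disk write")

    hv = Hypervisor()
    UnmodifiedGuest().write_disk(hv)
    ParavirtualizedGuest().write_disk(hv)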

2. Benefits and Limitations of Server Virtualization
Benefits of Server Virtualization
Cost Reduction: Reduces hardware requirements, which leads to lower costs.

Isolation: Each virtual server can be rebooted independently without affecting the operation of other virtual servers on the same physical machine.

Consolidation: Supports live migration and server consolidation, maximizing hardware usage.

Disaster Recovery: Facilitates easier backup and recovery from disasters.

Simplified Maintenance: Makes it easier to install or set up software patches and updates.

Limitations of Server Virtualization
Availability and Resource Consumption: Potential issues with resource consumption and guaranteed availability due to overcommits (allocating more virtual resources than physically exist); see the sketch after this list.

Upfront Costs: Significant initial costs related to the virtualization software and network setup.

Licensing: Complexity and cost associated with software licensing for the virtual environments.

Steep Learning Curve: IT staff requires specialized training and experience in virtualization management.

Security: Security can be a concern, especially if multiple virtual servers belonging to different tenants or functions share the same physical server.
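
The overcommit risk noted under availability above can be made concrete with a small, purely illustrative calculation:

    # Overcommit: more virtual RAM is promised than physically exists.
    physical_ram_gb = 128
    vm_allocations_gb = [32, 32, 32, 32, 16]   # five VMs, 144 GB promised

    overcommit_ratio = sum(vm_allocations_gb) / physical_ram_gb
    print(f"overcommit ratio: {overcommit_ratio:.2f}x")   # 1.12x

    if overcommit_ratio > 1:
        print("allocated > physical: fine on average, risky if VMs peak together")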

3. Network Virtualization
Network Virtualization (NetV) is the core technology that builds the connectivity fabric for cloud storage and computing.

Concept: It's similar to Server Virtualization but applies to the network. Instead of dividing a physical server, network resources (bandwidth, channels, switches, etc.) are divided among multiple virtual networks.

Purpose: It is used in multi-tenant data centers where each tenant needs its own isolated virtual network.

How it Works (Tunneling): A common way to isolate virtual networks is by providing a special label within each data frame that identifies the virtual network it belongs to. This labeling and forwarding is called network tunneling.

Definition: The method of splitting up network resources into separate bandwidth channels that are isolated and independent of one another, which can then be assigned and reassigned to different services or servers.

Goal: To optimize the speed, reliability, and flexibility of the network.
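
The tunneling idea described above can be shown with a toy Python model. The frame format and network IDs are invented stand-ins for real mechanisms such as VLAN tags or VXLAN network identifiers:

    # Toy model: each frame carries a label (virtual network ID) and the
    # fabric uses that label to keep tenants' traffic apart.
    def encapsulate(vni, payload):
        return {"vni": vni, "payload": payload}   # label inside the frame

    def forward(frame, tenant_networks):
        # Deliver the frame only to the virtual network its label names.
        tenant_networks[frame["vni"]].append(frame["payload"])

    tenant_networks = {100: [], 200: []}          # two isolated virtual networks
    forward(encapsulate(100, "tenant-A packet"), tenant_networks)
    forward(encapsulate(200, "tenant-B packet"), tenant_networks)

    print(tenant_networks[100])   # ['tenant-A packet']; A never sees B's traffic
    print(tenant_networks[200])   # ['tenant-B packet']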

Types of Network Virtualization

External Network Virtualization:
Combines multiple networks or parts of networks into a single virtual unit.

A key goal is to improve the efficiency of a large network/data center.

Its two main components are the Virtual Local Area Network (VLAN) and the network switch.

System administrators use these to configure systems that are physically attached to the same local network into many different virtual networks.

1. Internal Virtualization

Internal virtualization (also called network in a box) uses software containers to mimic or provide the functionality of a single physical machine.

  • It's a network virtualization confined to a single system.

  • It improves the overall efficiency of a single system by isolating the separate virtual environments and allowing them to communicate over a virtual network interface.

  • This type is commonly seen on workstation versions of VMware and similar platforms.


2. Basic Components of Virtual Networks

Virtual networks have three fundamental components:

  1. Network hardware: Includes network interface cards, virtual switches, and VLANs.

  2. Network storage devices: The devices where the data is actually stored.

  3. Network media: The physical cabling used, usually Ethernet or Fibre Channel.


3. Architecture of Network Virtualization

As illustrated in Fig. 3.5: Network Virtualization, multiple virtual networks run on a single physical network.

  • Process: Network virtualization comprises logically grouping and segmenting physical network(s) into distinct logical units known as 'virtual networks' and forming them to act as one or multiple separate networks.

  • Resource Sharing: It allows multiple virtual networks to share the underlying physical network resources (routers, hubs, switches, etc.) within the Virtual Data Center (VDC).

  • VM Network: A virtual network can exist entirely within a physical server: the hypervisor runs on the host machine, and each virtual machine is considered a guest machine.

  • Goal: It allows the construction of multiple virtual networks in the Data Center (DC) while ensuring all nodes belonging to a single working unit in an enterprise are aligned.


4. Benefits of Network Virtualization

  1. Reduced Hardware and Power Consumption: Network virtualization reduces the amount of physical network hardware required, leading to a corresponding decrease in power consumption in the office space.

  2. Automated Management: It allows for the easy and automatic administration of network security and protocols, ensuring they are consistently applied across the entire virtualized network infrastructure.

  3. Simplified Provisioning and Troubleshooting:

    • Network Provisioning (the delivery of new services to network users) is greatly simplified in a virtual environment.

    • Troubleshooting is also easier because the management and control of the entire virtual network are consolidated in a single physical location.

  4. Improved Scalability:

    • Network virtualization provides a quick and easily scalable solution.

    • It removes the IT infrastructure as a major barrier to business growth, allowing organizations to respond to market demands with agility.


Disadvantages of Network Virtualization

While powerful, Network Virtualization has several limitations that an organization must consider:

  1. Increased Upfront Costs: There is a significant initial cost associated with investing in virtualization software.

  2. Need to License Software: Organizations must deal with the complexity and cost of licensing the virtualization software.

  3. Steep Learning Curve: There may be a substantial learning curve if IT managers and staff are not already experienced in virtualization technology.

  4. Application Incompatibility: Not every application and server is guaranteed to work flawlessly in a virtualized environment. Compatibility issues can arise.

  5. Availability Issues: Availability (the ability to access the data) can be a critical issue if an organization faces network problems and cannot connect to its virtualized data, highlighting the dependency on network connectivity.

Virtual Data Center (VDC) for Cloud Storage

 

Benefits of Virtual Data Center (VDC) for Cloud Storage

The Virtual Data Center (VDC) architecture, which utilizes virtualization for centralized services and consolidation (as shown in Figure 3.4), provides the following ten major benefits that make cloud storage a superior solution to traditional physical storage:

  1. Reduction of Costs: Virtualization significantly cuts down management costs and eliminates expensive outlays for the purchase, maintenance, and replacement of dedicated physical IT hardware.

  2. Simplified Management: Virtual servers allow IT infrastructure management to be done remotely, easily, and in real-time, simplifying overall administration.

  3. Optimization of Resources: Virtualization maximizes the use of both hardware and virtual resources, making the system more flexible and performant.

  4. Pay-per-use Model: This model enables users to pay only for the resources they actually consume, which drastically reduces waste and saves money.

  5. Lower Consumption: Virtual hardware requires less power than physical hardware, reducing the data center's energy consumption, environmental impact, and running costs.

  6. Security of Facilities and Data: Cloud providers, relying on a specialized facility, can offer high-level security and advanced data protection, including compliance with disaster recovery and business continuity systems, which is often superior to on-premise security.

  7. High Efficiency: Virtual machines make the IT infrastructure more agile and improve operational efficiency.

  8. Integration with Managed Services: VDCs allow organizations to easily rely on an external provider to manage their IT services, enabling them to focus on their core business.

  9. The Latest Technology: Customers always get access to the newest, most cost-effective technology, as the VDC provider is responsible for continuously replacing and upgrading the underlying physical equipment.

  10. Availability and Scalability: Virtual machines enable superior scalability (ability to handle growing workloads) and availability (ensuring access) of resources compared to physical equipment.

1. VDC Environment (Virtual Data Center)

A Virtual Data Center (VDC) is the evolution of the classic data center. It is the logical infrastructure that powers cloud services.

  • Core Elements: The classic data center consists of elements like host, storage, connectivity (network), applications, and DBMS (Database Management System).

  • Virtualization and Consolidation: In a VDC, these physical resources are pooled and provided as virtual resources using software.

  • Abstraction: This process of abstraction hides the complexity and limitation of physical resources from the user.

  • Benefit of VDC: By consolidating IT resources, organizations can optimize their infrastructure and reduce the total cost of owning an infrastructure.

  • Deployment: Virtual resources are created using software, enabling faster deployment compared to deploying physical resources.


2. Server Virtualization and Benefits

Server Virtualization is a technique that partitions a single physical server into a number of smaller, isolated virtual servers using specialized virtualization software (like a Hypervisor).

Server Virtualization Definition

  • It involves running multiple operating system instances on a single physical server at the same time.

  • It is used in cloud computing to create a virtual version of a device, server, storage, network, or operating system, where the framework divides the resources into one or more execution environments.

  • It improves resource utilization by moving workloads from many underutilized servers onto a smaller number of powerful servers.

Uses of Server Virtualization

The practical applications of server virtualization include:

  1. To centralize the server administration.

  2. To improve the availability of server resources and services.

  3. Helps in disaster recovery by making it easier and faster to restore virtual machine images.

  4. Ease in development and testing by quickly provisioning new, isolated environments.

  5. Make efficient use of server resources by significantly increasing resource utilization and reducing the number of idle servers.

Server Virtualization Techniques

The key component in server virtualization is the Hypervisor.

  • Hypervisor: A hypervisor is a software layer between the operating system and the hardware.

    • It manages and keeps requests separate from multiple operating systems (Guests) running on the same physical machine (Host).

    • It is responsible for critical tasks like handling queues, dispatching, and returning hardware requests.

    • The OS that runs on top of the hypervisor is used to administer and manage the various virtual machines.

Cloud Storage Characteristics

Cloud Storage Characteristics (NIST Model)

The essential characteristics of cloud computing, which directly apply to cloud storage, are defined by the National Institute of Standards and Technology (NIST):

  1. On-Demand Self-Service: Users can unilaterally provision (add) and de-provision (remove) storage capabilities, such as server time and network storage, automatically without requiring human interaction with the service provider.

  2. Broad Network Access: The storage services are available over the network and accessed through standard mechanisms (like web browsers or mobile apps) that enable use by heterogeneous client platforms (e.g., mobile phones, tablets, laptops).

  3. Resource Pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The user generally doesn't know the exact location of the stored data but can often specify the region.

  4. Rapid Elasticity: Storage capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited. This allows users to scale up or down as needed.

  5. Measured Service (Pay-per-use): Resource usage is monitored, controlled, and reported, providing transparency for both the provider and the consumer. Storage is typically charged based on consumption, such as the amount of storage used, data transfer volume, and number of read/write operations.
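
The measured-service characteristic is easiest to see as a worked example. The unit prices below are invented for illustration and are not any real provider's tariff:

    # Pay-per-use: a monthly bill computed from metered usage (made-up prices).
    usage = {
        "storage_gb_month": 500,     # average data stored
        "egress_gb": 40,             # data transferred out
        "requests_thousands": 120,   # read/write operations
    }
    price = {                        # assumed unit prices in USD
        "storage_gb_month": 0.023,
        "egress_gb": 0.09,
        "requests_thousands": 0.005,
    }

    bill = sum(usage[k] * price[k] for k in usage)
    print(f"monthly charge: ${bill:.2f}")   # pay only for what was consumed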


Advantages of Cloud Storage

  • Accessibility and Convenience: Data can be accessed anytime, anywhere using any internet-connected device (laptop, smartphone, tablet). This facilitates remote work and learning.

  • Cost Efficiency: It eliminates the Capital Expenditure (CAPEX) of buying and maintaining physical hardware (hard drives, servers). You pay a predictable Operational Expenditure (OPEX) based on usage (pay-as-you-go model).

  • Scalability and Elasticity: Storage capacity can be increased or decreased almost instantly and virtually without limit, matching the exact requirement of the user or organization without manual intervention.

  • Data Backup and Disaster Recovery: Cloud providers replicate data across multiple redundant servers and data centers. This ensures high durability and makes data loss due to local hardware failure or disaster highly unlikely.

  • Easy Collaboration: Sharing files and collaborating on documents in real time is simplified, as multiple users can access the same up-to-date files simultaneously.

  • Security (Provider Side): Major providers invest heavily in enterprise-grade security measures like encryption (data in transit and at rest), threat monitoring, and robust physical data center security.

Disadvantages of Cloud Storage
  • Internet Dependency: Accessing and syncing files requires a reliable internet connection. Poor or no connectivity means limited or no access to your data.

  • Security and Privacy Concerns (User Side): Entrusting sensitive data to a third-party vendor raises concerns. Users must rely on the provider's security measures and contract terms. Data breaches or unauthorized access are potential risks.

  • Limited Control and Customization: The user has less control over the underlying infrastructure, operating system, and data management policies compared to on-premises storage. The setup is dictated by the vendor.

  • Vendor Lock-in: Migrating large volumes of data from one cloud provider to another can be complex, time-consuming, and costly, creating a reliance on the current vendor.

  • Evolving Costs (Long-Term): While initially cheap, costs can accumulate over time, especially for high usage (large storage capacity or frequent data retrieval/transfer, known as "egress" costs).

  • Downtime Risk: Although rare, cloud service providers can experience service outages or technical issues, temporarily disrupting access to the stored data.

Cloud Storage, Monitoring and Management

Traditional Data Center Architecture

The architecture is made up of a few core systems that work together to keep everything running.

• Compute: This is the brain of the data center. It consists of racks of physical servers that provide the processing power to run applications and services. These servers are typically single-purpose and managed individually.

• Storage: This is where all the data lives. In a traditional setup, data is stored on dedicated storage devices and systems, often in the form of a Storage Area Network (SAN) or Network Attached Storage (NAS).

• SAN is a high-speed network that provides access to shared block-level storage.

• NAS is a file-level storage device that makes data available to network users.

• Network: The network is the nervous system, connecting all the servers and storage devices to each other and to the outside world. This is typically a three-tier architecture that uses a hierarchical design.

Three-Tier Network Architecture

This is the most common network model in traditional data centers. It's a layered approach that's easy to understand and manage, but can create bottlenecks.

• Access Layer: This is the lowest layer, where all the servers are physically connected. The switches in this layer, often called top-of-rack (ToR) switches, link to the servers within their specific racks.

• Aggregation (or Distribution) Layer: This layer aggregates traffic from the access layer switches and provides connectivity between them. It acts as a bridge and is responsible for policy-based routing and other network services.

• Core Layer: This is the backbone of the network. It provides high-speed, high-capacity connectivity to the outside world and interconnects all the aggregation layer switches. It's designed for speed and is a central point for all data flow.

Challenges of Traditional Architecture

Traditional data centers are reliable and secure, but they have some key drawbacks, especially when compared to modern cloud-based solutions.

• Scalability: Scaling up means buying and installing more physical hardware, which is a slow and expensive process. You have to physically add more servers, storage, and networking equipment.

• Cost: The initial investment is high due to the cost of hardware, power, cooling, and the physical space itself.

• Inefficiency: The hardware is often underutilized. Since resources are not easily shared or reallocated, a server might be sitting idle while another is at full capacity.

• Complexity: Managing a large number of separate, physical components requires a lot of manual effort and can be prone to errors.

What is a Virtualized Data Center? 🏢
A virtualized data center is a modern approach that uses software to create "virtual" versions of the hardware components (servers, storage, and networking) found in a traditional data center. Instead of running on a single, dedicated physical machine, many virtual machines (VMs) can run on a single piece of physical hardware. This is a game-changer because it allows you to use your resources much more efficiently.

Key Components and How They Work
Instead of the physical three-tier architecture of traditional data centers, a virtualized data center is organized around a centralized services model, which is managed through a software layer.
• Virtualization Layer (Hypervisor): This is the most crucial part. The hypervisor is a software that sits on top of the physical hardware. Its job is to separate the physical resources (CPU, RAM, storage) from the virtual machines. It allows you to create multiple isolated virtual machines on a single physical server, each with its own operating system and applications. Think of it like a landlord who divides a large building (the physical server) into many separate apartments (the VMs), each with its own utilities, and manages them all from one central office.
• Virtual Compute (Virtual Machines - VMs): These are the virtual servers created by the hypervisor. They have their own virtual CPUs, memory, and storage, and act just like a physical server. The main advantage is that you can create, move, and delete them in minutes using a central management tool, unlike a physical server which takes hours or days.
• Virtual Storage: This pools together the storage capacity from multiple physical storage devices into a single, large pool that can be centrally managed. It separates the storage logic from the physical hardware, allowing for more flexible and efficient allocation of storage to the VMs.
• Virtual Network: This is a software-defined network (SDN) that creates virtual switches, routers, and firewalls. It allows VMs to communicate with each other and with the outside world without needing to physically re-cable the network. Network policies and rules are defined in software, making them easy to change and manage centrally.
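
The landlord analogy above can be sketched as follows. This is purely conceptual, since real hypervisors schedule resources dynamically rather than slicing them statically; all names and numbers are invented:

    # A hypervisor carves one physical server into isolated "apartments" (VMs).
    class PhysicalServer:
        def __init__(self, cpus, ram_gb):
            self.free_cpus, self.free_ram = cpus, ram_gb

    class HypervisorLayer:
        def __init__(self, server):
            self.server, self.vms = server, []

        def create_vm(self, name, cpus, ram_gb):
            if cpus > self.server.free_cpus or ram_gb > self.server.free_ram:
                raise RuntimeError("not enough physical resources left")
            self.server.free_cpus -= cpus       # reserve the VM's share
            self.server.free_ram -= ram_gb
            self.vms.append((name, cpus, ram_gb))
            return name

    hv = HypervisorLayer(PhysicalServer(cpus=32, ram_gb=256))
    hv.create_vm("web-01", cpus=4, ram_gb=16)
    hv.create_vm("db-01", cpus=8, ram_gb=64)
    print(hv.vms, "| free:", hv.server.free_cpus, "CPUs,", hv.server.free_ram, "GB")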

Centralized Services and Their Benefits
The core of a virtualized data center is its centralized, software-defined management. This changes the entire architecture from a hardware-centric to a software-centric model.
• Simplified Management: Instead of managing each physical server, storage device, and network switch individually, you manage them all from a single, centralized management console. This reduces manual effort and the risk of human error.
• Increased Efficiency and Cost Savings: Since you can run many virtual machines on a single physical server, you use your hardware much more efficiently. This means you need fewer physical servers, which saves money on hardware costs, power, and cooling. It also reduces the physical space required for the data center.
• Improved Scalability and Agility: It's much faster to scale up or down. If you need a new server for a project, you can simply spin up a new VM in minutes. When the project is over, you can delete it and free up the resources. This flexibility allows businesses to respond to changing needs much more quickly.
• Enhanced High Availability and Disaster Recovery: You can easily move a VM from a failing physical server to a healthy one. This process, called live migration, happens without any downtime. It also makes it much easier to back up and restore entire virtual environments.




Consolidated Virtual Data Center Architecture

Consolidated virtual data center architecture is a strategy to combine multiple, often older or underutilized, data centers into a single, highly efficient, and centrally managed virtualized data center. Instead of having servers scattered across different locations, you bring them all together into one powerful, virtualized environment. This process uses virtualization technology to drastically reduce the amount of physical hardware needed.

The Goal of Consolidation
Think of it like this: Imagine you have five separate small offices, each with its own small server room. In a consolidated approach, you would shut down four of those offices and move all their servers, data, and applications to one large, modern, and highly efficient server room.
The key here is that you don't just move the physical servers. Instead, you virtualize them. You take the applications and data from dozens of old, physical servers and run them as virtual machines (VMs) on just a few new, powerful physical servers. This significantly reduces the hardware footprint, along with its power, cooling, and management overhead.


Resource Pooling: Instead of dedicating resources to a specific task, you create a large, shared pool of compute, storage, and networking resources. Virtual machines can then dynamically pull resources from this pool as needed.


Centralized Management: A single software platform or console is used to manage and monitor all the virtual resources. This includes creating new VMs, allocating storage, configuring network settings, and monitoring performance across the entire data center.


Why Consolidate?
The decision to consolidate is driven by several key benefits:
• Cost Reduction: This is the biggest advantage. By reducing the number of physical servers, you save money on hardware, software licenses, power consumption, and cooling costs.
• Improved Efficiency: With fewer physical servers, resource utilization increases dramatically. Instead of having many servers running at 10-20% capacity, you have a few servers running at 70-80% capacity.
• Simplified Management: A centralized management system makes the data center easier to operate. Instead of managing a complex web of physical devices, you're managing a single, virtual environment from a single point.
• Better Security: With a smaller physical footprint and a centralized management system, it's easier to implement and enforce security policies, making your data center more secure.
• Scalability and Agility: When you need a new server, you can create a new VM in minutes from your central console. This is much faster and more flexible than ordering, installing, and configuring a new physical server.

Monday, 25 August 2025

Xen Architecture

1. What is Xen?
Definition: Xen is an open-source type-1 hypervisor that allows multiple operating systems to run simultaneously on the same hardware.
Type-1 hypervisor means it runs directly on hardware (bare-metal), not on top of another OS.
Main purpose: Server virtualization — used in cloud platforms like AWS.

2. Xen Architecture
Think of Xen as a traffic controller between hardware and operating systems.
Main Components:
1. Xen Hypervisor
The core layer that runs directly on the CPU.
Handles CPU scheduling, memory management, and I/O requests.
Provides an abstraction layer between hardware and OS.
2. Domain 0 (Dom0)
The first virtual machine started by Xen.
Has special privileges to directly access hardware drivers.
Manages other virtual machines (DomU).
Runs a modified Linux OS with Xen management tools (xend, xl).
3. Domain U (DomU)
User domains — guest operating systems.
They do not have direct hardware access; they go through Dom0 for I/O.
Can be:
Paravirtualized (PV) — OS is modified to work with Xen.
Hardware Virtual Machine (HVM) — uses CPU virtualization features, unmodified OS.
4. Control Interfaces
Tools and APIs for creating, starting, stopping, and managing VMs.
Example: xl create myvm.cfg

Xen Architecture Diagram (Exam-friendly)
   +------------------------------+
   |        Guest OS (DomU)        |
   +------------------------------+
   |        Guest OS (DomU)        |
   +------------------------------+
   | Privileged OS (Dom0) + Tools  |
   +------------------------------+
   |        Xen Hypervisor         |
   +------------------------------+
   |          Hardware             |
   +------------------------------+

3. Guest Operating System in Xen
A Guest OS is any operating system running inside a Xen virtual machine (DomU or Dom0).
Types of Guest OS in Xen:
1. Paravirtualized (PV) Guest
OS is modified to work with Xen hypervisor calls.
Direct access to Xen APIs for better performance.
Example: Modified Linux kernel for Xen.
2. Full Virtualized / HVM Guest
OS is not modified.
Uses CPU features like Intel VT-x or AMD-V.
Xen emulates hardware so the OS thinks it’s running on a real machine.
Example: Windows running on Xen.

Guest OS Role
Executes applications.
Uses virtual hardware provided by Xen.
Sends I/O requests (disk, network) through Dom0.
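
This I/O path can be modeled in a short Python sketch, loosely based on Xen's split front-end/back-end driver idea (class and method names are illustrative only):

    # DomU has no direct hardware access: its requests are queued to Dom0,
    # which owns the real drivers (a much-simplified shared ring).
    from collections import deque

    class Dom0:
        def __init__(self):
            self.ring = deque()            # shared request ring (simplified)

        def submit(self, request):
            self.ring.append(request)      # front-end driver in DomU enqueues

        def service(self):
            while self.ring:
                req = self.ring.popleft()  # back-end driver in Dom0 dequeues
                print(f"Dom0 -> real hardware: {req}")

    class DomU:
        def __init__(self, name, dom0):
            self.name, self.dom0 = name, dom0

        def read_block(self, sector):
            # Guest I/O travels through Dom0, never straight to hardware.
            self.dom0.submit(f"{self.name}: read sector {sector}")

    dom0 = Dom0()
    DomU("guest-1", dom0).read_block(42)
    DomU("guest-2", dom0).read_block(7)
    dom0.service()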

✅ Key Points for MSBTE Exam:
Xen is Type-1 Hypervisor → runs directly on hardware.
Dom0: First booted VM, has direct hardware control, manages DomU.
DomU: Guest OS VMs, run in isolated environments.
Guest OS can be PV (modified) or HVM (unmodified).
Xen provides isolation, resource sharing, and security between VMs.

Virtual Machine

    
1. Virtual Machine (VM) 
Definition:
A Virtual Machine (VM) is a software-based emulation of a physical computer that runs an operating system and applications, just like a real computer.
Key points:
Created and managed by a hypervisor.
Runs on virtual hardware (virtual CPU, memory, disk, network).
Provides isolation between different VMs.
Example: Running Windows inside VMware on a Linux laptop.

2. Life Cycle of a VM
Think of it like the life stages of a living thing, but for a virtual computer.
1. Creation
     VM is defined with CPU, memory, storage, and network settings.
Operating system is installed or imported.
2. Power On / Start
VM is booted and the guest OS starts running.
3. Running
VM executes applications and performs tasks.
4. Suspend / Pause
VM state is saved in memory or disk, execution is halted temporarily.
5. Resume
VM continues execution from the suspended state.
6. Shutdown / Power Off
Guest OS is stopped, and VM resources are released.
7. Deletion
VM configuration and virtual disks are removed.

📍 Exam diagram idea:
[Create] → [Start] → [Running] → [Suspend] ↔ [Resume] → [Shutdown] → [Delete]
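
The life cycle can also be written as a small state machine. The transition table below simply mirrors the diagram (illustrative only):

    # VM life cycle as a state machine; invalid actions raise an error.
    TRANSITIONS = {
        "created":   {"start": "running"},
        "running":   {"suspend": "suspended", "shutdown": "stopped"},
        "suspended": {"resume": "running"},
        "stopped":   {"start": "running", "delete": "deleted"},
    }

    class VM:
        def __init__(self, name):
            self.name, self.state = name, "created"

        def do(self, action):
            nxt = TRANSITIONS.get(self.state, {}).get(action)
            if nxt is None:
                raise ValueError(f"cannot '{action}' while {self.state}")
            self.state = nxt
            print(f"{self.name}: {action} -> {self.state}")

    vm = VM("demo")
    for action in ["start", "suspend", "resume", "shutdown", "delete"]:
        vm.do(action)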
3. VM Migration — Concept and Techniques
Concept:
Moving a running or stopped VM from one physical host to another without affecting its execution significantly.
Why needed?
Load balancing between servers.
Hardware maintenance.
Energy saving.
Techniques:
1. Cold Migration
VM is powered off, then moved to another host.
Simple but causes downtime.
2. Live Migration
VM is moved while still running, with minimal downtime.
Memory and CPU state are transferred while VM is still active.
3. Storage Migration
Moving the VM’s virtual disk files to another storage location.
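
Live migration is commonly implemented with an iterative pre-copy scheme, sketched below with invented numbers: memory pages are copied while the VM keeps running, pages the VM dirties are re-sent, and only a small final set is copied during a brief pause.

    # Pre-copy live migration sketch (page counts and dirty rate are made up).
    import random

    def live_migrate(total_pages=1000, dirty_rate=0.10, max_rounds=5):
        remaining = total_pages
        for rnd in range(1, max_rounds + 1):
            copied = remaining
            # The VM is still running, so some copied pages get dirtied again.
            remaining = int(copied * dirty_rate * random.uniform(0.5, 1.5))
            print(f"round {rnd}: copied {copied} pages, {remaining} re-dirtied")
            if remaining < 20:   # small enough for the stop-and-copy phase
                break
        print(f"pause VM briefly, copy final {remaining} pages, resume on target")

    live_migrate()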

4. VM Consolidation 
Concept:
Combining workloads from multiple VMs onto fewer physical servers to save resources.
Purpose:
Reduce power consumption.
Lower hardware costs.
Improve resource utilization.
How it works:
Identify underutilized VMs.
Migrate them to fewer hosts.
Power off unused servers.
📍 Example:
If 5 servers are each running at 20% capacity, consolidate into 2 servers running at ~50%, and turn off 3 servers.
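
A simple first-fit packing sketch captures the idea (capacities and loads are invented). On the example above it packs five 20%-loaded servers onto two hosts, one at 80% and one at 20%; a balancing pass could then even these out to roughly 50% each:

    # First-fit consolidation: pack VM loads onto as few hosts as possible.
    def consolidate(vm_loads, host_capacity=80):
        hosts = []                          # each entry = summed load (%) on a host
        for load in sorted(vm_loads, reverse=True):
            for i, used in enumerate(hosts):
                if used + load <= host_capacity:
                    hosts[i] += load        # fits on an already-active host
                    break
            else:
                hosts.append(load)          # otherwise power on another host
        return hosts

    vm_loads = [20, 20, 20, 20, 20]         # five servers at 20% each
    active = consolidate(vm_loads)
    print(f"{len(active)} hosts stay on, loads: {active}")   # 2 hosts: [80, 20]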

5. VM Management — Concepts
Concept:
The process of monitoring, controlling, and maintaining VMs for performance, security, and availability.
Tasks in VM Management:
1. Provisioning — Creating new VMs and allocating resources.
2. Monitoring — Tracking CPU, memory, network usage.
3. Backup & Recovery — Protecting VM data.
4. Security — Applying patches, controlling access.
5. Automation — Using scripts/tools to auto-scale or auto-heal.
✅ Exam Tips:
Always include definition + purpose + example in answers.
Diagrams for VM life cycle and migration types can fetch extra marks.
Keep answers in point form for clarity in MSBTE papers.
