Server Virtualization Research Papers - Academia.edu

There’s a new wind of change in the IT industry today: virtualization. Virtualization is a software technology designed to let us run multiple virtual machines with different operating systems on a single physical machine. It is changing almost every aspect of how we manage systems, storage, networks, security, operating systems, and applications. Server consolidation reduces maintenance costs, while high availability and live migration minimize downtime. With virtual machines, recovery from failure and disaster recovery are easier and more affordable than ever.

Software-defined data centers (SDDC) and software-defined networking (SDN) are two emerging areas in the field of cloud data centers. SDN-based, centrally controlled services take a global view of the entire cloud infrastructure spanning the SDDC, whereas Network Function Virtualization (NFV) is widely used to provide virtual networking between hosts and Internet Service Providers (ISPs). Applications delivered as a service in NFV data centers cover a wide range of security services, including virtual firewalls, Intrusion Detection Systems (IDS), load balancing, and bandwidth allocation and management. In this paper, a novel security framework combining SDDC and SDN with NFV security features is proposed. The proposed framework consists of a virtual firewall and an efficient bandwidth manager to handle multiple heterogeneous application requests from different ISPs. Real-time data were collected from a week-long experiment, and a simulation-based proof of concept, deployed on real SDNs using Mininet and the POX controller, is presented to validate the proposed framework.
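The bandwidth manager in such a framework must divide a fixed link capacity among competing ISP requests. As a rough illustration (not the paper's actual algorithm), a weighted max-min fair allocator could look like this; the request format, weights, and the policy itself are assumptions:

```python
# Hypothetical weighted max-min fair bandwidth allocator.

def allocate_bandwidth(total_mbps, requests):
    """Divide total_mbps among requests in proportion to their weights,
    never giving a request more than it demands.

    requests: list of {"name", "demand", "weight"} dicts.
    Returns {name: allocated_mbps}.
    """
    alloc = {r["name"]: 0.0 for r in requests}
    active = list(requests)
    remaining = total_mbps
    while active and remaining > 1e-9:
        total_w = sum(r["weight"] for r in active)
        # Requests whose residual demand fits inside their fair share
        # are fully satisfied this round and leave the pool.
        satisfied = [r for r in active
                     if remaining * r["weight"] / total_w
                     >= r["demand"] - alloc[r["name"]]]
        if not satisfied:
            # Everyone is bottlenecked: split what is left by weight.
            for r in active:
                alloc[r["name"]] += remaining * r["weight"] / total_w
            break
        for r in satisfied:
            remaining -= r["demand"] - alloc[r["name"]]
            alloc[r["name"]] = r["demand"]
            active.remove(r)
    return alloc
```

On a 100 Mbps link, a small request is served in full and the surplus is redistributed to the heavier requests by weight.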

The Data Center of the Complejo de Investigaciones Tecnológicas Integradas needs to migrate to an OpenStack Private Cloud in order to deliver Infrastructure as a Service optimally and efficiently, while keeping VMware ESXi 5.5 as its virtualization platform. The process was hindered because the integration of OpenStack with VMware ESXi 5.5 failed repeatedly when following the tutorials provided by both vendors, so this work aimed to achieve the integration of an OpenStack Private Cloud with the VMware ESXi 5.5 hypervisor in a test scenario. To that end, the solution best suited to the characteristics of the complex's Data Center was selected using theoretical, systemic and analytical methods, resulting in an installation manual for a successful integration. The resulting solution makes it possible to exploit all the functionality of the vSphere product suite; it provides users with self-service, on-demand provisioning of Infrastructure as a Service, speeding up the work of IT administrators by making users responsible for the operation and administration of their own virtual instances; it contributes to savings on investment, since the OpenStack manager requires no license fees; and it enables the deployment of Platform as a Service and Software as a Service, which can foster greater security and innovation.

Datacenter total cost of ownership (TCO) tools and spreadsheets can be used to estimate the capital and operational costs required for running datacenters. These tools help business owners evaluate and improve the costs and the underlying efficiency of such facilities, or evaluate the costs of alternatives such as off-site computing. A good understanding of the cost drivers in TCO models gives business owners more opportunities to control costs. In addition, these models introduce an analytical structure in which anecdotal information can be cross-checked for consistency with other well-known parameters driving data center costs. This work compares a number of publicly available tools and spreadsheets for calculating datacenter TCO. The comparison covers several aspects, such as which parameters are and are not included in each tool and whether the tool is documented. Such an approach provides a solid foundation for designing better tools and spreadsheets in the future.
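Most of these spreadsheets reduce to the same arithmetic: amortized capital expenses plus monthly operational expenses. A minimal sketch of that calculation follows; the parameter names and the 730-hours-per-month approximation are illustrative assumptions, not taken from any specific tool:

```python
# Simplified monthly datacenter TCO arithmetic (illustrative only).

def monthly_tco(server_capex, facility_capex, server_life_months,
                facility_life_months, it_power_kw, pue,
                price_per_kwh, monthly_opex):
    """Amortized capital costs plus energy and other operating costs."""
    # Straight-line amortization of servers and facility over their lifetimes.
    amortized_capex = (server_capex / server_life_months
                       + facility_capex / facility_life_months)
    # Facility power = IT power * PUE (Power Usage Effectiveness);
    # 730 is roughly the number of hours in a month.
    energy_cost = it_power_kw * pue * 730 * price_per_kwh
    return amortized_capex + energy_cost + monthly_opex
```

Comparing tools then amounts to asking which of these terms each one models, and with how much detail.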

Businesses today face the challenge of being more agile and responsive in their day-to-day operations while continuously innovating to respond to the conditions around them. In response, business owners see the need for technology that can connect automatically from the smartphones, laptops, tablets and computers used for work. Improving productivity at work requires a reliable system that is available at all times, is automated and delivers substantial benefit to the company. Hybrid directory services combine directory services that serve systems across the company's own networks, whether local area, metropolitan area or wide area networks, with systems that live in cloud computing. This combination joins Office 365 and Azure Active Directory with existing Windows directory services (Active Directory Domain Services, Active Directory Federation Services, and so on) and with Exchange Server e-mail, Lync, or SharePoint Server 2013. The integration is intended to minimize duplicated work and repeated entry of the same information, and to leverage each employee's identity information, together with the policies attached to that identity, for broader purposes that benefit the company.

Discussion and analysis of a scenario in which a demo web application acting as a profile manager is assessed from a security point of view. We design and develop the test web application and perform a vulnerability assessment across all the technologies applied, in order to identify possible security weaknesses and exploits.

The International Journal of Software Engineering & Applications (IJSEA) is a bi-monthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of software engineering and applications. The goal of the journal is to bring together researchers and practitioners from academia and industry to focus on understanding modern software engineering concepts and on establishing new collaborations in these areas.

Yosh! Come on, let's learn Windows Server!

The number of elderly citizens is rising worldwide, and the number of those living alone is likely to increase as well. When an elderly person living alone has a heart attack or falls at home, no one is around to alert relatives or a doctor. It can take hours or days for the incident to be discovered, by which time the person may be dead. With this worrying situation and the growing ageing population in mind, we have devised a system that allows alert signals to be sent either automatically or at the press of a button. This wearable wristband health-monitoring system comprises a smart wristband device that can monitor the health of an elderly person, detect whether the wearer is in a medical emergency, and automatically alert relatives and doctors if necessary. The device can communicate with a smartphone and is equipped with distinctive and innovative features.

In computer science, systems are typically divided into two categories: software and hardware. However, there is an additional layer in between, referred to as middleware, which is a software "pipeline," an operation, a process, or an application between the operating system and the end user. This article aims to define middleware and reflect on its necessity, as well as address controversies about when and where it applies. It also explores the application of middleware in emerging technologies such as cloud computing and the IoT (Internet of Things), as well as future middleware developments.
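The "pipeline" idea can be made concrete in a few lines: each middleware layer wraps the next handler, touching the request on the way in and the response on the way out. This is a generic illustration, not any particular middleware product; all names are made up:

```python
# Minimal middleware pipeline: layers wrap a final handler.

def make_pipeline(handler, middlewares):
    """Compose middlewares around a final handler, outermost first."""
    for mw in reversed(middlewares):
        handler = mw(handler)
    return handler

def logging_middleware(next_handler):
    def wrapper(request):
        request.setdefault("trace", []).append("log-in")   # on the way in
        response = next_handler(request)
        response["trace"] = request["trace"] + ["log-out"]  # on the way out
        return response
    return wrapper

def auth_middleware(next_handler):
    def wrapper(request):
        if not request.get("user"):
            return {"status": 401}  # short-circuit before the app runs
        request["trace"].append("auth-ok")
        return next_handler(request)
    return wrapper

def app(request):
    return {"status": 200, "body": f"hello {request['user']}"}

pipeline = make_pipeline(app, [logging_middleware, auth_middleware])
```

A request flows logging → auth → app and back out again, which is exactly the layered position middleware occupies between the operating system and the end user.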

The 3rd International Conference on Machine Learning and Cloud Computing (MLCL 2022) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning and Cloud Computing. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.

Advantages of virtualization

This tutorial briefly explains how to set up MikroTik RouterOS using the Oracle VirtualBox software. The author used a low-resolution PDF file; for a clearer copy, please e-mail diankurnia68@gmail.com.

A Virtual Private Network (VPN) is a communication technology that allows connections to and from a public network (WAN) to behave as if they were part of a private network, and even to join that private network itself. Using this technology, a user can access network resources inside the private network and receive the same access rights and settings as if physically located where the private network resides. This study attempts to analyse and design a network security system that can be used to connect computer networks both inside and outside the office. The analysis was carried out by observing the network at STIE Perbanas Surabaya and identifying problems that network technology could help solve. The design phase consisted of building a network topology and specifying the elements needed to implement VPN technology with a port-knocking security model, in which authorised users can manipulate firewall rules by sending knocks, i.e. information packets, to the firewall before accessing the local network with VPN user authentication. A proposed system configuration was then produced and tested to determine whether it works as intended. The result is that a VPN can easily connect a home computer, or any machine on the public network, to the network resources on the STIE Perbanas Surabaya computers and servers with a high level of security.
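The port-knocking model used here can be sketched in a few lines: the firewall tracks each source's progress through a secret knock sequence and only admits traffic to the protected port after the full sequence arrives in order. The sequence, port numbers, and class names below are illustrative assumptions, not the study's configuration:

```python
# Illustrative port-knocking gate in front of a protected service.

KNOCK_SEQUENCE = [7000, 8000, 9000]   # secret knock sequence (assumption)
PROTECTED_PORT = 1194                 # e.g. an OpenVPN port (assumption)

class KnockFirewall:
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = {}     # source IP -> knocks matched so far
        self.allowed = set()   # sources allowed through

    def packet(self, src_ip, dst_port):
        """Process one packet; return True if it is allowed through."""
        if dst_port == PROTECTED_PORT:
            return src_ip in self.allowed
        step = self.progress.get(src_ip, 0)
        if dst_port == self.sequence[step]:
            step += 1
            if step == len(self.sequence):
                self.allowed.add(src_ip)  # full sequence: open the port
                step = 0
            self.progress[src_ip] = step
        else:
            self.progress[src_ip] = 0  # wrong knock resets progress
        return False  # knock packets themselves are dropped silently
```

Until the correct sequence arrives, the protected port is invisible to scans, which is the security benefit the port-knocking model adds on top of VPN user authentication.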

The computer laboratories at UPT STMIK AMIKOM Yogyakarta are numerous and serve a large user base, so demand for data center services keeps growing. Meeting that demand requires more computing capacity, for example by procuring new servers. That decision has consequences, however: the organisation will face new problems in managing a growing server fleet, chiefly the considerable cost involved, the largest part of which is server purchase and maintenance. The servers are also used to support practical coursework in the computer laboratories, so besides cost and maintenance the organisation faces a further problem: low server utilisation. Cloud computing is therefore an appropriate solution to implement in the STMIK AMIKOM Yogyakarta computer laboratories, so that data center services can be optimised both in the number of services offered and in the use of server resources.

Cloud computing and education sounds ambiguous on the face of it, mainly because few individuals, publishers and users alike, come from the education sector. In most cases, cloud computing is associated only with businesses and how they can leverage its efficiencies. To introduce why the cloud deserves a place in our current educational institutions, it is worth restating the philosophy of education: its essence is knowledge, and it is this knowledge that brings advancement, achievement and success. Several things, however, make these goals unattainable; in blunt language, this is failure. Small classrooms, lack of resources, short-handed staff, a shortage of qualified teachers… the list is endless. One way or another, cloud computing can be used to improve educational standards and activities, curbing the problems above and boosting performance.

As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de facto container orchestration tool, the opportunities for deploying edge-computing containerized applications running over SD-WAN are vast. Service orchestration in SD-WAN has not received enough attention, resulting in a lack of research focused on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside the Kubernetes master node is developed, allowing improved traffic handling and a better user experience when running micro-services. The solution was conceived following a design science research approach. Our research includes the implementation of a proof-of-concept SD-WAN topology alongside a Kubernetes cluster, which allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. The implementation's performance is also tested based on the time required to update the discovery solution in response to service updates. Finally, conclusions and modifications are presented based on the results, along with a discussion of possible enhancements.
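At its core, a service discovery component of this kind maintains a mapping from service names to fresh endpoints. A minimal in-memory registry with heartbeat-based expiry might look like the following; the API is an illustrative assumption, not the authors' implementation:

```python
# Minimal in-memory service registry with TTL-based staleness.
import time

class ServiceRegistry:
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.entries = {}  # service name -> (endpoint, last heartbeat time)

    def register(self, name, endpoint, now=None):
        """Record or refresh a service endpoint (heartbeat)."""
        self.entries[name] = (endpoint,
                              now if now is not None else time.time())

    def resolve(self, name, now=None):
        """Return the endpoint if its heartbeat is fresh, else None."""
        now = now if now is not None else time.time()
        record = self.entries.get(name)
        if record is None:
            return None
        endpoint, seen = record
        if now - seen > self.ttl:
            del self.entries[name]  # expire stale entries lazily on lookup
            return None
        return endpoint
```

The measured "time to update the discovery solution" in the article corresponds to how quickly a `register` call (driven by a service update in the cluster) becomes visible to `resolve` callers.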

Live migration is an advanced virtualization capability, supported by several Virtual Machine Manager (VMM) solutions, that allows a running virtual machine (VM) to be moved from one physical server to another without interruption. In the past it was a premium feature largely restricted to enterprise datacentres due to the need for Storage Area Networks (SANs), but advances in technology and increased competition have now brought it within reach of most small to medium-sized organisations. This paper explores the benefits and challenges of incorporating live migration into virtualized remote lab architectures to achieve reduced electricity usage, improved fault tolerance, distribution of computational load and reduced maintenance costs.
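Live migration commonly uses a pre-copy strategy: memory pages are transferred while the VM keeps running and re-dirtying pages, and copy rounds repeat until the remaining dirty set is small enough for a brief stop-and-copy. A toy model of that convergence follows; the constant-fraction dirtying assumption is illustrative, not how a real VMM behaves:

```python
# Toy model of pre-copy live migration convergence.

def precopy_rounds(total_pages, dirty_rate, stop_threshold, max_rounds=30):
    """Return (rounds, remaining dirty pages) for a pre-copy migration.

    dirty_rate: fraction of the pages copied in a round that the running
    VM re-dirties before the round finishes (illustrative assumption).
    """
    dirty = total_pages
    rounds = 0
    while dirty > stop_threshold and rounds < max_rounds:
        dirty = int(dirty * dirty_rate)  # pages dirtied while copying
        rounds += 1
    return rounds, dirty
```

When the dirty rate is low, the dirty set shrinks geometrically and the final stop-and-copy pause is tiny; when it approaches 1, pre-copy never converges, which is why VMMs cap the number of rounds.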

Virtualization has become a widely employed and attractive technology in cloud computing environments. Sharing a single physical machine between multiple isolated virtual machines leads to more optimized hardware usage and makes migration and management of a virtual system more efficient than for its physical counterpart. Virtualization is a fundamental technology in a cloud environment. However, the presence of an additional abstraction layer between software and hardware introduces new security issues. Security issues related to virtualization have become a significant concern for organizations because of the new challenges they raise. This paper aims to identify the main challenges and risks of virtualization in cloud computing environments. Furthermore, it focuses on some common virtualization-related threats and attacks that affect the security of cloud computing. A survey was conducted to obtain the views of cloud stakeholders on virtualization vulnerabilities and threats and on the approaches that can be used to overcome them. Finally, we propose recommendations for improving security and mitigating the risks of virtualization, which is necessary for adopting secure cloud computing.

The environmental footprint of ICT continues to increase. Data centres are key contributors of the greenhouse gas emissions that pollute the environment and cause global warming. All data centres are filled with numerous servers as the major processing components. These servers and other equipment consume large amounts of power, thereby emitting CO2. In an average server environment, 30% of the servers are 'dead': they consume energy without being properly utilised, with utilisation ratios ranging from 5% to 10%. This paper proposes a new algorithm to manage and categorise the workload of underutilised volume servers so as to increase their utilisation. The proposed algorithm supports the server consolidation methodology and increases the utilisation ratio of underutilised servers by up to 50%, saving substantial amounts of power and reducing greenhouse gas emissions by up to 88%.
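The consolidation step such an algorithm enables can be sketched as a bin-packing problem: sort the servers' loads and pack them onto as few hosts as possible without exceeding a target utilisation. First-fit decreasing is used below as an illustrative stand-in for the paper's own categorisation algorithm, which may differ:

```python
# Illustrative first-fit-decreasing consolidation of server loads.

def consolidate(loads, capacity=0.5):
    """Pack per-server utilisation fractions onto hosts so that no host
    exceeds `capacity` (e.g. 0.5 = the 50% target). Returns the list of
    per-host load sums; its length is the number of hosts needed."""
    hosts = []
    for load in sorted(loads, reverse=True):  # largest loads first
        for i, used in enumerate(hosts):
            if used + load <= capacity:       # first host with room
                hosts[i] = used + load
                break
        else:
            hosts.append(load)                # open a new host
    return hosts
```

Six servers idling at 5-10% utilisation fit comfortably on a single host at the 50% target, which is where the power savings come from.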

Hardware accelerators are available on the Cloud for enhanced analytics. Next generation Clouds aim to bring enhanced analytics using accelerators closer to user devices at the edge of the network for improving Quality-of-Service by minimizing end-to-end latencies and response times. The collective computing model that utilizes resources at the Cloud-Edge continuum in a multi-tier hierarchy comprising the Cloud, the Edge and user devices is referred to as Fog computing. This article identifies challenges and opportunities in making accelerators accessible at the Edge. A holistic view of the Fog architecture is key to pursuing meaningful research in this area.

Onlive Server provides world-class, high-performance managed dedicated servers, VPS hosting, cloud servers, shared server hosting and web hosting services at very affordable prices. We offer a wide range of hosting plans backed by an excellent technical support team.

Cloud computing has become an inevitable part of information technology (IT) and other non-IT businesses. Every computational facility is now provided as a computing service by cloud service providers (CSPs). While providing these services, CSPs try to maintain efficiency by keeping the performance index as high as possible. Although virtualization technology has made this possible through resource provisioning techniques, the approach remains laborious and expertise-dependent. In this paper, we propose an efficient infrastructure as code (IaC) based novel framework for optimizing resource utilization through an automatic provisioning approach. The framework maximizes the resource utilization and performance metrics of virtualized cloud platforms. In this context, we present some mathematical formulations and, based on them, describe the programming model designed for the proposed IaC-based framework. Extensive simulations have been performed to establish the novelty of the proposed approach. We also present a comparative study of two data centers, one using the proposed IaC-based model and the other a conventional contemporary model. The result analysis confirms the performance of our proposed IaC-based framework.
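One simple provisioning rule an IaC-driven controller could apply is to keep average utilisation inside a target band, rescaling the VM count while holding total work constant. The thresholds and function below are illustrative assumptions, not the paper's model:

```python
# Illustrative threshold-based capacity planner for automatic provisioning.
import math

def plan_capacity(current_vms, avg_utilisation, low=0.3, high=0.7, target=0.5):
    """Return the recommended VM count for the next provisioning run."""
    if low <= avg_utilisation <= high:
        return current_vms  # inside the band: no change needed
    # Keep total work constant: work = current_vms * avg_utilisation,
    # then size the fleet so each VM runs near the target utilisation.
    needed = current_vms * avg_utilisation / target
    return max(1, math.ceil(needed))
```

In an IaC workflow, the returned count would feed a declarative template (e.g. a Terraform-style instance count) rather than being applied by hand, which is what removes the expertise dependence the paper highlights.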

The rapid growth in the size and capacity of data centers, driven by a continual rise in the number of servers and other IT equipment, is causing an exponential increase in the demand for power. All data centers are plagued by the operational presence of thousands of servers as major components. These servers consume a huge amount of power while performing little useful work. In an average server environment, 30% of servers are “zombies”: they merely consume power while having a utilization ratio of only 5 to 10%. Server virtualization addresses this problem by offering an opportunity to consolidate multiple underutilized volume servers onto a single physical server, thereby reducing the physical and environmental footprint of data centers. This paper suggests implementing server virtualization to achieve energy-efficient data centers. The proposed technique increases the utilization ratio of underutilized servers up to 50%, saving a huge amount of power while reducing the emission of greenhouse gases.
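The power arithmetic behind this claim follows from the fact that a server draws substantial idle power even at 5-10% utilisation. Under a simple linear power model (the wattage figures are illustrative assumptions, not measurements from the paper), folding ten lightly loaded servers onto one host cuts the total draw dramatically:

```python
# Back-of-the-envelope power savings from server consolidation.
import math

def server_power(utilisation, idle_w=100.0, peak_w=200.0):
    """Linear power model: idle floor plus a utilisation-proportional part."""
    return idle_w + (peak_w - idle_w) * utilisation

def consolidation_savings(n_servers, per_server_util, target_util=0.5):
    """Total draw before vs after packing n lightly loaded servers onto
    the fewest hosts that keep utilisation at target_util."""
    before = n_servers * server_power(per_server_util)
    total_work = n_servers * per_server_util
    hosts = max(1, math.ceil(total_work / target_util))
    after = hosts * server_power(total_work / hosts)
    return before, after
```

Ten servers at 5% utilisation draw about 1050 W under this model, while one host carrying the same work at 50% utilisation draws about 150 W, an ~86% reduction, which is the mechanism behind the savings the paper reports.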

This paper presents the design and implementation of a virtual cluster hosting platform for hands-on teaching of virtualization in educational institutions. It is a cost-effective, indigenous solution, and the work should be useful for creating lab infrastructure in colleges. We used the Xen hypervisor as the monitoring tool. The virtual machines appear as normal Linux processes and integrate seamlessly with the rest of the system. Our results indicate that para-virtualization is very efficient and practical for educational lab systems. More virtual machine instances can be created simultaneously while carrying out the various standardized lab procedures. The suggested designs were validated through experimental measurement of performance indicators and statistical analysis of the results. We present content designs and standardized laboratory plans for three lab sessions, which could be the organization's short-term goal; checklists, time schedules, and adequate observation templates are provided to support the conduct of lab sessions. A long-term strategy could be to create such infrastructure and encourage research on this platform, especially on distributed computing, which would otherwise require external support. The solution offered in this paper is at best a feasible one; it cannot be considered optimal because of the innumerable alternatives possible and the dependence of the solution on the availability of off-the-shelf indigenous components.

One of the biggest challenges today is global warming due to carbon emissions. The seventh goal of the United Nations Millennium Development Goals (MDGs) is geared towards achieving a sustainable environment. The Green IT initiative seeks... more

One of the biggest challenges today is global warming due to carbon emissions. The seventh goal of the United Nations Millennium Development Goals (MDGs) is geared towards achieving a sustainable environment. The Green IT initiative seeks to motivate organisations to cut their carbon emissions.
Most studies on climate change in Africa have focused on human activities such as anthropogenic carbon emissions, deforestation, and natural disasters. There is a paucity of academic research on emissions due to energy consumption. Server virtualisation is a technology being widely implemented in developed countries to cut data centres’ carbon emissions, but not much is known about its implementation in developing countries.
This paper reviews climate change in Africa, provides an overview of energy consumption in data centres, and suggests that server virtualisation can be implemented to mitigate carbon emissions.

Servers and other IT devices inside datacenters have a hardware component and a software, or virtual, component. Some programs are required to create and secure everything related to the virtual environment on IT devices. These programs... more

Servers and other IT devices inside datacenters have a hardware component and a software, or virtual, component. Some programs are required to create and secure everything related to the virtual environment on IT devices; these programs can be free or require a prepaid license. Most of the proposed datacenter total cost of ownership (TCO) models focus only on the costs of the hardware component of IT devices and ignore the costs of the virtual component. In this paper, we present a cost model for building a datacenter and, through it, a way to analyze IT software license costs. Our model helps solve a real problem for a faculty of computers and information that plans to establish a datacenter: it lets the faculty administrators estimate how much money they need to buy the IT devices and how much the IT software licenses will cost. We also calculate the cost of the power distribution units (PDUs) and uninterruptible power supply (UPS) systems required to operate the IT devices. The cost of the cooling systems that take the heat away once power is consumed by the IT devices, PDUs, and UPS systems is also calculated in our model.
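The cost categories the abstract enumerates can be summed in a short sketch. The function below is an illustration of the model's structure only; the per-kW rates and cost figures in the example are hypothetical, not values from the paper.

```python
def datacenter_capex(it_hw_cost, sw_license_cost,
                     pdu_cost_per_kw, ups_cost_per_kw,
                     cooling_cost_per_kw, it_load_kw):
    """Sum the cost categories the model covers: IT hardware, software
    licenses, and power/cooling infrastructure sized to the IT load."""
    power_infra = (pdu_cost_per_kw + ups_cost_per_kw) * it_load_kw
    cooling = cooling_cost_per_kw * it_load_kw
    return it_hw_cost + sw_license_cost + power_infra + cooling

# Hypothetical figures: $500k IT hardware, $80k licenses, 50 kW IT load,
# $200/kW PDUs, $800/kW UPS, $1,000/kW cooling plant.
total = datacenter_capex(500_000, 80_000, 200, 800, 1_000, 50)  # 680,000
```

Keeping the software license term explicit, rather than folding it into hardware cost, is the point the abstract makes against earlier TCO models.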

Grid systems and cloud servers are two distributed networks that deliver computing resources (e.g., file storage) to users’ services via a large and often global network of computers. Virtualization technology can enhance the efficiency... more

Grid systems and cloud servers are two distributed networks that deliver computing resources (e.g., file storage) to users’ services via a large and often global network of computers. Virtualization technology can enhance the efficiency of these networks by dedicating the available resources to multiple execution environments. This chapter describes applications of virtualization technology in grid systems and cloud servers, and presents different aspects of virtualized networks from systematic and teaching perspectives. Virtual machine abstraction virtualizes high-performance computing environments to increase service quality, while the grid virtualization engine and virtual clusters are used in grid systems to accomplish users’ services efficiently in virtualized environments. The chapter also explains various virtualization technologies in cloud servers. The evaluation results analyze the performance of high-performance computing and virtualized grid systems in terms of bandwidth, latency, number of nodes, and throughput.
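The relationship between two of the metrics the chapter evaluates, bandwidth and latency, and the throughput a service actually observes can be sketched with the standard transfer-time model. This is a generic illustration with hypothetical numbers, not the chapter's measurement setup.

```python
def effective_throughput(msg_bytes, bandwidth_bps, latency_s):
    """Effective throughput of a single transfer: fixed latency plus
    serialization time keeps the achieved rate below link bandwidth."""
    transfer_time = latency_s + msg_bytes * 8 / bandwidth_bps
    return msg_bytes * 8 / transfer_time  # bits per second

# Hypothetical: a 1 MB message over a 1 Gb/s link with 1 ms latency
# achieves roughly 0.89 Gb/s; virtualization overhead adds to latency_s.
rate = effective_throughput(1_000_000, 1e9, 0.001)
```

This is why virtualized networks are typically benchmarked on both latency-sensitive small messages and bandwidth-bound large transfers.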

Data centers are the building blocks of IT business enterprises, providing the capabilities of a centralized repository for storage, management, networking and dissemination of data. With the rapid increase in the capacity and size of data... more

Data centers are the building blocks of IT business enterprises, providing the capabilities of a centralized repository for storage, management, networking and dissemination of data. With the rapid increase in the capacity and size of data centers, there is a continuous increase in the demand for energy. These data centers not only consume a tremendous amount of energy but are also riddled with IT inefficiencies. Data centers rely on thousands of servers as major components, and these servers consume huge amounts of energy without performing useful work: in an average server environment, 30% of the servers are “dead”, consuming energy without being properly utilized. This paper proposes a five-step model that uses virtualization to achieve energy-efficient data centers. This process helps make data centers green and energy efficient, ensuring that the IT infrastructure contributes as little as possible to the emission of greenhouse gases, and helps to regain power and cooling capacity, recapture resilience, and dramatically reduce energy costs and total cost of ownership.
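The scale of the waste implied by the 30% "dead server" figure is easy to quantify. The sketch below uses that fraction from the abstract; the fleet size and per-server power draw in the example are hypothetical.

```python
def consolidation_savings_kwh(n_servers, dead_fraction,
                              watts_per_server, hours=8760):
    """Annual energy recoverable by retiring or consolidating idle
    ('dead') servers that draw power without doing useful work."""
    dead = int(n_servers * dead_fraction)
    return dead * watts_per_server / 1000 * hours  # kWh per year

# Hypothetical fleet: 1,000 servers, 30% dead, 300 W each ->
# 788,400 kWh/year recoverable before counting saved cooling load.
saved = consolidation_savings_kwh(1_000, 0.30, 300)
```

Cooling savings scale on top of this, since every watt not drawn by a server is also a watt the cooling plant no longer has to remove.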

This is a draft in the form of an article describing a hidden language, spoken by the face and heard by the eyes. Only its simplified form reaches our consciousness, whilst most of it works in us through emotions, unrecognized for millennia, and... more

This is a draft in the form of an article describing a hidden language, spoken by the face and heard by the eyes. Only its simplified form reaches our consciousness, whilst most of it works in us through emotions, unrecognized for millennia; it is still the source of a very distinct development of the brain and identity, and beyond that a major cause of neurological disorders and, foremost, the golden thread leading to a conclusive understanding of the human mind.

Information security is one of the most important aspects of technology; we cannot protect the best interests of our organizations' assets (be they personnel, data, or other resources) without ensuring that these assets are protected to... more

Information security is one of the most important aspects of technology; we cannot protect the best interests of our organizations' assets (be they personnel, data, or other resources) without ensuring that these assets are protected to the best of our ability. Within the Defense Department, this is vital not just to the security of those assets but also to the national security of the United States, and a compromise in security could lead to severe consequences. However, technology changes so rapidly that practices must change to reflect it with security in mind. This article outlines a growing technological change, virtualization and cloud computing, and how to properly address IT security concerns within an operating environment. By leveraging a series of encrypted physical and virtual systems and network isolation measures, this work delivered a secure high-performance computing environment that efficiently utilized computing resources, reduced overall computer processing costs, and ensured the confidentiality, integrity, and availability of systems within the operating environment.

To serve millions of players, Massively Multiplayer Online Game (MMOG) operators pre-provision and then maintain thousands of computer resources. We investigate a hybrid resource provisioning model that uses smaller and cheaper data... more

To serve millions of players, Massively Multiplayer Online Game (MMOG) operators pre-provision and then maintain thousands of computer resources. We investigate a hybrid resource provisioning model that uses smaller and cheaper data centers, complemented during peak hours by virtualised cloud computing resources. Through trace-based simulation and empirical experimentation, we assess the impact of provisioning virtualised cloud resources, analyse the virtualisation overhead, and compare provisioning of virtualised resources with resource ownership. Using a simple cost model, we also investigate the costs of hosting MMOGs on the resources leased independently from three commercial cloud providers, including Amazon.
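The lease-versus-own comparison at the heart of such a cost model reduces to comparing hourly costs. The sketch below is a simplified version of that trade-off; the abstract's "simple cost model" is not published here, so the structure and all figures in the example are hypothetical.

```python
def cheaper_to_lease(own_capex, own_lifetime_h, own_opex_per_h,
                     lease_price_per_h, utilization):
    """Compare the hourly cost of owned hardware (amortized and paid
    whether busy or idle) against cloud capacity leased only when needed."""
    own_cost_per_h = own_capex / own_lifetime_h + own_opex_per_h
    lease_cost_per_h = lease_price_per_h * utilization
    return lease_cost_per_h < own_cost_per_h

# Hypothetical: a $3,000 server over 3 years plus $0.05/h power and admin,
# vs. a $0.40/h cloud instance needed only 20% of the time (peak hours).
lease_wins = cheaper_to_lease(3_000, 3 * 8760, 0.05, 0.40, 0.20)
```

The bursty, peak-driven load of an MMOG is exactly the regime where low utilization makes leased capacity attractive, which motivates the hybrid model the paper investigates.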