Virtualization Research Papers - Academia.edu

ABSTRACT
This paper describes the lines of action and the results of the Contingency Plan for online coursework across all degree programs of the Facultad de Psicología (UBA). The intervention arose from the critical context created by the COVID-19 pandemic, which forced a didactic-pedagogical transformation articulated with the use of digital technology. To make the goal of virtualization viable, course contents and teaching and learning proposals were reconfigured and adapted through the Faculty's Virtual Campus. Numerous resource- and procedure-management tasks were carried out, interacting with the various areas committed to sustaining academic activity, and a team was consolidated that managed the different flows of information, procedures and methodologies in order to produce a situated adaptation suited to the new context. As a result, we managed to strengthen the institutional platform from a technological standpoint, and we were able to establish a virtual modality that entailed developing and expanding the digital competences of students and teachers. The collaborative, horizontal and dynamic working model of the development team allowed us to sustain higher education amid a context of uncertainty and complexity.
Keywords: contingency plan, virtualization, educational technology, training of trainers, digital competences.

Entering the image: a desire that has accompanied humanity since the dawn of time, losing itself in the mists of myth. A desire that, over the centuries, every visual culture has tried to fulfill with whatever means were available at the time. And one that virtual reality technologies now promise to satisfy: the image becomes an immersive environment, envelops us through 360 degrees, is presence in flesh and blood. The frames that confined it to a world apart fall away. Even the medium in which it takes shape seems to become transparent. Once the passage between the real world and the iconic world is opened, however, we must expect a gateway passable in both directions: we penetrate the world of the image, but the image floods into our world. The desire is accompanied by fear of such an overflow. This book explores that double movement and reconstructs the history of that desire-fear, of the fantasies it has triggered and of the strategies devised to answer it: from the proto-immersive Narcissus to VR headsets, by way of trompe-l'oeil and living sculptures, mirrors and illusionistic architecture, panoramas and phantasmagorias, environment art and 3D cinema. Lingering on the threshold that at once separates and joins those two worlds, we feel its fascination and appreciate its risk.

This paper examines how the casino industry was transformed by slot technology between 1950 and 1990. The criminalization of slot machines in the 1950s led to their massive evacuation into Las Vegas casinos. In this concentrated environment, slot machines revealed to casino operators an automated surveillance technology that could disassemble the player into streams of virtual data, not through any overt means, but through the very activity of play itself. Slot managers and gaming technologists found themselves empowered professionally as they experimented with ways to transform data into profits. From the 1970s to the 90s, this technological development effectively linked up every economic activity in various casinos across the US, creating a virtual network that defeated the geographical injunctions designed to segregate gambling from other spheres of life.

Cloud computing relies on the collection and distribution of services from internet-based data centers. With the large resource pool available on the internet, a wide range of users access the cloud. Load balancing is an important feature involving resource allocation to prevent the overloading of any system and to ensure optimal use of resources. The major loads in a cloud network concern CPU, memory and network. This aspect of cloud computing has not yet received much coverage. Although load balancing is an important feature for cloud computing, concurrent computing and related areas, and several algorithms have been proposed to solve the load-balancing problem in those areas, very few of them target cloud computing. Given that the cloud environment differs considerably from all others, a dedicated load-balancing algorithm should be built to serve its needs. This work proposes a novel load-balancing algorithm, based on the artificial bee colony algorithm and the min-min scheduling algorithm, for balancing load in a cloud computing network. Simulation is carried out in the cloud to generate comparative results, focusing on improving parameters such as power consumption, resource utilization and system stability. The proposed algorithm aims for efficient use of resources, optimal performance, minimal response time, scalability and durability in integrated resource planning.
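
The min-min component mentioned above can be sketched as follows. This is a generic illustration of min-min scheduling, not the authors' implementation; the ETC matrix and the task/machine counts are invented for the example:

```python
def min_min(etc):
    """Min-min scheduling.

    etc[t][m] is the expected time to compute task t on machine m.
    Repeatedly assign the task whose best completion time is smallest.
    Returns the task-to-machine assignment and final machine ready times.
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines            # when each machine becomes free
    assignment = [None] * n_tasks
    unassigned = set(range(n_tasks))
    while unassigned:
        # best machine (minimum completion time) for each unassigned task
        best = {t: min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
                for t in unassigned}
        # min-min rule: schedule the task with the smallest best completion time
        task = min(unassigned, key=lambda t: ready[best[t]] + etc[t][best[t]])
        machine = best[task]
        ready[machine] += etc[task][machine]
        assignment[task] = machine
        unassigned.remove(task)
    return assignment, ready

# three tasks, two machines (times are made up)
assignment, ready = min_min([[3, 5], [4, 1], [2, 6]])
```

In a hybrid scheme like the one proposed, a metaheuristic such as artificial bee colony would then perturb this initial assignment to rebalance overloaded machines.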

An NFV-based network implements a variety of network functions in software on general-purpose servers, which allows the network operator to select any capacity and location for network functions without physical constraints. For economical NFV-based network design, it is essential to determine where each network function should be located in the network and what its capacity should be. The authors previously proposed an algorithm for virtual routing function allocation in NFV-based networks that minimizes network cost, and provided effective allocation guidelines for virtual routing functions. This paper proposes a deployment algorithm for the virtual firewall function, in addition to the virtual routing function, that minimizes network cost. Our evaluation results reveal the following: (1) Additionally installing a packet filtering function, which is a part of the firewall function, in the sending-side area can reduce wasteful transit bandwidth and routing processing and thereby reduce the network cost. (2) The greater the number of packets filtered by the packet filtering function in the sending-side area, the greater the reduction in network cost. (3) The greater the bandwidth cost relative to the routing function cost, the greater the effect of statistical multiplexing on reducing the network cost. (4) The proposed algorithm comes to within about 95% of the optimal deployment.
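
Finding (1) can be illustrated with a toy cost model. This is not the paper's formulation; the cost function, the filterable fraction, and all parameter values below are invented for the example:

```python
def network_cost(traffic, filtered_frac, bw_cost, route_cost, filter_at_sender):
    """Toy cost model: cost is transit bandwidth plus routing processing
    for whatever crosses the transit link. Filtering at the sending side
    drops the filterable packets before they transit."""
    transit = traffic * (1 - filtered_frac) if filter_at_sender else traffic
    return bw_cost * transit + route_cost * transit

# 30% of packets are filterable; all values are invented
cost_receiver_side = network_cost(100, 0.30, bw_cost=2.0, route_cost=1.0,
                                  filter_at_sender=False)
cost_sender_side = network_cost(100, 0.30, bw_cost=2.0, route_cost=1.0,
                                filter_at_sender=True)
```

Under these assumptions, sending-side filtering cuts both the transit bandwidth term and the routing term, which is the intuition behind finding (1).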

Introduction: In 2012, 14.33 million tons of textile waste was generated in the United States, representing 5.7% of total municipal solid waste. Of this textile waste, 15.7% was recovered and 12.08 million tons were discarded. The apparel giant Nike (Nike Considered Design, 2010) defines waste as any product or material purchased anywhere in the supply chain that does not ultimately end up in the consumer's closet. This definition includes non-product waste (such as packaging), manufacturing waste (such as scrap material in contract factories) and product waste such as samples. The purpose of this project was to focus on alternatives to the waste generated in apparel sampling, Nike # of "salesman samples" 2017, by integrating sound information about materials into an interactive development tool to educate and inform developers about their material selection decisions. By creating efficiencies in the product development phase of the apparel product life-cycle, a significant amount of solid waste would be eliminated, along with a significant decrease in the product's carbon footprint. Over 16,000 materials are used in apparel and footwear products each year; a pair of shoes alone can use more than 30 materials (Nike Considered Design, 2010). The Sustainable Apparel Coalition has developed a Materials Sustainability Index (MSI) that can help companies choose the best fabrications of textiles. We sought to mesh the possibilities of virtual prototyping with MSI data. The objectives of this project were to: (a) conduct operational investigations (virtual prototyping and MSI data); (b) develop a tool that meshes 3D CAD with MSI data; (c) generate virtual apparel using the tool; (d) develop physical prototypes; (e) evaluate the environmental and economic impacts of our products; and (f) develop educational tools. Waste Reduction in the Product Development Phase.
We hypothesized that waste reduction in apparel production can be achieved through increased efficiency in the product development phase by utilizing 3D technology; the benefit of 3D technology is that problems can be solved before physical waste is generated. Virtual prototyping enables designers to quickly visualize how a material will ultimately appear on a product. If it is not the desired look, the developer can try different materials until the perfect match is found. The virtual/visual data helps in transcending the technical aspects of fabric composition and improves tactility. "Virtual prototyping saves time and money and allows the designer to explore and experience hundreds of options before making a final choice" (Bux, 2014). The Materials Sustainability Index (MSI) assesses material impacts in the areas of energy, chemistry, water and waste. The current process of fabric selection in apparel product development does not take into consideration the environmental impacts of the chosen materials. With a lack of access to information regarding material sustainability, developers and other decision makers cannot make informed, sustainable decisions (Nike, 2013). By embedding material sustainability data into the 3D prototyping system, we can educate product creation teams to use environmentally better materials. We hypothesize that by embedding MSI data into an interactive virtual product development tool, a platform is created to both educate and inform more sustainable choices. Process: A team of researchers investigated the potential of virtual prototyping by customizing a virtual fabric library based on measurements of textile mechanical property data from two knit (list fabrics) and two woven constructed fabrics.
Researchers referred to MSI data during fabric selection, developing 3D virtual textile structures, applying texture mapping to virtual garments using Adobe Photoshop software, and embedding them into the material section of Optitex (CAD) software. Concurrent with virtual garment construction, researchers constructed physical garment samples out of selected textile materials, utilizing both knit and woven substrates. Researchers analyzed the virtual apparel prototypes through additional tests, measuring the potential waste reductions of virtual apparel prototype production in contrast to physical apparel prototype production. Based on the findings, students developed recommendations for optimizing apparel product development via virtual prototyping methods. Quantitative Evaluation: The custom fabric library is based on textile material characteristics such as weight, thickness, stiffness/softness and elongation. ASTM standard methods were used to measure these characteristics. The weight of fabrics (g/m2) will be measured using a scale in accordance with the ASTM D3776 method (mass per unit area (weight) of fabric). The thickness of the fabrics will be measured using a portable thickness gauge in accordance with ASTM D1777 (thickness of textile materials). The stiffness/softness data will be measured using a handle-o-meter in accordance with the ASTM D6828 (stiffness of fabric by blade/slot procedure) standard. Textile elongation data will be measured using a tensile tester in accordance with the ASTM D5034 (breaking strength and elongation of textile fabrics) standard. Qualitative Evaluation: Upon IRB approval, researchers developed and implemented an industry expert panel survey. This survey will request feedback on a physical apparel product prototype compared to a virtual apparel product. Results: Outputs of this project include an initial prototype of a hybrid 3D/MSI system.
The initial prototype was used as a proof of concept for Phase 2 funding that would serve to develop an interactive product development "app". Students developed a report on the virtual prototyping research and recommendations on product development solutions using 3D/MSI hybrid models to minimize product waste, as well as a textile mechanical property report, virtual apparel prototypes produced with Optitex (CAD) software, and physical apparel product prototypes. Educational tools will also be developed at the department, university, community and national levels. An article is forthcoming. (Pictures here of virtual and physical garment) Acknowledgement: This project was supported by a grant from the EPA P3 Student Design Competition Phase 1.

Based on a brief overview of the history of ontology and on some philosophical problems of virtual reality, a new approach to virtuality is proposed. To characterize the representational (information, cognitive, cultural, communication) technologies of the Internet age, I suggest that Aristotle’s dualistic ontological system (which distinguishes between actual and potential being) be complemented with a third form of being: virtuality. In the virtual form of being, actuality and potentiality are inseparably intertwined. Virtuality is potentiality considered together with its actualization. In this view, virtuality is reality with a measure, a reality which has no absolute character but a relative nature. This situation can remind us of the emergence of probability in the 17th century: then the concept of certainty, now the concept of reality is reconsidered and relativized. Currently, in the descriptions of the world created by representational technologies, there are two coh...

A study of a VMware ESXi 5.1 server has been carried out to find the optimal set of parameters that indicate the usage of the server's different resources. Feature selection algorithms have been used to extract the optimal set of parameters from the data obtained from the VMware ESXi 5.1 server using the esxtop command. Multiple virtual machines (VMs) run on the server. The K-means algorithm is used for clustering the VMs, and the goodness of each cluster is assessed with the Davies-Bouldin index and the Dunn index respectively. The best cluster is then identified by these indices, and its features constitute the set of optimal parameters.
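
A minimal sketch of the clustering-and-validation step, assuming numeric per-VM feature vectors; the data points, initial centers, and feature choice below are invented for illustration and are not the esxtop parameters the study used:

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Plain k-means with caller-supplied initial centers."""
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2),
                           axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index: lower means more compact, better-separated clusters."""
    k = len(centers)
    scatter = np.array([np.mean(np.linalg.norm(X[labels == j] - centers[j], axis=1))
                        for j in range(k)])
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / np.linalg.norm(centers[i] - centers[j])
                     for j in range(k) if j != i)
    return total / k

# two well-separated groups of fake VM feature vectors (e.g. CPU %, memory %)
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
labels, centers = kmeans(X, centers=np.array([[0.0, 0.0], [10.0, 10.0]]))
db = davies_bouldin(X, labels, centers)
```

With well-separated clusters the index stays well below 1; the study compares such index values across clusterings to pick the best cluster.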

In the 1960s and 70s, videogames emerged as a new form of software technology that allowed people to experience innovative ways of playing games in a virtual context via a controller and a screen. Ever since, the gaming industry has grown exponentially and continues to attract new users and designers across the globe. Although we are not going to go over the history of videogames or games in general, we will explain some concepts of play and games that we consider relevant in the context of this research on videogame-related advertisements and the gamer's body. Throughout this paper, I will use the term "gamer's body" to refer to the body of any individual who is accustomed to playing videogames, and the word "gameplay" to talk about the actual play experiences (the uses of the virtual gamer's body, or of the gamer's body in-game) that videogames are valued for delivering, especially in ads. The reader must keep in mind, however, that there are different degrees of, and reasons why, an individual might identify as a gamer and play or be invested in a specific videogame. Each game has an impact on the overall industry, especially if it has been experienced and reviewed positively by a significant number of gamers. Similarly, each gameplay experience, as well as each gamer's experience derived from a videogame's gameplay via the gamer's body, will influence their next choice when buying a newly released one. Basically, we will argue that the social significance of videogames lies in the use of the gamer's body, which enables us to take on other appearances, in designed virtual worlds full of challenges and roles, that would otherwise be physically impossible to undertake. As a result, this will enable us to draw conclusions about what ideas surround the gamer's body as socially significant both in the virtual worlds of games and in the real, physical-life body.

This title is a published master's thesis whose editorial blurb on the back cover promises a "look beyond the horizon". That promise is meant to be fulfilled through three guiding aspects: first, the influence of virtual worlds on human behavior; second, the potential for addiction and the isolation from the outside world; and third, the new social components made possible by 'entering' virtual worlds.

Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of the services they consume. Service Developers want the Service Providers to ensure, or provide the capability for, dynamic allocation and management of resources in response to changing demand patterns in real time. Ultimately, Service Providers are under pressure to architect their infrastructure to enable real-time end-to-end visibility and dynamic resource management with fine-grained control, to reduce total cost of ownership while also improving agility. What is needed is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the datacenter from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric datace...

This panel focuses on opening up a discussion on the role of art at a time when AI systems are becoming ubiquitous. It embraces the machine flaws, irregularities and errors that artists explore to push the boundaries of their art practice and find new thematic, technological and conceptual grounds for experimentation. The panel explores and compares multiple views, case studies and projects produced in various disciplines, including data-driven city portraits, generative media facades, urban media art, spatiotemporal visualizations, machine-learning narrative experiments in VR, neural networks and art, and the discrepancies or interrelatedness between human creativity and the future of artificial general intelligence (AGI).

In this paper, we introduce a novel framework for virtualizing FPGA resources in the cloud. The proposed framework targets hardware/software architectures that leverage the Virtio paradigm for efficient communication between virtual machines (VMs) and the FPGAs. Furthermore, we present an FPGA overlay that uses reconfigurable hardware tiles and a flexible network-on-chip (NoC) architecture for transparent and optimized allocation of FPGA resources to VMs. The proposed overlay makes it possible to merge several FPGA regions allocated to a VM into a larger area, thus allowing resizing of FPGA's resources on demand. Hardware sandboxes are then provided as a means to enforce domain separation between hardware tasks belonging to different VMs. The framework introduced prevents the overhead of context switches between the virtual machine and host address spaces by using the in-kernel network stack for transferring packets to FPGAs. Experimental results show a 2× to 35× performance increase compared to current state of the art virtualization approaches.

The museum, intended as a mere container aimed only at the conservation of objects, has long since given space to activities of cultural production and promotion that have finally allowed the entry of new forms of communication. Based on dynamic storytelling and multimodal information, they propose to implement an effective dialogue with the anthropological, economic, social and cultural context, activating a set of various activities aimed at conveying complex and constantly evolving messages. From the simple linear and didactic communication mode, we move toward informative, persuasive and educational communication. But the most interesting aspect of this evolution is that it has strongly modified the exhibition logics and the ways in which cultural contents are used by the public. Based on these premises, we present in this article the results of some methodological approaches experimented in different museum installations. They consider different criteria, from passive to active visit, from serious games to immersive enjoyment.

A conference in the conventional form is a very resource-demanding process with considerable environmental impacts. As the host of the 15th International Environmental Informatics Symposium, held in Zurich, October 10-12, 2001, EMPA assessed the effectiveness of different measures to reduce the environmental impact of the conference using the Life Cycle Assessment (LCA) method, including the virtualization of the conference.

Information and Communication Technologies (ICT) have relevant positive and negative impacts on environmental sustainability on various levels: First-order effects such as increasing electronic waste streams; second-order effects such as improved energy-efficiency of production; third-order effects such as a product-to-service shift in consumption or rebound effects in transport. In the simulation study described in this article, all known relevant effects on all three levels were modeled using a System Dynamics approach in combination with scenario techniques and expert consultations. The prospective study for the European Union with a time-horizon until 2020 revealed great potential for ICT-supported energy management and for a structural change towards a less material-intensive economy, but strong rebound effects in the transport sector whenever ICT applications lead to time or cost savings for transport.
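
The tension the study identifies between efficiency gains and rebound effects can be illustrated with a toy discrete-time model. This sketch is far simpler than the System Dynamics model the study used, and every parameter value below is invented:

```python
def energy_trajectory(years, efficiency_gain, rebound,
                      demand=100.0, intensity=1.0):
    """Each year, ICT cuts energy intensity by `efficiency_gain`;
    the resulting cost savings raise demand by `rebound` times the
    relative saving. Returns total energy use per year."""
    energy = [demand * intensity]
    for _ in range(years):
        intensity *= 1 - efficiency_gain
        demand *= 1 + rebound * efficiency_gain
        energy.append(demand * intensity)
    return energy

# with no rebound, 5%/year efficiency gains cut energy use;
# with a strong rebound (backfire), total energy use grows instead
no_rebound = energy_trajectory(20, efficiency_gain=0.05, rebound=0.0)
backfire = energy_trajectory(20, efficiency_gain=0.05, rebound=2.0)
```

This is the qualitative pattern reported for the transport sector: whenever ICT lowers the time or cost of transport, induced demand can offset or outweigh the efficiency gain.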

A hospital needs information technology to maintain all the information in the hospital, including patients' medical records, pharmacy, medicines and medical tools, doctor and nurse schedules, and payment and finance information. This information should be well maintained in one information management system called a Hospital Management System (HMS). An HMS is needed by hospitals to manage an integrated and controlled hospital management system. Information technology provides simpler bureaucracy and transparency, creating a working environment that continually improves effectiveness, efficiency, flexibility and speed, so that productivity and service quality increase. Hospital management systems require a reliable information technology infrastructure with a high level of availability, capable of delivering zero downtime 24/7. This research uses Software-Defined Data Center (SDDC) technology as the information technology infrastructure. It discusses the cost comparison between conventional infrastructure and an SDDC-based cloud, as well as the project management for building and migrating the system from the old (conventional) infrastructure to SDDC-based infrastructure.
Keywords: virtualization, cloud, SDDC, project management

Virtual space creates a new space of flows and a new environment of communication; like a mirror, it reflects all the problems of modern society, and thanks to digital technologies, various forms and strategies of behavior receive a new impetus and new mechanisms of development. The article presents an analysis of virtual aggression, the causes and factors of its formation, and their correlation with reality. Possibilities and ways of reducing destructive behavior and the general degree of aggression in the Russian segment of the Internet are considered.

With increasing interest among mainstream users to run HPC applications, Infrastructure-as-a-Service (IaaS) cloud computing platforms represent a viable alternative to the acquisition and maintenance of expensive hardware, often out of the financial capabilities of such users. Also, one of the critical needs of HPC applications is an efficient, scalable and persistent storage. Unfortunately, storage options proposed by cloud providers are not standardized and typically use a different access model. In this context, the local disks on the compute nodes can be used to save large data sets such as the data generated by Checkpoint-Restart (CR). This local storage offers high throughput and scalability but it needs to be combined with persistency techniques, such as block replication or erasure codes. One of the main challenges that such techniques face is to minimize the overhead of performance and I/O resource utilization (i.e., storage space and bandwidth), while at the same time guar...
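
As an illustration of the persistency techniques mentioned, here is a minimal RAID-5-style XOR parity scheme over equally sized checkpoint blocks. This sketch is not from the paper; real deployments typically use block replication or stronger erasure codes (e.g. Reed-Solomon) that tolerate more than one lost block:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """Append one parity block: the XOR of all equally sized data blocks."""
    parity = data_blocks[0]
    for block in data_blocks[1:]:
        parity = xor_bytes(parity, block)
    return data_blocks + [parity]

def recover(stripe, lost_index):
    """Rebuild any single missing block by XOR-ing the survivors."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    rebuilt = survivors[0]
    for block in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, block)
    return rebuilt

# three checkpoint fragments stored on different local disks, plus parity
stripe = encode([b"ckpt-aaa", b"ckpt-bbb", b"ckpt-ccc"])
```

The trade-off this makes concrete: parity costs only one extra block of storage space and bandwidth per stripe, versus a full copy per replica under block replication.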

Virtualization technology (VT) plays a vital role in the efficient utilization of computer resources in any organization and is the basic concept used in cloud computing. The main advantage of using VT is that less information technology (IT) infrastructure is needed. We show here that the live migration feature provided by the XEN hypervisor is quite useful: it increases the availability of applications and is transparent to the user. When we execute XEN's live migrate command, the VM keeps running on the source host until all memory pages have been transferred to the destination host (pre-copy). We also observe that during and after migration the VM preserves its IP address. The main use of virtualization technology is server consolidation, i.e., running multiple server operating systems (VMs) on a single piece of physical hardware. By running multiple VMs on a single physical machine we can reduce expenditure on IT hardware.

We present a virtualized setup of a Hadoop cluster that provides greater computing capacity with fewer resources, since a virtualized cluster requires fewer physical machines. The master node of the cluster is set up on a physical machine, and slave nodes are set up on virtual machines (VMs) that may share a common physical machine. Hadoop-configured VM images are created by cloning VMs, which facilitates fast addition and deletion of nodes in the cluster without much overhead. Also, we have configured the Hadoop virtualized cluster to use the capacity scheduler instead of the default FIFO scheduler. The capacity scheduler schedules tasks based on the availability of RAM and virtual memory (VMEM) in slave nodes before allocating any job. So instead of queuing up, jobs are efficiently allocated to the VMs based on the memory available. Various configuration parameters of Hadoop are analyzed, and the virtualized cluster is fine-tuned to ensure the best performance and maximum scalability.
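
The memory-aware placement behavior described above can be sketched as follows. This is an illustrative model, not the actual Hadoop capacity scheduler code, and the node names and RAM/VMEM sizes are invented:

```python
def place_tasks(tasks, nodes):
    """Place each task on the first node with enough free RAM and
    virtual memory (VMEM); tasks that fit nowhere wait, instead of
    blocking the whole queue as plain FIFO would."""
    placement, waiting = {}, []
    for name, ram, vmem in tasks:
        for node in nodes:
            if node["ram"] >= ram and node["vmem"] >= vmem:
                node["ram"] -= ram      # reserve the node's memory
                node["vmem"] -= vmem
                placement[name] = node["name"]
                break
        else:
            waiting.append(name)        # no node has capacity right now
    return placement, waiting

# invented slave-node capacities (GB) and per-task memory demands
nodes = [{"name": "slave1", "ram": 4, "vmem": 8},
         {"name": "slave2", "ram": 2, "vmem": 4}]
tasks = [("job-a", 3, 6), ("job-b", 2, 4), ("job-c", 2, 4)]
placement, waiting = place_tasks(tasks, nodes)
```

Here job-c waits rather than being over-committed onto a node without enough memory, which is the behavior the cluster was tuned for.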

There’s a new wind of change in the IT industry today: virtualization. Virtualization is a software technology that lets us run multiple virtual machines with different operating systems on a single physical machine. It is changing almost every aspect of how we manage systems, storage, networks, security, operating systems, and applications. Server consolidation reduces maintenance costs, while high availability and live migration minimize downtime. With virtual machines, recovering from failure and disaster recovery are easier and more affordable than ever.

The virtualization of the network access layer has opened new doors in how we perceive networks. With this virtualization, a regular PC with several network interface cards can be transformed into a switch. PC-based switches are becoming an alternative to off-the-shelf switches since they are cheaper, so it is important to evaluate their performance. In this paper, we present a performance evaluation of two PC-based switches, using Open vSwitch and LiSA, and compare their performance with an off-the-shelf Cisco switch. RTT, throughput, and fairness for UDP are measured for both Ethernet and Fast Ethernet technologies. From this research, we conclude that the Cisco switch presents the best performance, while the two PC-based switches perform similarly. Between Open vSwitch and LiSA, Open vSwitch is the better choice since it has more features and is under active development.
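One widely used way to quantify the fairness of competing UDP flows is Jain's fairness index; whether the paper uses exactly this metric is an assumption on my part. A minimal sketch:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: 1.0 when all flows receive equal
    throughput, approaching 1/n when a single flow dominates."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

# Hypothetical per-flow throughputs in Mbit/s
equal = jain_fairness([94.1, 94.0, 94.2])   # near-perfect sharing
skewed = jain_fairness([180.0, 5.0, 5.0])   # one UDP flow starves the rest
```

An index close to 1 indicates the switch shares capacity evenly among flows; values well below 1 flag a flow that is crowding out the others.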

Virtualization, the technology that allows multiple guests (clients) to reside on a single host (provider) machine and share its resources, is rapidly gaining prominence in the corporate business world of the 21st century. Virtualization is a powerful technology for increasing the efficiency of computing services. However, besides its advantages, its flexibility also raises a number of questions regarding security, especially as it forms the bedrock of the revolutionary technology of cloud computing. Research has shown that cloud computing has the potential to tremendously improve the profit margins of leading 21st-century businesses, but the lack of satisfactory answers to some of these questions, especially from cloud service providers, has greatly limited the rate of adoption of full virtualization by many governments and organizations. This paper aims to elucidate and classify the questions and threats to virtualization, and to suggest possible solutions to them.

Modern workloads often exceed the processing and I/O capabilities provided by resource virtualization, requiring direct access to the physical hardware in order to reduce latency and computing overhead. For computers interconnected in a cluster, access to remote hardware resources often requires facilitation both in hardware and in specialized drivers with virtualization support. This limits the availability of resources to the specific devices and drivers supported by the virtualization technology in use, as well as by the interconnection technology. For PCI Express (PCIe) clusters, we have previously proposed Device Lending as a solution for enabling direct low-latency access to remote devices. The method has extremely low computing overhead and does not require any application- or device-specific distribution mechanisms. Any PCIe device, such as network cards, disks, and GPUs, can easily be shared among the connected hosts. In this work, we have extended our solution with support for a virtual machine (VM) hypervisor. Physical remote devices can be "passed through" to VM guests, enabling direct access to physical resources while retaining the flexibility of virtualization. Additionally, we have implemented multi-device support, enabling shortest-path peer-to-peer transfers between remote devices residing in different hosts. Our experimental results show that multiple remote devices can be used, achieving bandwidth and latency close to native PCIe, without requiring any additional support in device drivers. I/O-intensive workloads run seamlessly using both local and remote resources. With our added VM and multi-device support, Device Lending offers highly customizable configurations of remote devices that can be dynamically reassigned and shared to optimize resource utilization, enabling a flexible composable I/O infrastructure for VMs as well as bare-metal machines.

Software is usually affected by security vulnerabilities of various kinds. Vulnerabilities can be classified into several types in order to ease their impact analysis, providing a common framework of thought. Virtualization products aim to let users abstract away the physical hardware details and provide them with means to install multiple virtual machines. Virtualization users often forget or ignore that this additional software layer exposes them to additional attack vectors and potential vulnerabilities. In this paper, we analyze the known vulnerabilities of VMware, a well-known virtualization product.

Corporate adoption of new server virtualization technologies offered by VMWare, Microsoft, the open source community (Xen) and others raises both new opportunities and new risks for system security. Security issues of virtualization have received some attention in trade periodicals and journals, but a comprehensive and authoritative understanding of virtualized system security under current models of information security has yet to be developed. Such an understanding requires that some fundamental questions be asked: What is the place of virtualized system components in security models as they are currently understood? How should the implementation of virtualization be expected to affect security planning under such models? Our paper presents a first attempt to address these questions. We present an integrated model of system security highlighting the effects of virtualization. We then use this model to analyze security impacts of virtualization within the overall system security co...

Erik Champion, ed., The Phenomenology of Real and Virtual Places. Routledge, 2019.

Perforated shells are often used to study socially mediated behavior in past hunter-gatherer groups. The assumption is that their exclusively symbolic function makes them ideally suited to investigating social networks, dispersal activity, and social interaction. Before making any statements regarding human behavior, however, it needs to be established whether perforated shells from archaeological assemblages were used as personal ornaments. One of the key issues regarding beach-collected marine taxa is whether beached specimens were purposefully collected, e.g., naturally holed specimens preferentially selected, or whether human-made perforations may be identified. Past studies have investigated these questions by comparing datasets from modern death assemblages of shells with archaeological collections and through manufacture and use-wear analysis (e.g., Bouzouggar et al. 2007; Vanhaeren et al. 2006). This study introduces a novel approach using μCT scans of pristine shells to create a three-dimensional model of shell thickness in Tritia (Nassarius) gibbosula. This model is used to map robust and fragile zones on shells of this taxon. The goal of this approach is to identify structurally weak zones that would be prone to natural perforation. Heat maps of shell thickness are then used to investigate perforation locations in modern natural death and archaeological assemblages. Our results show that in natural death assemblages most perforations occur in structurally weak zones, and that their distribution is random. In our archaeological samples, from early Upper Paleolithic contexts at Ksâr 'Akil (Lebanon), we found that perforations in T. gibbosula mainly occur in structurally weak zones, but their distribution within these zones is not random and favors locations facilitating easy suspension (e.g., on cordage). This suggests that at Ksâr 'Akil, T. gibbosula shells were used as beads, and that shells with conveniently located natural perforations were intentionally sought, that humans perforated the shells themselves, or that a combination of both occurred. This, in turn, warrants investigation into the social and behavioral implications of these perforated beads. Further aspects of human involvement with the shells should be explored, for example through use-wear analysis.

Computer vision has been in high demand during the coronavirus pandemic as a way to improve the healthcare sector. Detecting small objects is a difficult task, as it involves both classification and detection on video. Our object detector adds a feature, mask detection, that other object detection models lack. Face mask detection using YOLOv3 performed well: it measures performance in real time on a strong GPU while requiring little computation power. We built a dataset consisting of people both wearing and not wearing face masks, and the model was trained on this dataset. Real-time video can also be used to verify whether a person is wearing a mask. The face mask detection model attained good results, with 96% classification accuracy.
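Detectors in the YOLO family prune overlapping candidate boxes with IoU-based non-maximum suppression before reporting masks. The sketch below shows that post-processing step on hypothetical detections; it is not the authors' code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, iou_thresh=0.5):
    """Keep the highest-scoring box per face; drop overlapping
    duplicates. `detections` holds (box, score, label) tuples."""
    kept = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) < iou_thresh for k in kept):
            kept.append(det)
    return kept

dets = [((10, 10, 50, 50), 0.96, "mask"),
        ((12, 12, 52, 52), 0.80, "mask"),        # duplicate of the first face
        ((100, 100, 140, 140), 0.91, "no_mask")]
final = nms(dets)
```

The duplicate box overlapping the first face is suppressed, leaving one detection per person, which is what makes a per-frame mask/no-mask count meaningful.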

As society advances in terms of information technology, the dependency on cyber-secure systems increases. Likewise, the need to enhance both the quality and relevance of education, training, and professional development for cybersecurity defenders increases proportionately. Without a continued supply of capable cyber defenders who come to the challenge well prepared and continuously advance their skills, the reliability, and thus the value, of information technology systems will be compromised to the point that new information-driven societal structures in commerce, banking, education, infrastructure, and other sectors across the globe would be put at risk.
The body of research presented here progressively builds capacity to support information technology, cybersecurity, and cyber defense training efforts. The work starts by designing infrastructure virtualization methods and problem modeling, then advances to creating and testing tunable models for both technical and social-psychological support capabilities. The initial research was designed to increase the capacity of Regis University in education simulations and cyber competitions. As this was achieved, the goals evolved to include developing effective multi-agency cyber defense exercises for government and private-sector participants.
The research developing hands-on computer laboratory infrastructure presents novel methods for enhancing the delivery of training and cyber competition resources. The multi-method virtualization model describes a strategy for analyzing a broad range of virtualization services for making agile cyber competition, training, and laboratory spaces that are the technical underpinning of the effort. The work adapts the agile development method SCRUM for producing training events with limited resources. Parallel to agile training systems provisioning, the research includes designing a 3D virtual world avatar-based resource to help students develop spatial skills associated with physical security auditing. It consists of a virtual world datacenter and training program.
The second category of contributions includes the presentation of new models for analyzing complex concepts in cybersecurity. These models provide students with tools that allow them to map out newly acquired skills and understanding within a larger context. One model maps how classical security challenges change as digital technologies are introduced using a concept called “bit induction.” The other model maps out how technology can affect one’s sense of identity, and how to manage its disruption.
The third area of contribution includes a rapid form of psychometric feedback, customized quantitative longitudinal capability assessments, and an agile framework that extends the earlier agile method adaptations.
The most recent category of contribution extends the training analysis to the resultant training capabilities, providing new models that describe live operation using operational load analysis to characterize behaviors along an incident timeline.
The results of this research include novel cybersecurity frameworks, analytical methods, and education deployment models along with interpretation and documented implementation to support education institutions in meeting the emerging risks of society. Specific contributions include new models for understanding the disruptiveness of cyberattacks, models for agilely and virtually deploying immersive hands-on laboratory experiences, and interdisciplinary approaches to education that meet new psycho-sociological challenges in cyber defense. These contributions extend the forefront of Cybersecurity education and training in a coordinated way to contribute to the effectiveness and relevance of education solutions as society’s cybersecurity needs evolve.

Cloud computing has emerged in the last few years as a modern computing paradigm. It is becoming quite popular due to cost reduction, scalability, and improvements in the IT industry, and virtualization technology has contributed greatly to this computing platform. Cloud computing allows services and resources to be used online, and it is used to build and maintain the infrastructure necessary for computing resources. For deploying a private cloud in an organization, there are various open-source platforms; OpenStack was retained after a comparative study. The paper describes the development of a private cloud on OpenStack, installed on the CentOS 7 operating system. Different services are installed, such as a web server, webmail server, FTP, and SSH, for communication between users within the private cloud. The results show that hardware utilization of the physical machine increases after deploying virtualization. In a cloud without virtualization, hardware resources are not used efficiently because the number of users is limited, whereas in a cloud with virtualization, hardware resources are utilized to the maximum and serve more concurrent users. The aim of the paper is to present the importance of virtualization and to give the best solution for providing services in a private organization.
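The utilization gain from consolidating services onto virtualized hosts can be illustrated with a first-fit packing sketch. All capacities and demands below are hypothetical:

```python
def consolidate(vm_demands, host_capacity):
    """First-fit consolidation: pack VM CPU demands (in vCPU
    equivalents) onto as few physical hosts as possible, then report
    how busy the powered-on hosts are."""
    hosts = []   # remaining capacity per powered-on host
    for demand in vm_demands:
        for i, free in enumerate(hosts):
            if free >= demand:
                hosts[i] -= demand      # fits on an existing host
                break
        else:
            hosts.append(host_capacity - demand)   # power on another host
    used = sum(host_capacity - free for free in hosts)
    utilization = used / (len(hosts) * host_capacity)
    return len(hosts), utilization

# Ten lightly loaded services that would otherwise each own a server
n_hosts, util = consolidate([2, 3, 1, 4, 2, 2, 3, 1, 2, 2], host_capacity=8)
```

Here ten services fit on three hosts at roughly 92% utilization, versus ten dedicated machines mostly sitting idle, which mirrors the utilization increase the paper reports after virtualization.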

This report was prepared by Working Group 1 (ÇG1) as part of the twelfth-term activities of the TBD Public Data Processing Centers Managers Association (TBD Kamu-BİB). It presents information on the use of virtualization in the information technology sector. The content is aimed at IT staff, IT managers, and anyone interested in information technology.

Cloud computing is now an emerging field in the IT industry as well as in research. Its advancement came about due to the fast-growing use of the internet. Cloud computing is basically on-demand network access to a collection of physical resources that can be provisioned according to the needs of the cloud user under the supervision of a cloud service provider. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced a platform that has brought virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing; it uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance. This model may help to combine self-service capabilities...

OpenNebula stands for openness, excellence, cooperation, and innovation, together with resource pooling and dynamic provisioning. Its goal is to develop a simple, powerful, highly scalable, and adaptable solution for building and managing virtualized data centers and enterprise clouds. It provides cloud builders and integrators with a modular system that can implement a variety of cloud architectures and can interface with any data-center service. It provides cloud developers and users with a choice of cloud and system interfaces, from open cloud to de facto standards, to support the creation of a rich ecosystem of higher-level components. OpenNebula is the open-source industry standard for data-center virtualization, offering a simple but feature-rich and flexible solution to build and manage enterprise clouds and virtualized data centers. OpenNebula is designed to be simple: simple to install, update, and operate for admins, and simple to use for end users. Being focused on simplicity, it integrates with existing technologies whenever possible.

Saving excessive energy consumption is becoming a key concern in networking, on account of the potential economic benefits. These concerns, generally referred to as "green networking", relate to embedding energy awareness in the design, in the devices, and in the protocols of networks. In this work, the author first formulates a more precise definition of the "green" property, and further identifies a few principles that are key enablers of energy-aware networking research. The author then outlines the up-to-date state of the art and offers a list of the relevant work, with a particular focus on green networking.

This article analyzes virtualization systems. As explained, in recent years virtualization has played a fundamental role in the way companies' information technology (IT) is administered; virtualization has now become a key piece in achieving the cloud computing model and thereby increasing the efficiency and flexibility of its resources.

This paper introduces an overview of OpenStack and its components; good familiarity with virtualization is required, since OpenStack problems demand knowledge of virtualization. The paper also introduces a few commands useful for the installation and configuration of OpenStack components, and gives a brief description of image management, instance management, storage management, network management, and more.

Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility by enabling fast deployment of new services and by making it possible to cope with increasing traffic volume and service demand. NFV enables virtualization of network functions, which can be deployed as virtual machines on general-purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and must perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization that guarantees the latency requirements of service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS constraints such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model that includes virtualization overhead. The model is evaluated on a simulated network and provides placement solutions that ensure the required QoS guarantees.
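The structure of the constraint, a per-VNF virtualization overhead plus a per-hop network latency charged against an end-to-end budget, can be sketched with a toy greedy heuristic. This is an illustration of the constraint, not the optimization model in the paper, and every parameter value is hypothetical:

```python
def place_chain(vnf_cpu_demands, nodes, latency_budget,
                hop_latency, virt_overhead):
    """Greedy latency-aware VNF placement: walk the service chain,
    co-locating consecutive VNFs where capacity allows (saving a
    network hop), charging a fixed virtualization overhead per VNF,
    and rejecting chains that exceed the end-to-end latency budget."""
    placement, latency, current = [], 0.0, None
    free = dict(nodes)   # node -> spare CPU
    for demand in vnf_cpu_demands:
        if current is not None and free[current] >= demand:
            node = current                      # same host: no extra hop
        else:
            node = max(free, key=free.get)      # move to the roomiest node
            if free[node] < demand:
                return None, None               # no feasible placement
            if current is not None:
                latency += hop_latency          # traffic crosses the network
        free[node] -= demand
        latency += virt_overhead
        placement.append(node)
        current = node
    return (placement, latency) if latency <= latency_budget else (None, None)

placement, latency = place_chain(
    vnf_cpu_demands=[2, 2, 3], nodes={"nodeA": 4, "nodeB": 4},
    latency_budget=5.0, hop_latency=1.5, virt_overhead=0.4)
```

The first two VNFs share a node and avoid a hop; the third forces one network crossing, and the chain is accepted only because the accumulated latency stays within the budget.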

Besides the benefits, what challenges are government agencies likely to encounter should they decide to adopt cloud computing? What strategies should be deployed to overcome the inhibitors of cloud computing? These are but a few of the questions this paper aims to investigate. Studies have shown that cloud computing has become a strategic direction for many government agencies and is already being deployed in many critical areas of government cyber infrastructure. The benefits and challenges of cloud adoption have heightened academic research interest in recent times. We are, however, uncertain, per the literature, about the factors that hinder successful cloud adoption, especially in the Ghanaian context. We posit that understanding the challenges of cloud adoption and overcoming them must accompany the use of the technology in order to prevent unwanted technical consequences, and even greater problems, in government information management. This study is based on unstructured interviews with selected government agencies in Ghana and is grounded in the technology-organization-environment (TOE) framework. Major inhibiting factors identified include the lack of basic infrastructure for cloud take-off, data security, unreliable internet connectivity, and a general lack of institutional readiness.

Docker provides facilities that are useful for developers and administrators. It is an open platform that can be used for building, distributing, and running applications in a portable, lightweight runtime and packaging tool known as the Docker Engine. It also provides Docker Hub, a cloud service for sharing applications. Costs can be reduced by replacing traditional virtual machines with Docker containers, which dramatically lowers the cost of rebuilding a cloud development platform.

Cloud computing and virtualization are new but indispensable components of computer engineering and information systems curricula for universities and higher education institutions. Learning about these topics is important for students preparing to work in the IT industry. In many companies, information technology operates under tight financial constraints. Virtualization (for example, storage, desktop, and server virtualization) reduces overall IT costs through the consolidation of systems. It also reduces loads and saves energy in terms of the power and cooling infrastructure. It is therefore important to investigate the practical aspects of this topic, both for industry practice and for teaching purposes. This paper describes activities recently undertaken by students at the Eastern Institute of Technology, New Zealand, and concludes with general recommendations for IT educators, software developers, and other IT professionals.