Data Recovery Research Papers - Academia.edu

2025

An analysis of the structure and organization of the institutional and statistical information base of the Unified Municipal Survey (Pesquisa Municipal Unificada, PMU), from the perspective of making its data available. Institutional and statistical information at the municipal level is an important input for public administration and should be disseminated to managers in the government sectors as well as to representatives of organized civil society. Accordingly, the communication processes in information systems and the reference frameworks for organizing and processing information are evaluated. The paper seeks to show that the use of documentary and terminological procedures can improve the ways information is made available, accessed, and appropriated, concluding that these principles should be incorporated into institutional information policies as a means of optimizing the availability and retrieval of information. Keywords: information systems; institutional information; information organization; information retrieval; municipal survey; public administration; information policy.

2025, 2007 Photonics in Switching

We demonstrate an all-optical clock-and-data recovery technology for 10-Gb/s NRZ-DPSK signals. With a relatively simple configuration, the clock-recovery scheme achieves less than 1.5 ps RMS jitter for signals after fiber transmission. The pattern dependence of data recovery is within 0.5 dB at a BER of 10⁻⁹.

2025, COMPEL - The international journal for computation and mathematics in electrical and electronic engineering

Purpose: Bang-bang clock and data recovery (BBCDR) circuits are hard nonlinear systems due to the nonlinearity introduced by the binary phase detector (BPD). The specification of the CDR frequency response is determined by jitter tolerance and jitter transfer. In this paper, the jitter transfer and jitter tolerance of the BBCDR are characterized. Design/methodology/approach: The presented method is sufficient for designing the BBCDR loop parameters. Findings: As a result of the jitter characterization, a new equation is presented to obtain the angular frequency, and the jitter tolerance is expressed in closed form as a function of the loop parameters. The analysis is verified using behavioral simulations in MATLAB, which show good conformance between the analytical equations and the simulation results. Originality/value: The proposed approach offers two advantages compared to conventional design methods. First, this approach does not consider an...
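
Since the abstract refers to behavioral simulation of the bang-bang loop, a minimal Python sketch of such a model is given below. The loop update, phase step, and jitter parameters are illustrative assumptions, not the paper's MATLAB model: the binary phase detector exposes only the sign of the phase error, and the loop slews the recovered clock by a fixed step per bit.

```python
import math

# Minimal first-order bang-bang CDR behavioral model (illustrative only).
# A binary phase detector outputs +/-1 depending on the sign of the phase
# error; the loop adjusts the recovered clock phase by a fixed step.
def bbcdr_peak_error(jitter_amp_ui, jitter_freq_hz, bit_rate=10e9,
                     phase_step_ui=0.005, n_bits=50_000):
    """Return the peak residual phase error (UI) while tracking
    sinusoidal input jitter of the given amplitude and frequency."""
    clk_phase = 0.0          # recovered clock phase, in UI
    peak_err = 0.0
    for n in range(n_bits):
        t = n / bit_rate
        data_phase = jitter_amp_ui * math.sin(2 * math.pi * jitter_freq_hz * t)
        err = data_phase - clk_phase
        # bang-bang phase detector: only the sign of err is observable
        clk_phase += phase_step_ui if err > 0 else -phase_step_ui
        peak_err = max(peak_err, abs(err))
    return peak_err

# Low-frequency jitter is tracked; raising the frequency grows the error.
print(bbcdr_peak_error(jitter_amp_ui=0.3, jitter_freq_hz=1e6))
```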

2025, IEEE Journal of Solid-State Circuits

A low-power, single-channel clock-edge modulated serial link has been fabricated in a standard 0.18-µm CMOS technology. The link core size is 343 × 188 µm² for the transmitter and 173 × 83 µm² for the receiver. The link consumes 3.12 mW when operating at 270 Mb/s with a 1.2-V supply. The proposed link transfers all necessary signals between a graphics processor and a mobile display device over a single-pair channel, thereby greatly saving the power and cost of the existing full-swing parallel lines. The proposed clock edge modulation (CEM) encoding can keep the channel DC-balanced without additional bit overhead. Since a clock edge is present for each bit, an external reference clock is not needed, and the operating frequency can be varied without the possibility of harmonic locking typically found in referenceless clock and data recovery circuits. A simple DLL-based CEM decoder is described that recovers the data with low power consumption and high jitter tolerance. An analysis of the jitter tolerance to sinusoidal jitter is included along with measured data. The measurement results show a jitter tolerance of 20 UIpp with 1-MHz sinusoidal jitter. The use of a push-pull voltage-mode driver further reduces the power consumption.

2025, Third IEEE International Symposium on Dependable, Autonomic and Secure Computing (DASC 2007)

User applications and data in volatile memory are usually lost when an operating system crashes because of errors caused by either hardware or software faults. This is because most operating systems are designed to stop working when some internal errors are detected despite the possibility that user data and applications might still be intact and recoverable. Techniques like exception handling, code reloading, operating system component isolation, micro-rebooting, automatic system service restarts, watchdog timer based recovery and transactional components can be applied to attempt self-healing of an operating system from a wide variety of errors. Fault injection experiments show that these techniques can be used to continue running user applications after transparently recovering the operating system in a large percentage of cases. In cases where transparent recovery is not possible, individual process recovery can be attempted as a last resort.

2025, Procedia Technology

Data carving is a very important topic in digital investigation and computer forensics. Research needs to focus on improving data carving techniques to enable digital investigators to retrieve important data and evidence from damaged or corrupted data resources. This paper is the result of a systematic literature review that answers three main questions in the data carving field. The results fall into four main directions. First, they show the need for realistic data sets for tool testing. Second, they point to the need for object validation under fragmented data storage. Third, they investigate content-based validation and its benefits in the digital investigation field. Finally, they point to a new direction of using semantic validation to reduce false-positive rates.
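
As a minimal illustration of the header/footer carving technique such studies evaluate, the sketch below scans a raw image for JPEG signatures and carves the bytes between them. The filename is a placeholder, and real carvers must additionally handle fragmentation and validate the carved objects, which is exactly the gap the review highlights.

```python
# Minimal header/footer file-carving sketch (illustrative, not a real tool):
# scan a raw disk image for JPEG start/end signatures and carve the bytes
# between them. Fragmented files defeat this naive approach.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image_path, max_size=10 * 1024 * 1024):
    with open(image_path, "rb") as f:
        data = f.read()
    results, start = [], 0
    while (begin := data.find(JPEG_HEADER, start)) != -1:
        end = data.find(JPEG_FOOTER, begin)
        if end == -1 or end - begin > max_size:
            start = begin + 1          # no plausible footer; keep scanning
            continue
        results.append(data[begin:end + len(JPEG_FOOTER)])
        start = end + len(JPEG_FOOTER)
    return results

for i, blob in enumerate(carve_jpegs("disk.img")):   # "disk.img" is a placeholder
    with open(f"carved_{i}.jpg", "wb") as out:
        out.write(blob)
```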

2024, Academic Perspective Procedia

Data storage devices use a specific structure, called a file system, when storing or accessing data. Before data can be stored on a storage device, the device must first be formatted, and the file system is selected during formatting. NTFS, the most commonly used file system, keeps the files on the disk as a list in the MFT (Master File Table) file. Even if a file is deleted, its record in this table is not deleted, and the physical location of the file can be found by examining these MFT records. In this study, computer software was created to restore a disk using the MFT file of the NTFS file system, and the results were examined. National studies mostly compare the data recovery programs on the market with one another; international studies explain the NTFS and MFT concepts but do not examine the data recovery method using MFT records in detail.
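
A minimal sketch of the underlying idea, scanning an extracted $MFT for records whose in-use flag is cleared, is shown below. The field offsets follow published NTFS documentation, the path is a placeholder, and this is not the study's software; a real recovery tool would go on to parse the $FILE_NAME and $DATA attributes of each record to locate the file's clusters.

```python
import struct

MFT_RECORD_SIZE = 1024  # typical NTFS MFT record size

def scan_deleted_records(mft_path):
    """Yield record numbers of MFT entries marked as deleted
    (signature 'FILE' present but the in-use flag cleared)."""
    with open(mft_path, "rb") as f:
        record_no = 0
        while chunk := f.read(MFT_RECORD_SIZE):
            if len(chunk) >= 24 and chunk[:4] == b"FILE":
                # flags at offset 0x16: bit 0 = in use, bit 1 = directory
                flags = struct.unpack_from("<H", chunk, 0x16)[0]
                if not flags & 0x0001:
                    yield record_no
            record_no += 1

for rec in scan_deleted_records("MFT_dump.bin"):  # placeholder path
    print(f"deleted entry at MFT record {rec}")
```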

2024, Journal of Mathematical Imaging and Vision

Recently, compressed sensing techniques in combination with both wavelet and directional representation systems have been very effectively applied to the problem of image inpainting. However, a mathematical analysis of these techniques which reveals the underlying geometrical content is completely missing. In this paper, we provide the first comprehensive analysis in the continuum domain utilizing the novel concept of clustered sparsity, which besides leading to asymptotic error bounds also makes the superior behavior of directional representation systems over wavelets precise. First, we propose an abstract model for problems of data recovery and derive error bounds for two different recovery schemes, namely ℓ1 minimization and thresholding. Second, we set up a particular microlocal model for an image governed by edges inspired by seismic data, as well as a particular mask to model the missing data, namely a linear singularity masked by a horizontal strip. Applying the abstract estimate in the case of wavelets and of shearlets, we prove that, provided the size of the missing part is asymptotic to the size of the analyzing functions, asymptotically precise inpainting can be obtained for this model. Finally, we show that shearlets can fill strictly larger gaps than wavelets in this model.
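
For reference, the ℓ1-minimization recovery scheme analyzed in the abstract is conventionally posed as the following synthesis-sparsity program. The notation here is a standard assumption, not taken from the paper: P_K projects onto the known part of the image domain, Φ is the synthesis operator of the representation system (wavelets or shearlets), and x is the observed, masked image.

```latex
\begin{equation*}
  \hat{c} \;=\; \underset{c}{\arg\min}\; \|c\|_{1}
  \quad \text{subject to} \quad P_{K}\,\Phi c \;=\; P_{K}\,x,
  \qquad \hat{x} \;=\; \Phi\,\hat{c}.
\end{equation*}
```

Thresholding, the second scheme, instead keeps only the analysis coefficients above a cutoff before re-synthesizing the image.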

2024, Journal of Lightwave Technology

This paper compares two burst-mode clock and data recovery (BM-CDR) techniques suitable for bursty upstream data transmission, namely a gated voltage-controlled oscillator (GVCO) and an oversampling clock phase alignment (CPA). Numerical models were derived with timing jitter and duty cycle distortion (DCD) present in the received data. The performance of the two techniques is analyzed and compared, especially in an optically amplified long-reach high-split PON (LR-PON) system. It is shown that an oversampling CPA exhibits a larger DCD tolerance and better jitter rejection than a GVCO without line coding, resulting in high network efficiency while keeping fast bit synchronisation. This makes the oversampling CPA technique the best option for the proposed network.

2024

Data mining in databases is part of the interdisciplinary field of knowledge discovery, used for extracting patterns and relationships amongst the data sets from a data source. This paper focuses on mining data from a database. The study further identifies how data mining techniques can be used to support digital forensic investigation in a digital crime case. Digital forensic models are reviewed, concepts mapped onto experimental cases and tested on a Pentium® Core™ 2 Duo CPU, 1.89 GHz, with Windows XP Professional OS. The findings supported the benefits of integrating data mining principles in extracting data for digital evidence.

2024, Sensors

In this paper, we investigate a scenario in which protected and unprotected services coexist in an elastic optical network under dynamic traffic. In the investigated scenario, unprotected services can reuse the reserved idle bandwidth to provide protection to the protected services. Under this scenario, we propose a new heuristic algorithm that enables such reuse, and we define and introduce a new assignment problem in elastic optical networks, named the Transmission Spectrum Assignment (T-SA) problem. In this paper, we consider a scenario in which services may be routed using the multipath routing approach. Additionally, protection using bandwidth squeezing is also considered. We assess our proposal through simulations on three different network topologies and compare our proposal against the classical protection approach, in which bandwidth reuse is not allowed. For the simulated range of network loads, the maximum (minimum) blocking probability reduction obtained by our proposal is approximately 48% (10%) in the European topology, 46% (7%) in the NSFNET topology, and 32% (6%) in the German topology.

2024, Computer Networks

In survivable elastic optical networks, multi-path protection combined with traffic squeezing has gained attention. Link-disjoint multipath routing (LD-MPR) and bandwidth squeezing protection (BSP), when applied to the routing, modulation level and spectrum assignment problem, are efficient strategies to address the excessive bandwidth demanded by protected services, spectral fragmentation and link-load balancing. LD-MPR enables the division of the service transmission bit rate into independent flows, whereas BSP tolerates traffic reduction under link failure. An open issue in the literature is how to efficiently divide the demanded traffic among the link-disjoint routes under dynamic traffic and BSP while complying with the service level agreement (SLA). We propose in this paper a multi-objective-optimization genetic algorithm that defines how the service transmission bit rate should be partitioned among the candidate link-disjoint routes. A customized partitioning definition is made for each source–destination node pair in the network, aiming at the simultaneous minimization of the network blocking probability and the average squeezed transmission bit rate experienced by the services during single-link failure. We also propose a fixed-alternate routing using groups of disjoint paths (FARgdp). Complex dynamic-traffic network scenarios that simultaneously consider BSP, LD-MPR and FARgdp are addressed.

2024, International Journal of Linguistics and Computational Applications

Data analysis and investigation of digital storage devices using digital forensics is a well-defined route toward effective data backup strategies, as well as a key aspect of data privacy and confidentiality. Digital storage devices such as hard drives (internal or external), USB drives, and floppy disks provide a good medium for the storage and use of data and information, so the main task is to retrieve stolen or lost data from these devices. Digital forensics provides the framework for this data extraction in a systematic and effective manner. A damaged digital storage device may be in various conditions, for example burnt, wet, or with physically damaged parts, and all of these conditions play a significant role in data extraction. Since data is the most important asset of any organisation, compromising its security and confidentiality can be a devastating choice for the future. Simply spending thousands and millions of dollars on finding vulnerabilities (large-scale or small-scale) is not a solution for being secure; there has to be a proper and effective choice of methods and tools.

2024, 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA)

There are many risks in moving data into public storage environments, along with an increasing threat of large-scale data leakage. Secret sharing schemes have been proposed as a keyless and resilient mechanism to mitigate this, but scaling across large-scale data infrastructures has remained the obstacle to using secret sharing in big data storage and retrieval. This work applies secret sharing methods, as used in cryptography, to create robust and secure data storage and retrieval in conjunction with data fragmentation. It outlines two different methods of distributing data equally to storage locations, as well as recovering them in a manner that ensures consistent data availability irrespective of file size and type. Our experiments consist of two different methods: data shares and key shares. Using our experimental results, we were able to validate previous work on the effects of the threshold on file recovery. The results also revealed the varying effects of share writing to and retrieval from storage locations other than computer memory. The implication is that increasing the fragment size at varying file and threshold sizes adds overhead to share creation rather than to file recovery, underscoring the importance of choosing a varying fragment size as file size increases.
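
For concreteness, a minimal Shamir (k, n) threshold sharing sketch over a prime field is shown below. The paper's actual scheme, field, and parameters are not specified in the abstract, so everything here is illustrative: n shares are produced, and any k of them reconstruct the secret, while fewer reveal nothing.

```python
import random

# Minimal Shamir (k, n) secret sharing over a prime field -- an
# illustrative sketch of the keyless splitting idea, not the paper's scheme.
PRIME = 2**127 - 1  # Mersenne prime, large enough for small secrets

def split(secret, n, k):
    """Create n shares, any k of which recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```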

2024, Springer eBooks

This paper describes the Advanced Forensic Format (AFF), which is designed as an alternative to current proprietary disk image formats. AFF offers two significant benefits. First, it is more flexible because it allows extensive metadata to be stored with images. Second, AFF images consume less disk space than images in other formats (e.g., EnCase images). This paper also describes the Advanced Disk Imager, a new program for acquiring disk images that compares favorably with existing alternatives.

2024, 2009 2nd Microsystems and Nanoelectronics Research Conference

This paper presents a system-level jitter tolerance estimation of a 3X oversampling clock and data recovery (CDR) circuit used in high-speed serial data communication receivers. It is critical to know the amount of jitter that can be tolerated by the CDR in order to recover the data with satisfactory bit error ratio (BER) performance. The jitter tolerance of the CDR is estimated by an event-driven simulation model developed in Matlab. The theoretical jitter tolerance value is derived analytically for different jitter frequencies. The simulated results show a very close match to the theoretical values. We also compared the simulation time for the event-driven and the conventional fixed-time-step models.
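
A compact sketch of the kind of jitter tolerance search such a model performs is given below. The ±0.33 UI decision window, tracking step, and loop details are assumptions for illustration (roughly matching a 3X oversampler's phase spacing), not the paper's Matlab model: at each bit the model checks whether the sinusoidally jittered data stays within the tracked sampling window, and a binary search finds the largest tolerated amplitude per frequency.

```python
import math

# Event-driven-style sketch of sinusoidal jitter tolerance estimation for
# an oversampling CDR (window, step and loop behavior are assumptions).
def tolerates(amp_ui, freq_hz, bit_rate, margin_ui=0.33,
              step_ui=0.01, n_bits=20_000):
    phase = 0.0  # sampling phase selected by the CDR, in UI
    for n in range(n_bits):
        jitter = amp_ui * math.sin(2 * math.pi * freq_hz * n / bit_rate)
        err = jitter - phase
        if abs(err) > margin_ui:          # sample left the safe window
            return False
        phase += step_ui if err > 0 else -step_ui   # simple phase tracking
    return True

def jitter_tolerance(freq_hz, bit_rate=5e9):
    """Binary-search the largest tolerated sinusoidal jitter amplitude (UI)."""
    lo, hi = 0.0, 100.0
    for _ in range(25):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if tolerates(mid, freq_hz, bit_rate) else (lo, mid)
    return lo

# Tolerance is large at low jitter frequencies (the loop tracks the jitter)
# and flattens toward the static margin at high frequencies.
for f in (1e6, 1e7, 1e8):
    print(f"{f:.0e} Hz: {jitter_tolerance(f):.2f} UI")
```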

2024, 2009 NORCHIP

This paper presents the design and implementation of a fully differential 3X oversampling clock and data recovery (CDR) circuit for a high-speed serial data link. The CDR is capable of operating at any speed from 2 to 5 Gb/s. The architecture of the CDR replaces the analog VCO and loop filter used in analog PLL-based CDRs with digital circuits. The CDR uses a digital threshold decision technique to improve the jitter tolerance performance. System-level simulation shows that the CDR has a high-frequency jitter tolerance of 0.67 UI and an acquisition time of 8 baud periods. The CDR is implemented in 65 nm CMOS process technology. The post-layout simulation is performed for 2⁷−1 PRBS data. The CDR consumes 39 mW from a 1.1 V power supply at 5 Gb/s. The core CDR circuit occupies an area of 0.013 mm².
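
The recovered-bit decision in a 3X oversampler is commonly a 2-of-3 majority vote over the three samples spanning each bit. The toy sketch below shows that baseline scheme; it is not necessarily the exact logic of the paper's digital threshold technique.

```python
# Illustrative 3x-oversampling data recovery by majority vote (a common
# blind-oversampling baseline, not necessarily the paper's decision logic).
def recover_bits(samples):
    """samples: flat list of 0/1 samples taken at 3x the bit rate.
    Each consecutive triple covers one bit; vote 2-of-3 per bit."""
    bits = []
    for i in range(0, len(samples) - 2, 3):
        triple = samples[i:i + 3]
        bits.append(1 if sum(triple) >= 2 else 0)
    return bits

# Example: a noisy middle sample is outvoted by its neighbors.
print(recover_bits([1, 1, 1,  0, 1, 0,  0, 0, 0]))  # -> [1, 0, 0]
```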

2024, Proceedings of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures

2024

A prominent type of cloud service is Infrastructure-as-a-Service (IaaS), which delivers, on demand, computing resources in the form of virtual machines (VMs) satisfying user needs. In such systems, penalties may be applied if the quality level defined in the service level agreement (SLA) is not satisfied. Therefore, high availability is a critical requirement of these systems. A strategy to protect such systems from natural or man-made disasters is to use multiple data centers in different geographical locations to provide the service. In such systems, redundancy mechanisms can be adopted to receive copies of VM images during data center operation. Hence, whenever a disaster makes one data center unavailable, the VMs can be re-instantiated in another operational data center. Modeling techniques with a strong mathematical foundation, such as Stochastic Petri Nets (SPN), can be adopted to evaluate survivability in these complex infrastructures. This work presents SPN models to evaluate survivability metrics in IaaS systems deployed across geographically distributed data centers, taking into account disaster occurrences. Using the proposed models, IaaS providers can evaluate the impact of the VM transmission time and the VM backup period on survivability metrics. A case study is provided to illustrate the effectiveness of the proposed work.

2024, Throwing the Gutmann Algorithm in the Trash (Wyrzucamy algorytm Gutmanna do kosza)

The effectiveness of data overwriting as a method of destroying information still raises doubts and continues to be questioned. Nearly 30 years ago, Peter Gutmann attempted to dispel these doubts by proposing his 35-pass algorithm...

2024, Data Destruction: Regulations versus Actual Effectiveness (Niszczenie danych - regulacje, a rzeczywista skuteczność)

There are many norms and standards describing the destruction of redundant or obsolete information and the removal of data from media handed over to other parties or withdrawn from service. These regulations differ in many important details and in their recommendations on how information should be removed. Their analysis shows that the recommendations often bear only a very loose relationship to the technical knowledge of the physics of information recording and the operating principles of data storage media.

2024, Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering

Disk storage continues to be an important medium for data recording in software engineering, and recovering data from a failed storage disk can be expensive and time-consuming. Unfortunately, while physical damage instances are well documented, existing studies of data loss are limited, often only predicting times between failures. We present an empirical measurement of patterns of heat damage on indicative, low-cost commodity hard drives. Because damaged hard drives require many hours to read, we propose an efficient, accurate sampling algorithm. Using our empirical measurements, we develop LOGI, a formal mathematical model that, on average, predicts sector damage with precision, recall, F-measure, and accuracy values of over 0.95. We also present a case study on the usage of LOGI and discuss its implications for file carver software. We hope that this model is used by other researchers to simulate damage and bootstrap further study of disk failures, helping engineers make informed decisions about data storage for software systems.
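
The paper's sampling algorithm is not reproduced in the abstract. The sketch below shows only the generic idea it builds on, estimating the damaged fraction from a random sample of sectors instead of a full multi-hour read; the function names are hypothetical and the confidence interval is the textbook normal approximation.

```python
import random

# Hypothetical stand-in for sampling a damaged drive: read a random subset
# of sectors and estimate the damaged fraction with a ~95% confidence
# interval. This is NOT the paper's algorithm, just the basic idea.
def estimate_damage(read_sector, n_sectors, sample_size=2_000, seed=0):
    rng = random.Random(seed)
    sample = rng.sample(range(n_sectors), sample_size)
    bad = sum(1 for s in sample if not read_sector(s))
    p = bad / sample_size
    stderr = (p * (1 - p) / sample_size) ** 0.5
    return p, (p - 1.96 * stderr, p + 1.96 * stderr)

# Demo with a simulated drive where ~3% of sector reads fail.
print(estimate_damage(lambda s: random.random() > 0.03, n_sectors=10**7))
```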

2024, APN Science Bulletin

This project's goal has been to work with Southeast Asian institutions, archives, agencies and National Meteorological Services (NMS) to build capacities for improving and extending historical instrumental, documentary and palaeo databases of Southeast Asian weather and climate. The work plan comprised research into and cataloging of sources of historical weather observations in archives and repositories globally, and the recovery (imaging and/or digitization) of the same where necessary and appropriate. These long datasets will contribute to the generation of high-quality, high-resolution historical dynamical weather reconstructions (reanalyses). These will allow scientists and policymakers across the region to address weather/climate extremes, impacts and risks in ways and over time spans not previously possible. The project launched officially at a workshop involving scientists, historians, and archivists, brought together with the aim of opening a multidisciplinary dialogue on historical records and their modern-day application. The project has been successful in establishing a regional arm of the international Atmospheric Circulation Reconstructions over the Earth (ACRE) initiative: ACRE SE Asia. ACRE SE Asia is unique, as no other body exists in the region with the same remit or aims.

2024

A 1.25-Gb/s Burst-Mode Clock and Data Recovery Circuit Using the Jitter Reduction Technique

2024, International journal of engineering & technology

Recently, technologies have been growing fast, so they have become both the source and the sink of data. Data is generated in large volumes, introducing the concepts of structured and unstructured data and giving rise to "Big Data", which needs large memory for storage. There are two possible solutions: either increase local storage or use cloud storage. The cloud makes data available to the user anytime and anywhere, and allows users to store their data virtually without investing much. However, keeping this data in the cloud raises concerns about data security and recovery, since an untrusted or unauthorized user may remotely modify, delete or replace the data. Therefore, different models have been proposed for data integrity checking and proof of retrievability. This paper presents a literature review of various techniques for data integrity, data recovery and proof of retrievability.
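
As a minimal illustration of the data-integrity-check idea such surveys cover, the sketch below uses keyed hashes in a challenge-response exchange: the owner keeps per-block tags locally and can later detect whether the cloud returns tampered blocks. Real proof-of-retrievability schemes are more elaborate (homomorphic tags, erasure coding), so this shows only the basic principle.

```python
import hashlib, hmac, os

# Minimal challenge-response integrity check (illustrative; real
# proof-of-retrievability schemes build on top of this idea).
def make_tag(key, block):
    return hmac.new(key, block, hashlib.sha256).digest()

class Owner:
    def __init__(self, blocks):
        self.key = os.urandom(32)
        self.tags = [make_tag(self.key, b) for b in blocks]  # kept locally

    def verify(self, index, block_from_cloud):
        """Challenge: ask the cloud for block `index` and check its tag."""
        expected = self.tags[index]
        return hmac.compare_digest(expected, make_tag(self.key, block_from_cloud))

blocks = [b"block-0 data", b"block-1 data"]
owner = Owner(blocks)
print(owner.verify(0, blocks[0]))       # True: block intact
print(owner.verify(1, b"tampered!!"))   # False: modification detected
```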

2024

Context: Digital data is being stored in large quantities in the cloud, requiring data backup and recovery services. Due to many factors, such as disasters and other disruptive events, the risk of data loss is huge. Therefore, backup and data recovery are essential and effective in improving system availability and maintaining business continuity. Nevertheless, the process of keeping business uninterrupted faces many challenges regarding data security, integrity and failure prediction. Objective: This paper has the following goals: systematically analyzing the currently published research and presenting the most common factors leading to the need for a disaster recovery and backup plan; investigating and identifying the adopted solutions and techniques to prevent data loss; and lastly, investigating the influence data recovery and backup have on business continuity and identifying the privacy and security issues of the disaster recovery process. Method: A systematic mapping study was conducted, in which 45 papers, dated from 2010 to 2020, were evaluated. Results: A set of 45 papers was selected from an initial search of 250 papers, including 10 papers found by snowball sampling, following the references of papers of interest. The results are categorized based on the relevant research questions, such as causes of disasters, data loss, business continuity, and security and privacy issues. Conclusion: An overview of the topic is presented by investigating and identifying the following features: challenges, issues, solutions, techniques, factors, and effects regarding the backup and recovery process.

2024, Advances in Multidisciplinary and Scientific Research Journal

According to a survey, 90% of all information never leaves the digital form. The fundamental importance of data preservation is quite clear, as a small inattentive move could lead to the loss of pertinent data that could inform a major decision. The majority of information these days is created, modified, and consumed entirely in digital form; most spreadsheets, digital snapshots and databases will never make it onto paper. It is common to lose data due to storage errors or low performance. If this happens, the user or owner may want to recover the files quickly, if the data is really important and if it is possible to retrieve it. In this article, we look at the ways data can be lost through deletion, at recovery procedures and mechanisms, and at recommendations to secure data against such events.

2024, The Journal of Supercomputing

RAID has long been established as an effective way to provide highly reliable as well as high-performance disk subsystems. However, reliability in RAID systems comes at the cost of extra disks. In this paper, we describe a mechanism that we have termed RAID0.5 that enables striped disks with very high data reliability but low disk cost. We take advantage of the fact that most disk systems use offline backup systems for disaster recovery. With the use of these offline backup systems, the disk system needs to replicate only the data written since the last backup, thus drastically reducing the storage space requirement. Though RAID0.5 has the same data loss characteristics as traditional mirroring, the lower storage space comes at the cost of lower availability. Thus, RAID0.5 is a tradeoff between lower disk cost and lower availability while still preserving very high data reliability. We present analytical reliability models and experimental results that demonstrate the enhanced reliability and performance of the proposed RAID0.5 system.
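
A conceptual sketch of the mechanism the abstract describes, replicating only blocks written since the last offline backup, is shown below. The interfaces and bookkeeping are assumptions for illustration, not the paper's design.

```python
# Conceptual sketch of the RAID0.5 idea from the abstract: stripe data
# normally, but replicate only blocks written since the last offline
# backup (all details here are assumptions, not the paper's design).
class Raid05:
    def __init__(self, disks, replica):
        self.disks = disks            # striped data disks (dicts as stand-ins)
        self.replica = replica        # small replica area
        self.dirty = set()            # blocks written since the last backup

    def write(self, block_no, data):
        self.disks[block_no % len(self.disks)][block_no] = data
        self.dirty.add(block_no)      # replicate post-backup writes only
        self.replica[block_no] = data

    def on_backup_completed(self):
        """Once the offline backup holds everything, reclaim replica space."""
        self.dirty.clear()
        self.replica.clear()

vol = Raid05(disks=[{}, {}, {}], replica={})
vol.write(7, b"new data")             # protected by the replica until...
vol.on_backup_completed()             # ...the next backup completes
```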

2024, Communications earth & environment

Following a disaster, crucial decisions about recovery resources often prioritize immediate damage, partly due to a lack of detailed information on who will struggle to recover in the long term. Here, we develop a data-driven approach to provide rapid estimates of non-recovery, or areas with the potential to fall behind during recovery, by relating surveyed data on recovery progress with data that would be readily available in most countries. We demonstrate this approach for one dimension of recovery, housing reconstruction, analyzing data collected five years after the 2015 Nepal earthquake to identify a range of ongoing social and environmental vulnerabilities related to non-recovery in Nepal. If such information had been available in 2015, it would have exposed regional differences in recovery potential due to these vulnerabilities. More generally, moving beyond damage data by estimating non-recovery focuses attention on those most vulnerable sooner after a disaster, to better support holistic and nuanced decisions.

2023, Journal of Computer Science and Technology

In the web context, a self-healing database system that can automatically locate and undo the set of transactions corrupted by malicious attacks is urgently needed. The metrics of survivability and availability require that the database provide continuous service during the recovery period, which is referred to as dynamic recovery. In this paper, we present a Fine-Grained Transaction Log to serve damage assessment. The log records all the data items of the read-only and update-involved operations of the committed transactions, and even extracts the data items read by subqueries in the SQL statements. Based on the log, we propose a Dynamic Recovery System to implement the damage repair. The system retains the execution results of blind-write transactions and gives a complete solution to the recovery conflicts caused by forward recovery. Moreover, a confinement mechanism is imposed on the data under repair to prevent further damage propagation while data recovery is in progress. The integrity measurement and performance evaluation in our experiments show that the system is reliable and highly efficient.

2023, International Journal of u- and e- Service, Science and Technology

In recent years, traffic requirements with variable bit rates and Quality of Service, driven by online high-definition video streaming, movie downloads and file transfers, are being satisfied with Elastic Optical Networks (EON). Highly efficient spectrum utilization is required to satisfy the demand with minimum blocking probability. The network is unstable, and a small failure can lead to tremendous loss of data; hence survivability is mandatory. A Slot Capacity based Spectrum Allocation with Survivability (SCSAS) algorithm is proposed, which aims at efficient spectrum utilization and the configuration of protection paths. Duty Cycle Division Multiplexing (DCDM) and a hop-based modulation technique are also introduced to manage the spectrum, which further reduces spectrum wastage. The formation of spectrum fragments on the termination of existing requests increases the blocking probability. In order to improve spectrum utilization, a spectrum defragmentation technique is introduced in the proposed work. Dynamic configuration of backup paths provides survivability. Results show an increase in free spectrum for dynamic requests, less spectrum wastage, and reduced blocking probability compared with traditional algorithms such as fixed, flexible and random spectrum allocation.

2023, International Journal of Networked and Distributed Computing

Radio Frequency Identification (RFID) is an emerging wireless object identification technology with many potential applications, such as supply chain management, personnel tracking and healthcare. However, security vulnerabilities of the RFID system have been a serious concern for its wide adoption in many applications. Although there is a lot of work on providing privacy and anonymity, little focus has been given to ensuring the confidentiality and integrity of RFID tag data. To this end, we propose a lightweight hybrid approach based on steganography and watermarking to ensure data confidentiality, linkability resistance and integrity of RFID tag data. The proposed technique is capable of recovering and restoring tampered RFID tag data. It has been validated and tested on EPC Class 1 Gen 2 tags.

2023, IOSR Journal of Electronics and Communication Engineering

The Motion Estimation Computing Array (MECA) is used in video encoding applications to calculate the best motion match between the current frame and reference frames. In decoding applications, the MECA occupies a large area and incurs a timing penalty. By introducing a built-in self-test (BIST) technique, testability is added with only a small area overhead. In this paper, the BIST technique is included in the MECA, and each processing element in the MECA is tested using residue codes: the quotient and remainder are cross-checked between the processing element and the test code generator. Further, the residue code in the test code generator is replaced with a Boolean-logic adder in order to reduce the area of the circuit. With the BIST concept, testing is done internally, without connecting external test equipment, so the required area is also reduced. In this work, errors in the MECA are detected, and the concept of diagnosis, i.e., self-detect and self-repair, is introduced.
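
Residue-code checking verifies an arithmetic unit by comparing the residue of its output with the residue predicted from its inputs. A small sketch follows; the modulus is an assumption, chosen as the common mod-3 case, and the faulty adder is a toy stand-in for a hardware defect.

```python
# Illustrative residue-code check (mod-3 residues are a common choice in
# such BIST schemes; the modulus and wiring here are assumptions).
MOD = 3

def residue(x):
    return x % MOD

def check_adder(a, b, adder):
    """Verify an adder's output against the residue-code prediction:
    residue(a + b) must equal (residue(a) + residue(b)) mod MOD."""
    total = adder(a, b)
    return residue(total) == (residue(a) + residue(b)) % MOD

faulty = lambda a, b: (a + b) ^ 0x4   # a stuck bit models a hardware fault
print(check_adder(10, 5, lambda a, b: a + b))  # True: fault-free
print(check_adder(10, 5, faulty))              # False: fault detected
```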

2023, Reliability Engineering & System Safety

2023, International Journal of Computer Applications

Computer forensics can be defined as the acquisition of computer storage media so that data can be used as evidence in court. Traditionally, the analysis of sources of digital evidence is done by examining the artefacts and the metadata of those artefacts, authenticating the gathered information and sequencing it in the order in which events occurred. Analyzing the information acquired by a forensic investigator in the traditional way is a cumbersome task, but it can be eased if all the related artefacts are grouped together on the basis of the metadata they contain. This paper is mainly focused on metadata-based association of digital evidence, which can simplify the task of the forensic investigator and can also help reduce human intervention, making the process automatic. The main objective of this paper is to study the working principles of different existing forensic tools and to compare them on the basis of various parameters, such as their capability for accessing digital evidence, the sources they can examine, their metadata parsing capability, and whether they can group different artefacts present in the same or different investigation sources on the basis of the metadata they contain.
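
As a toy illustration of metadata-based association, the sketch below groups files by owner and modification day so that related artefacts surface together. The evidence path and the grouping keys are hypothetical; forensic tools automate far richer associations across heterogeneous sources.

```python
import time
from collections import defaultdict
from pathlib import Path

# Toy metadata-based association: group artefacts from an evidence
# directory by (owner uid, modification day). Keys are illustrative.
def group_by_metadata(evidence_dir):
    groups = defaultdict(list)
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file():
            st = path.stat()
            day = time.strftime("%Y-%m-%d", time.localtime(st.st_mtime))
            groups[(st.st_uid, day)].append(str(path))
    return groups

for (uid, day), files in group_by_metadata("/evidence/case42").items():  # placeholder path
    print(f"owner uid={uid}, modified {day}: {len(files)} artefacts")
```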

2023

To achieve high bit rates, link designers are using more sophisticated communication techniques, often turning to 4PAM transmission or decision-feedback equalization (DFE). Interestingly, with only minor modification, the same hardware needed to implement a 4PAM system can be used to implement a loop-unrolled single-tap DFE receiver. To get the maximum performance from either technique, the link has to be tuned to match the specific channel it is driving. Adaptive equalization using data-based update filtering allows continuous updates while minimizing the required sampler front-end hardware, and it significantly reduces the cost of implementation in multi-level signaling schemes. A transceiver chip was designed and fabricated in a 0.13-µm CMOS process to investigate dual-mode operation and the modifications of the standard adaptive algorithms necessary to operate in high-speed link environments.
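
Adaptive equalization of this kind updates the feedback tap from decisions and errors. The sketch below shows a standard sign-sign LMS update for a single-tap DFE, a textbook rule assumed here for illustration, not necessarily the chip's exact algorithm; with a channel contributing 0.25 UI of post-cursor ISI, the tap converges near 0.25.

```python
import random

# Sketch of sign-sign LMS adaptation for a single-tap DFE (a textbook
# update rule assumed here; not necessarily the chip's exact algorithm).
def adapt_dfe_tap(samples, bits, tap=0.0, mu=1e-3):
    """samples: received analog values; bits: decided symbols (+1/-1).
    Returns the adapted feedback-tap weight."""
    sign = lambda v: 1.0 if v >= 0 else -1.0
    for n in range(1, len(samples)):
        # subtract the post-cursor ISI contributed by the previous bit
        corrected = samples[n] - tap * bits[n - 1]
        error = corrected - bits[n]                    # distance from ideal level
        tap += mu * sign(error) * sign(bits[n - 1])    # sign-sign update
    return tap

# Channel with 0.25 post-cursor ISI: the tap should converge near 0.25.
bits = [random.choice([-1.0, 1.0]) for _ in range(20_000)]
rx = [bits[n] + 0.25 * bits[n - 1] for n in range(1, len(bits))]
print(adapt_dfe_tap(rx, bits[1:]))
```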

2023, International Journal of Science, Technology & Management

Data plays an important role, so data recovery and data security should be prioritized. Computer users often lose their data due to personal errors or attacks. Digital forensics has a sub-field called computer forensics, which has an important role in the process of secure data recovery. The USB flash disk, as the most widely used storage medium, has a probability of data loss. It is necessary to perform computer forensic actions, especially secure data recovery, on it so that data can be restored securely to other media while being protected by root privileges. In this research, computer forensic testing was performed on 2,781 files of various data formats that were erased from a 32 GB USB flash drive, using Photorec. The medium for collecting the data recovery results was an Intel computer with 2 GB RAM and a 1.8 GHz processor, running the Linux operating system Xubuntu 20.04. Testing was carried out following the test scenarios that had been designed, then observed, recorded, and analyzed. Photorec places recovery data in ...

2023, International Journal of Computer Applications

In today's world of fast-changing technology, where everything is governed by the Internet directly or indirectly, the trend of crime has undergone a dramatic change over the past few years. Today, one can commit a crime with just the click of a button on a laptop or computer and enjoy the garb of anonymity and impunity to a great extent. In such a scenario, it has become imperative to throw some light on the emerging issue of tackling cybercrime in the 21st century. This paper describes the extraction and analysis of the volatile data available in the RAM of a running computer on Windows operating systems, and it shows the utility of RAM in computer forensics, which is often neglected when a crime scene with a running system is encountered. Keeping this necessity in view, it is essential to consider the issues of digital evidence and its collection, preservation, and admissibility in the court of law.

2023, Sensors (Basel, Switzerland)

The actor nodes are the spine of wireless sensor and actor networks (WSANs), collaborating to perform a specific task in an unverified and uneven environment. Thus, there is a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as the power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, or inter-actor connectivity problems. Therefore, it is extremely important to discover the failure of a cut-vertex actor and a network-disjoint condition in order to improve the Quality of Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node. In addition, it replaces the critical node with a backup node prior to complete node failure, which helps balancin...

2023, Custom Integrated Circuits Conference

This paper presents the behavioral implementation of jitter tolerance test benches for digital clock and data recovery circuits using Verilog-A. First, we encode a variable-length pseudo-random bit sequence (PRBS) generator. Such circuits are widely used to generate test data for a variety of circuits and systems. Using this PRBS generator, we set up a test bench for the evaluation of
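
For readers unfamiliar with PRBS generation, the sketch below implements a PRBS7 linear-feedback shift register in Python. The paper's generator is variable-length and written in Verilog-A, so this shows only the underlying principle; PRBS7 uses the primitive polynomial x⁷ + x⁶ + 1 and repeats every 2⁷ − 1 = 127 bits.

```python
# Minimal PRBS7 generator (polynomial x^7 + x^6 + 1, a standard choice
# for serial-link test data; illustrative, not the paper's Verilog-A code).
def prbs7(n_bits, seed=0x7F):
    state = seed & 0x7F           # 7-bit linear-feedback shift register
    out = []
    for _ in range(n_bits):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        out.append(state & 1)
        state = ((state << 1) | new_bit) & 0x7F
    return out

seq = prbs7(127)
# A maximal-length PRBS7 sequence repeats every 2**7 - 1 = 127 bits.
assert seq == prbs7(254)[127:]
print(seq[:16])
```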

2023, Index of Texas Archaeology: Open Access Gray Literature from the Lone Star State

This is a work for hire produced for the Texas Department of Transportation (TxDOT), which owns all rights, title, and interest in and to all data and other information developed for this project under Contract 573XXSA006 and 57-3XXSA004. Brief passages from this publication may be reproduced without permission provided that credit is given to TxDOT and TRC Environmental Corporation. Permission to reprint an entire chapter, section, figures or tables must be obtained in advance from the Supervisor of the