Allan Borodin - Academia.edu

Papers by Allan Borodin

Toward a Model for Backtracking and Dynamic Programming

Electronic Colloquium on Computational Complexity, 2009

We propose a model called priority branching trees (pBT) for backtracking and dynamic programming algorithms. Our model generalizes both the priority model of Borodin, Nielsen, and Rackoff, as well as a simple dynamic programming model due to Woeginger, and hence spans a wide spectrum of algorithms. After witnessing the strength of the model, we then show its limitations by providing lower bounds for algorithms in this model for several classical problems such as Interval Scheduling, Knapsack, and Satisfiability.
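The priority framework invoked here has a compact concrete instance: fix one ordering of the input items, then make a single irrevocable accept/reject decision per item. The sketch below, a minimal illustration and not the paper's pBT model (which additionally lets the computation branch into a tree), uses interval scheduling with the classic earliest-finish-time ordering.

```python
# A minimal sketch of a fixed-priority ("greedy-like") algorithm in the
# Borodin-Nielsen-Rackoff sense: one fixed ordering of the items, one
# irrevocable decision per item. Shown for interval scheduling with the
# earliest-finish-time priority.

def priority_interval_scheduling(intervals):
    """intervals: list of (start, finish) pairs; returns accepted subset."""
    accepted = []
    # Priority function: consider intervals by earliest finish time.
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        # Irrevocable decision: accept iff no conflict with prior accepts.
        if all(f2 <= s or f <= s2 for s2, f2 in accepted):
            accepted.append((s, f))
    return accepted

print(priority_interval_scheduling([(0, 3), (2, 5), (4, 7), (1, 8)]))
# -> [(0, 3), (4, 7)]
```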

An Experimental Study of Algorithms for Online Bipartite Matching

arXiv (Cornell University), Aug 14, 2018

We perform an experimental study of algorithms for online bipartite matching under the known i.i.d. input model with integral types. In the last decade, there has been substantial effort in designing complex algorithms with the goal of improving worst-case approximation ratios. Our goal is to determine how these algorithms perform on more practical instances rather than worst-case instances. In particular, we are interested in whether the ranking of the algorithms by their worst-case performance is consistent with the ranking of the algorithms by their average-case/practical performance. We are also interested in whether the preprocessing times and implementation difficulties that are introduced by these algorithms are justified in practice. To that end, we evaluate these algorithms on different random inputs as well as real-life instances obtained from publicly available repositories. We compare these algorithms against several simple greedy-style algorithms. Most of the complex algorithms in the literature are presented as being non-greedy (i.e., an algorithm can intentionally skip matching a node that has available neighbors) to simplify the analysis. Every such algorithm can be turned into a greedy one without hurting its worst-case performance. On our benchmarks, non-greedy versions of these algorithms perform much worse than their greedy versions. Greedy versions perform about as well as the simplest greedy algorithm by itself. This, together with our other findings, suggests that the simplest greedy algorithms are competitive with the state-of-the-art worst-case algorithms for online bipartite matching on many average-case and practical input families. Greediness is by far the most important property of online algorithms for bipartite matching.
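To make the greedy/non-greedy distinction concrete, here is a minimal sketch of the simplest greedy baseline discussed above: an arriving online node is matched to some still-unmatched neighbor (here simply the first) and is never intentionally skipped. The input encoding, one neighbor list per arrival, is an assumption for illustration.

```python
# A minimal sketch of the simplest greedy algorithm for online bipartite
# matching: match each arriving online node to any free offline neighbor;
# a "non-greedy" algorithm would be allowed to skip a matchable node.

def online_greedy(arrivals):
    """arrivals: iterable of neighbor lists, one per online node.
    Returns a matching as {online_index: offline_node}."""
    matched_offline = set()
    matching = {}
    for i, neighbors in enumerate(arrivals):
        for v in neighbors:
            if v not in matched_offline:  # take the first free neighbor
                matched_offline.add(v)
                matching[i] = v
                break
    return matching

print(online_greedy([["a", "b"], ["a"], ["b", "c"]]))
# -> {0: 'a', 2: 'b'}  (node 1 finds 'a' already taken)
```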

Lower Bounds on the Length of Universal Traversal Sequences (Detailed Abstract)

Symposium on the Theory of Computing, 1989

Universal traversal sequences for d-regular n-vertex graphs require length Ω(d²n² + dn² log(n/d)), for 3 ≤ d ≤ n/3 − 2. This is nearly tight for d = Θ(n). We also introduce and study several variations on the problem, e.g. edge-universal traversal sequences, showing how improved lower bounds on these would improve the bounds given above. 1. Universal Traversal Sequences. Universal traversal sequences were introduced by Cook (see Aleliunas [1] and Aleliunas et al. [2]), motivated by the complexity of graph traversal. Let G(d,n) be the set of all connected, d-regular, n-vertex, edge-labeled, undirected graphs G =
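For readers new to the model, a short sketch of the mechanics may help: a traversal sequence over the labels {0, ..., d-1} drives a walk by always following the edge whose local label matches the next symbol, and the sequence is universal for a class if the walk visits every vertex from every start vertex in every graph of the class. The adjacency encoding below is an illustrative assumption.

```python
# A minimal sketch of how a traversal sequence drives a walk on an
# edge-labeled d-regular graph, and a check that it visits all vertices.

def walk_visits_all(labels, seq, start):
    """labels: dict mapping (vertex, edge_label) -> neighbor vertex;
    seq: iterable of edge labels in 0..d-1."""
    vertices = {v for v, _ in labels}
    visited = {start}
    v = start
    for a in seq:
        v = labels[(v, a)]   # follow the edge with local label a
        visited.add(v)
    return visited == vertices

# A 3-vertex cycle, 2-regular: label 0 = clockwise, label 1 = counterclockwise.
cycle = {(0, 0): 1, (1, 0): 2, (2, 0): 0,
         (0, 1): 2, (1, 1): 0, (2, 1): 1}
print(all(walk_visits_all(cycle, [0, 0], s) for s in range(3)))  # True
```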

Routing, Merging and Sorting on Parallel Models of Computation (Extended Abstract)

Symposium on the Theory of Computing, 1982

Greedy Bipartite Matching in Random Type Poisson Arrival Model

arXiv (Cornell University), May 2, 2018

We introduce a new random input model for bipartite matching which we call the Random Type Poisson Arrival Model. Just as in the known i.i.d. model (introduced by Feldman et al. [7]), online nodes have types in our model. In contrast to the adversarial types studied in the known i.i.d. model, and following the random graphs studied in Mastin and Jaillet [2], in our model each type graph is generated randomly by including each offline node in the neighborhood of an online node with probability c/n independently. In our model, nodes of the same type appear consecutively in the input, and the number of times each type appears is distributed according to the Poisson distribution with parameter 1. We analyze the performance of the simple greedy algorithm under this input model. The performance is controlled by the parameter c, and we are able to exactly characterize the competitive ratio for the regimes c = o(1) and c = ω(1). We also provide a precise bound on the expected size of the matching in the remaining regime of constant c. We compare our results to the previous work of Mastin and Jaillet, who analyzed the simple greedy algorithm in the G_{n,n,p} model where each online node type occurs exactly once. We essentially show that the approach of Mastin and Jaillet can be extended to work for the Random Type Poisson Arrival Model, although several nontrivial technical challenges need to be overcome. Intuitively, one can view the Random Type Poisson Arrival Model as the G_{n,n,p} model with less randomness; that is, instead of each online node having a new type, each online node has a chance of repeating the previous type.
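A minimal sketch of this input model, with the simple greedy algorithm on top, may make the definitions concrete. Each type's neighborhood includes each of the n offline nodes independently with probability c/n, each type arrives a Poisson(1) number of consecutive times, and greedy matches every arrival to an arbitrary free neighbor. All parameter choices are illustrative assumptions, not the paper's experiments.

```python
# A minimal sketch of the Random Type Poisson Arrival Model plus greedy.

import math
import random

def poisson(lam, rng):
    # Knuth's inversion method; adequate for small lambda such as 1.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def random_type_poisson_instance(n, c, num_types, rng):
    arrivals = []
    for _ in range(num_types):
        # Each offline node joins this type's neighborhood w.p. c/n.
        nbrs = [u for u in range(n) if rng.random() < c / n]
        # Poisson(1) copies of the type, arriving consecutively.
        arrivals.extend([nbrs] * poisson(1.0, rng))
    return arrivals

def greedy_matching_size(arrivals):
    taken = set()
    for nbrs in arrivals:
        for u in nbrs:
            if u not in taken:  # greedy: grab the first free neighbor
                taken.add(u)
                break
    return len(taken)

rng = random.Random(42)
instance = random_type_poisson_instance(n=1000, c=2.0, num_types=1000, rng=rng)
print(greedy_matching_size(instance))
```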

Competitive Paging with Locality of Reference (Preliminary Version)

Symposium on the Theory of Computing, 1991

Budgetary Effects on Pricing Equilibrium in Online Markets

Adaptive Agents and Multi-Agent Systems, May 9, 2016

Following the work of Babaioff et al. [4], we consider the pricing game with strategic vendors and a single buyer, modeling a scenario in which multiple competing vendors have very good knowledge of a buyer, as is common in online markets. We add to this model the realistic assumption that the buyer has a fixed budget and does not have unlimited funds. When the buyer's valuation function is additive, we are able to completely characterize the different possible pure Nash equilibria (PNE) and in particular obtain a necessary and sufficient condition for uniqueness. Furthermore, we characterize the market clearing (or Walrasian) equilibria for all submodular valuations. Surprisingly, for certain monotone submodular function valuations, we show that the pure NE can exhibit some counterintuitive phenomena; namely, there is a valuation such that the pricing will be market clearing and within budget if the buyer does not reveal the budget, but will result in a smaller set of allocated items (and higher prices for items) if the buyer does reveal the budget. It is also the case that the conditions that guarantee market clearing in Babaioff et al. [4] for submodular functions are not necessarily market clearing when there is a budget. Furthermore, with respect to social welfare, while without budgets all equilibria are optimal (i.e., POA = POS = 1), we show that with budgets the worst equilibrium may only achieve 1/(n−2) of the best equilibrium.
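The buyer's side of the budgeted model is easy to make concrete. With an additive valuation v, posted prices p, and a hard budget B, a utility-maximizing buyer solves a 0/1 knapsack: maximize the total utility, the sum of (v_i - p_i), over bundles whose total price is at most B. The sketch below assumes integer prices and illustrates only the buyer's demand, not the paper's equilibrium analysis.

```python
# A minimal sketch of a budgeted buyer's demand under an additive valuation:
# a 0/1 knapsack with weight = price and profit = value - price.

def budgeted_demand(values, prices, budget):
    n = len(values)
    best = [0.0] * (budget + 1)              # best[b] = max utility within budget b
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        util = values[i] - prices[i]
        if util <= 0:
            continue                         # never buy at a loss
        for b in range(budget, prices[i] - 1, -1):
            if best[b - prices[i]] + util > best[b]:
                best[b] = best[b - prices[i]] + util
                choice[i][b] = True
    bundle, b = [], budget                   # recover the chosen bundle
    for i in reversed(range(n)):
        if choice[i][b]:
            bundle.append(i)
            b -= prices[i]
    return sorted(bundle), best[budget]

# The budget of 7 forces the buyer to drop item 1 despite its positive utility.
print(budgeted_demand(values=[10, 9, 4], prices=[6, 5, 1], budget=7))
# -> ([0, 2], 7.0)
```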

Lower bounds for high dimensional nearest neighbor search and related problems

Any-Order Online Interval Selection

arXiv (Cornell University), Mar 10, 2023

We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary, and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow intervals to arrive in any possible order. We call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must always be maintained. For unweighted intervals and deterministic algorithms, this problem is unbounded. Under the assumption that there are at most k different interval lengths, we give a simple algorithm that achieves a competitive ratio of 2k and show that it is optimal among deterministic algorithms and a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar [1], namely whether a randomized algorithm without memory, or with only "bounded" access to history, can achieve a constant competitive ratio. We connect our model to the problem of call control on the line, and show how the algorithms of Garay et al. [23] can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights, and show how to convert the single-length algorithm of Fung et al. [21] into a "classify and randomly select" algorithm that achieves a competitive ratio of 2k. Finally, we consider the case of intervals arriving in a random order, and show that for single-length instances, a one-directional algorithm (i.e., one that replaces intervals in one direction only) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals.
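The revocable-acceptance mechanics can be sketched briefly. The specific rule below, admit an arriving interval only if every accepted interval it conflicts with is strictly longer and revoke those, is a hypothetical illustration of maintaining a feasible solution under revoking; it is not the paper's 2k-competitive algorithm.

```python
# A minimal sketch of the AOIS mechanics: hold a feasible (non-conflicting)
# set at all times, revoking earlier acceptances to admit a newcomer.
# The admit rule here is an illustrative assumption, not the paper's.

def conflicts(a, b):
    (s1, f1), (s2, f2) = a, b
    return s1 < f2 and s2 < f1          # half-open intervals overlap

def aois_process(arrivals):
    held = []                           # current feasible solution
    for iv in arrivals:                 # adversary chooses this order
        clashing = [j for j in held if conflicts(iv, j)]
        if all(j[1] - j[0] > iv[1] - iv[0] for j in clashing):
            for j in clashing:          # revoke to stay feasible
                held.remove(j)
            held.append(iv)
    return held

print(aois_process([(0, 10), (0, 4), (3, 7), (8, 12)]))
# -> [(0, 4), (8, 12)]  ((0, 10) was accepted, then revoked for (0, 4))
```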

Computers and Employment (parts of this chapter are reprinted, by permission, from C. C. Gotlieb and A. Borodin, Computers and employment, Comm. ACM 15, 695–702 (1972); copyright 1972 by the Association for Computing Machinery)

This chapter highlights the rapid growth of computer technologies and their impact on employment. Technological change primarily means automation, and automation in the office means computers. The relationship of computers and automation to employment is part of the more general relation of technological change to employment. The most obvious effect is that increases in productivity as a result of technology can eliminate jobs. Technology affects the individual worker both in his attitude toward work and in the nature and amount of the work he does. The structure of the entire economy and the organizational structure of individual firms are affected by such changes. The commonly accepted index of technological change is productivity, or output per man-hour, measured in terms of some unit product or service. Despite the high rate of technological change in the twentieth century and the resulting gains in productivity, the total labor force of the industrialized nations has increased throughout the period. Increased consumer demand has kept pace with productivity increases, and employment in the aggregate sense has stayed relatively stable. When indirect labor is considered, the skill levels required within various industries have both increased and decreased. While alienation is still present, the introduction of automation and computing may have improved work attitudes when compared with the effects of earlier mechanization. But it is not possible to be complacent about the effects of computers and automation on employment.

Computer Capabilities and Limitations

This chapter focuses on the capabilities and limitations of computers. The identification of intelligence with mathematics may run counter to a view that associates intelligence with the emotions and instincts found in the animal world and in humans, but intelligence is logical and, hence, mathematical. One of the themes of artificial intelligence is to try to imbue programs with learning capabilities, that is, to have programs use and generalize upon past experience. In computer programs, this learning capability is usually limited to simple success-or-failure reinforcement schemes. Another substantial effort of artificial intelligence in a well-defined environment is the application of computerized mechanical theorem proving. The ability to learn natural language is the distinguishing characteristic of human intelligence, from which other abilities follow. The fact that present-day computers have limited capabilities for playing games, solving problems, and dealing with languages is not in itself a guide to predicting their eventual capabilities. Despite the converging efforts of scientists from many disciplines, it is turning out to be enormously difficult to understand the way in which memory, learning, recognition, and conceptualization take place in the brain.

The Distribution of Computer Products and Services

This chapter highlights the issue of the distribution of computer products and services. The computing industry is increasingly complex, and there are many aspects to the production and distribution of computer hardware and software. However, the effects on social issues involving computers are less clear. Certain industries, such as communication, where the advantages of scale are particularly important, are regarded as natural monopolies and are not subjected to the restrictions placed on monopolies. Instead they are usually subject to regulations that require them to offer service to everyone willing to pay for it, that limit the rates for services and the profits, and that prevent the company from using its monopoly as a competitive advantage in other situations. In many countries, communications are operated by the state, while in others a few companies, the common carriers, are given monopolies or near-monopolies, often so as to allow each to provide specialized services such as telephone, telegraph, radio broadcasting, and television transmission.

Shifts in Power

This chapter focuses on the effects of computers on the relationships between different components of an organization, between components at different levels, and between a component and the individuals within it. One factor that enters into the influence of the computer is whether the computing facilities are centralized or decentralized. The increasing value of information as a commodity brings with it the potential to change the bases of power and create new ones. The systematic and purposeful acquisition of information and its systematic application, rather than science or technology, is emerging as the new foundation for work, productivity, and effort throughout the world. In computing, the general economic rule that favors large production systems over small ones, based on the economy of scale, has come to be known as Grosch's law. This states that the effectiveness of a computing system, as measured by throughput or speed, is proportional to the square of the cost.
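Grosch's law, as stated here, can be written in one line. With effectiveness E, cost C, and a constant k:

```latex
E = k\,C^{2}
\qquad\Longrightarrow\qquad
\frac{C}{E} = \frac{1}{k\,C}
```

So doubling the cost of a system quadruples its effectiveness and halves the cost per unit of work, which is the economy-of-scale reading of the law.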

Competitive paging with locality of reference (brief summary)

American Mathematical Society eBooks, Mar 10, 1992

Files, Data Banks, and Information Systems

Elsevier eBooks, 1973

This chapter reviews computer files, data banks, and information systems. It focuses on the characteristics of the files used in routine billing, record-keeping, integrated files, and management information systems. The components of a file are its records, the fields, and the keys. The number of components and the ways in which they are interrelated are determined by the requirements of the system for which the file is maintained. In most files, the record size lies between 100 and 1000 characters. Files can also be regarded as static or volatile according to the permanence of the records. The reliability imposed on the file processing by the system is another characteristic that greatly affects the type of processing, particularly with respect to duplication of processors, cost, and so on. The Data Bank and Information System of the Bureau of Labor Statistics, Washington, D. C., consists of a large number of time series on statistics related to employment originating from the monthly current population surveys conducted by the Census Bureau. It contains data on individuals such as employment status and duration, occupation, industry, sex, age, color, marital status, education, and household relationship.

Systems, Models, and Simulations

Elsevier eBooks, 1973

This chapter discusses computer systems, models, and simulations in the application of a variety of planning techniques. Many systems under study today are so large and have so many components that their behavior cannot be calculated directly from physical or statistical principles. If improvements are proposed in a system that is difficult or highly undesirable to experiment with, then to evaluate the proposals a computer model may be constructed and simulations carried out on it. Different kinds of models can be used to aid in systems analysis and design. In systems analysis, Churchman (1968) has described five basic aspects of a system, namely, the total system objectives, the environment, the components, the resources, and the management. However, there will be external variables and parameters that describe the environment and internal parameters that are associated with the components.

Problems and Issues

This chapter discusses the problems and issues associated with computers. The questions raised by the presence of computers range from technical problems, which can be posed rigorously and solved mathematically or experimentally, through social issues, which can only be resolved, if at all, after there is agreement about goals, to philosophical probings, whose answers are essentially discussions of the meanings of the terms being used. An indication of the number and range of problems associated with computers can be given by attempting to classify the kinds of problems that arise. The areas that designate the types are shown as overlapping to indicate that the subject matter overlaps and to convey the fact that the answers to a problem will depend on the context in which it is raised. The issues involving computer subjects are so broad, and in some cases changing so rapidly, that it is not possible to consider them all in the same detail.

Information Systems and Privacy

This chapter reviews computer information systems and their privacy. The concept of privacy is strongly dependent on the cultural and political context and changes with time. Privacy, in one form or another, is basic to human relations. The concern about the ways in which modern technology affects privacy centers on the way information is gathered and also on the unsatisfactory practices for verifying, maintaining security on, and restricting the flow of information. Computers are primarily involved in these latter functions, but in examining the relation between computers and privacy, it is desirable to look at information gathering first, partly as background, and also because computers are often involved in the subsequent analysis. Devices designed for surveillance, wiretapping, and electronic eavesdropping have been by far the most difficult to cope with in the matter of privacy. Computerized banking systems are just coming into use, and it is likely that they will grow and become general. It is significant that the Data Surveillance Bill is designed to deal with a specific type of damage, namely, that arising out of computerized credit reporting.

Advice Complexity of Priority Algorithms

Theory of Computing Systems, Nov 13, 2019

The priority model of "greedy-like" algorithms was introduced by Borodin, Nielsen, and Rackoff in 2002. We augment this model by allowing priority algorithms to have access to advice, i.e., side information precomputed by an all-powerful oracle. Obtaining lower bounds in the priority model without advice can be challenging and may involve intricate adversary arguments. Since the priority model with advice is even more powerful, obtaining lower bounds presents additional difficulties. We sidestep these difficulties by developing a general framework of reductions which makes lower bound proofs relatively straightforward and routine. We start by introducing the Pair Matching problem, for which we are able to prove strong lower bounds in the priority model with advice. We develop a template for constructing a reduction from Pair Matching to other problems in the priority model with advice-this part
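The advice-augmented model can be illustrated schematically: an all-powerful oracle sees the whole instance and precomputes bits that the priority algorithm may consult. In the deliberately trivial sketch below, the (hypothetical) advice encodes one accept/reject bit per item in priority order, which recovers any target solution outright and shows why meaningful lower bounds must restrict the number of advice bits.

```python
# A minimal sketch of a priority algorithm with advice: the decisions may
# consult an oracle-provided bit string. The encoding here (one bit per
# item in priority order) is a hypothetical illustration.

def priority_with_advice(items, priority_key, advice_bits):
    ordered = sorted(items, key=priority_key)
    # Irrevocable decisions, here dictated entirely by the advice.
    return [x for x, bit in zip(ordered, advice_bits) if bit]

# Interval scheduling, earliest finish first; the oracle flags the two
# compatible intervals.
items = [(0, 3), (2, 5), (4, 7)]
print(priority_with_advice(items, lambda iv: iv[1], [1, 0, 1]))
# -> [(0, 3), (4, 7)]
```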
