Scaling UP vs Scaling Out: In the design of intelligent systems
(DRAFT: Liable to change)
Aaron Sloman
School of Computer Science, University of Birmingham. (Philosopher in a Computer Science department)
Installed: 19 Aug 2012
Last updated: 19 Aug 2012
This paper is
http://tinyurl.com/BhamCog/misc/scaling-up-scaling-out.html
A PDF version may be added later.
As explained below, this is part of the Meta-Morphogenesis project/conjecture:
http://tinyurl.com/BhamCog/misc/meta-morphogenesis.html
A partial index of discussion notes is in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
CONTENTS
- Introduction: A potential confusion
- Relevance to meta-morphogenesis
- Scaling up
- Scaling out
- Possible examples of scaling out include
- Previous discussions and papers referring to scaling-up vs scaling-out
Introduction: A potential confusion
I have just discovered that a very different distinction between scaling-up and scaling-out is used in connection with infrastructure options for computing services. A few randomly selected web sites explaining and discussing the distinction, and how to choose between the options, are:
http://en.wikipedia.org/wiki/Scalability
"To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might be scaling out from one Web server system to three."
"To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer."
http://itknowledgeexchange.techtarget.com/storage-soup/scale-out-vs-scale-up-the-basics/
Feb 23 2011: Scale-out vs. scale-up: the basics
Posted by: Randy Kerns
"Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands."
.......
"Scale-out storage usually requires additional storage (called nodes) to add capacity and performance. Or in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system."
http://www.yellow-bricks.com/2011/07/21/scale-upout-and-impact-of-vram-part-2/
Scale Up/Out and impact of vRAM?!? (part 2)
21 July, 2011 by Duncan Epping - with 86 Comments
The distinction I am concerned with is totally different, and refers to different kinds of functionality, not two ways of providing the same functionality.
- Scaling up is concerned with efficiently coping with increased complexity, making good use of resources such as space, time, and CPU power.
- Scaling out is concerned with being able to interact with other subsystems within the same overall architecture, in a fruitful way. This is related to, but different from John McCarthy's concept of "Elaboration Tolerance" explained in:
http://www-formal.stanford.edu/jmc/elaboration/elaboration.html
Relevance to meta-morphogenesis
The meta-morphogenesis project is an attempt to survey changes in information processing in evolution, in development, in learning, in social systems and cultures, including changes that speed up or extend the mechanisms for producing future changes in information processing mechanisms - as explained in:
http://tinyurl.com/BhamCog/misc/meta-morphogenesis.html
It seems likely that many of the examples of transitions producing meta-morphogenesis involve evolution, development or learning producing a new form of interaction between previously evolved, developed or learnt mechanisms.
Possible forms such transitions can take include the following (a tiny subset of the space of possibilities waiting to be investigated):
- Two previously existing subsystems, A and B, are in some way controlled and monitored for different purposes by a third subsystem, C. A later development in C could allow it to monitor and control both A and B simultaneously, for example combining information available from both to answer questions, make predictions, construct plans, or inform control decisions in new ways.
- A new communication channel could develop linking two previously existing subsystems A and B so that information from A can be used by B in addition to its previously available information. That could be extended to allow information to go in both directions, including control information and questions as well as factual information.
- The form of representation used by a subsystem A may be modified so as to become more compatible with, or more useful to, a subsystem B. This could include such things as providing new syntax that can be manipulated by B, or extending the semantics so as to express information required by B.

The above transitions can occur in individual learning, in genetically and environmentally facilitated developmental processes, in modifications to the genome, or in some cases in social collaboration and interaction, so that tasks originally performed by individuals can be performed better by pairs or groups.
It is sometimes suggested that this is what led to the development of human language, but an alternative conjecture, according to which language initially evolved to support internal processes and only later came to be used for communication, is offered in:
http://tinyurl.com/BhamCog/talks/#glang
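These possibilities are abstract, so a toy sketch may help fix ideas. The following is purely illustrative (all class names, method names, and data are invented for this page, not part of the project): it caricatures the first transition listed above, in which a subsystem C that already monitors two subsystems A and B separately develops the further ability to combine information from both.

```python
class Subsystem:
    """A stand-in for a previously evolved/learnt subsystem (A or B)."""
    def __init__(self, name):
        self.name = name
        self.readings = []

    def report(self):
        """Make this subsystem's current information available."""
        return list(self.readings)


class Controller:
    """C: originally monitors A and B separately for different purposes."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def monitor_separately(self):
        # The original capability: each subsystem is consulted on its own.
        return {self.a.name: self.a.report(), self.b.name: self.b.report()}

    def combined_view(self):
        # The new capability: information from both sources is fused,
        # e.g. to answer questions or inform control decisions in new ways.
        return sorted(self.a.report() + self.b.report())


a, b = Subsystem("A"), Subsystem("B")
a.readings = [3, 1]
b.readings = [2]
c = Controller(a, b)
print(c.combined_view())  # [1, 2, 3]
```

The point of the sketch is only that the new competence (combined_view) reuses the existing interfaces of A and B unchanged; the transition is in C, not in the older subsystems.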
Scaling up
For many years researchers in AI have emphasised the need for system designs to "scale up", i.e. to not only perform well on relatively simple problems but also continue to perform well as problems get more complex.
This can be interpreted in various ways, but it often refers to a need to avoid designs that have exponential complexity, where increasing the size of the problem by N (e.g. 20) multiplies the time required, or the storage space required, or both, by 2**N (e.g. 2**20, which is 1,048,576).
The size measure may be the number of data items on which a system needs to be trained, the size of an image to be processed, the size of a sentence to be parsed, the size of a plan to be constructed, the size of a "genome" to be evolved, and many more.
Much research in AI has been concerned with attempting to defeat the "combinatorial explosions" that usually arise from exponential relations between problem size and time or space requirements. There have been huge improvements based on many different techniques, including the use of powerful heuristics (e.g. detecting and using symmetry), structure sharing between partial solutions, and the use of statistical/stochastic methods for sampling solution spaces instead of ensuring exhaustive coverage. Some of these methods require the goal of optimality to be abandoned, but they often find very good, though non-optimal, solutions.
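A minimal concrete illustration of the numbers involved (the code and figures here are my own illustration, not from any particular AI system): exhaustively enumerating every subset of just 20 items already produces 2**20 = 1,048,576 candidates, whereas a stochastic method can examine a fixed-size sample of the same solution space, giving up guaranteed optimality.

```python
import random
from itertools import combinations

def exhaustive_subsets(items):
    """Exhaustive coverage: every subset of N items, 2**N candidates in all."""
    subsets = []
    for r in range(len(items) + 1):
        subsets.extend(combinations(items, r))
    return subsets

n = 20
print(len(exhaustive_subsets(range(n))))  # 1048576, i.e. 2**20

# A stochastic alternative: sample 1000 subsets (encoded as bitmasks over
# the n items) instead of ensuring exhaustive coverage. Optimality is no
# longer guaranteed, but the cost no longer grows with 2**n.
sample = random.sample(range(2 ** n), 1000)
print(len(sample))  # 1000
```

Doubling the problem size from 20 to 40 items would square the exhaustive count (to over a trillion) while leaving the sample size untouched, which is the practical force of the exponential/sampling contrast described above.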
Scaling out
In parallel with all this, it has long been known that solutions that work well for a particular type of task may be hard to integrate with mechanisms that perform well on other tasks, in systems that need to be able to combine competences. I have referred to this as the need for solutions to "scale out", in contrast with the need to scale up.
Possible examples of scaling out include
- A natural language processing system should be able to be combined with a visual system in a machine that can converse about visible structures and processes, for instance using what's visible in the scene to disambiguate a verbal reference to an object or location, or allowing a verbal cue to disambiguate a visual percept where an object is partly occluded or seen in shadow, or at a distance.
- A visual system should be able to interact with manipulation mechanisms in a robot, so that vision can play a role in controlling action, using visual servoing during the action, as opposed to merely providing information in advance to be used by a planner, or being used after action completion to judge whether goals have been achieved, and also so that information from haptic sensors gained during actions can contribute to the task of the visual system, e.g. by removing shape ambiguities.
- When listening to someone speaking there are many ways in which a visual system could aid in the interpretation of what is being said, e.g. using gaze direction of the speaker to resolve an ambiguity in what is being referred to (e.g. ruling out an object that cannot be seen by the speaker), or using visually perceived facial expressions or body language to guide the interpretation of an utterance as playful, threatening, or merely providing a friendly warning, and so on.
- It is frequently claimed that imitation is one of the main forms of learning, but very often merely perceiving what someone is doing does not provide an adequate basis for replicating the actions: you can't learn to play a violin just by trying to imitate a violinist. So a teacher will often help a learner trying to master a complex action by commenting on what is being done while performing the action, e.g. explaining that making the violin bow move from one string to another is done by changing the orientation of the upper arm.
- Much teaching of mathematics extends the verbal, logical, or algebraic formulation of a problem, or a piece of reasoning, by providing a diagram. This, like some of the earlier examples, requires subsystem collaboration (scaling out) both in the teacher and in the learner.
- More examples can be found in the discussion of "toddler theorems" in:
http://tinyurl.com/BhamCog/misc/toddler-theorems.html
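The first example above, vision disambiguating a verbal reference, can be caricatured in a few lines. This is a hypothetical sketch: the object descriptions, attribute names, and function names are all invented for illustration, and real vision-language integration is of course vastly more complex.

```python
def visible_objects():
    """Stand-in for a visual subsystem's current percepts."""
    return [{"kind": "cup", "colour": "red"},
            {"kind": "cup", "colour": "blue"},
            {"kind": "book", "colour": "red"}]

def resolve_reference(phrase, percepts):
    """Use what is visible to narrow down the referent of a phrase:
    keep percepts whose kind occurs in the phrase and whose colour
    matches any colour word the phrase contains."""
    words = set(phrase.lower().split())
    colour_words = {"red", "blue", "green"}
    wanted_colour = words & colour_words
    return [p for p in percepts
            if p["kind"] in words
            and (not wanted_colour or p["colour"] in wanted_colour)]

# "the cup" alone is ambiguous between two cups; "the red cup" is not.
print(resolve_reference("the red cup", visible_objects()))
# [{'kind': 'cup', 'colour': 'red'}]
```

The same skeleton could run in the opposite direction, using a verbal cue to select among rival interpretations of a partly occluded percept, which is exactly the two-way traffic between subsystems that scaling out demands.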
Previous discussions and papers referring to scaling-up vs scaling-out
(In the sense considered here.)
- http://tinyurl.com/BhamCog/misc/fully-deliberative.html
  Requirements for a Fully Deliberative Architecture (Or component of an architecture)
- http://tinyurl.com/BhamCog/misc/grasping-grasping.html
  Can a Robot Grasp Grasping?
- http://www.cs.bham.ac.uk/~axs/my-doings.html
  "Many AI systems are designed with the requirement to scale up (i.e. continue to perform well as problem complexity increases). In contrast biological systems, and subsystems in human-like robots need to be able to "scale out", namely they need to be able to be integrated with many other components in a fully functional architecture where subsystems often have to cooperate. Most AI designs for mechanisms do not meet that requirement because they are designed for and tested in limited test harnesses, often with fairly simple agreed benchmarks that are nowhere near an adequate sample of requirements in a fully functional robot."
- http://tinyurl.com/BhamCog/misc/hawkins-numenta.html
  Response to questions about Jeff Hawkins

The need for systems to be able to scale out led me to start thinking about requirements for complete architectures in the early 1970s, e.g. in Chapter 6 of "The Computer Revolution in Philosophy" (1978), PART TWO: Mechanisms -- Chapter 6, Sketch of an intelligent mechanism:
http://tinyurl.com/BhamCog/crp/chap6.html

Get help from Google:
http://www.google.com/search?q=aaron%2Bsloman&q=scale+out+scale+up&btnG=Search+the+world
(This is a first draft web page and may be modified and extended later, especially if I get comments, criticisms or suggestions for improvement.)
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham