Scalability


Scalability is the property of a system to handle a growing amount of work by adding resources to the system.
In an economic context, a scalable business model implies that a company can increase sales given increased resources. For example, a package delivery system is scalable because more packages can be delivered by adding more delivery vehicles. However, if all packages had to first pass through a single warehouse for sorting, the system would not be scalable, because one warehouse can handle only a limited number of packages.
In computing, scalability is a characteristic of computers, networks, algorithms, networking protocols, programs and applications. An example is a search engine, which must support increasing numbers of users and an increasing number of topics it indexes. Webscale is a computer architectural approach that brings the capabilities of large-scale cloud computing companies into enterprise data centers.
In mathematics, scalability mostly refers to closure under scalar multiplication.

Examples

The Incident Command System is used by emergency response agencies in the United States. ICS can scale resource coordination from a single-engine roadside brushfire to an interstate wildfire. The first resource on scene establishes command, with authority to order resources and delegate responsibility. As an incident expands, more senior officers assume command.

Dimensions

Scalability can be measured along multiple dimensions. The resources used to scale a system fall into two broad categories: horizontal and vertical.

Horizontal or Scale Out

Scaling horizontally means adding more nodes to a system, such as adding a new computer to a distributed software application. An example might involve scaling out from one web server to three. High-performance computing applications, such as seismic analysis and biotechnology workloads, scale horizontally to support tasks that once would have required expensive supercomputers. Other workloads, such as large social networks, exceed the capacity of the largest supercomputer and can only be handled by scalable systems. Exploiting this scalability requires software for efficient resource management and maintenance.
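As a rough illustration of scaling out, the following minimal Python sketch distributes incoming requests round-robin across a pool of web servers that grows from one node to three; the host names and the balancer class are purely hypothetical, not any particular load balancer's API.

import itertools

class RoundRobinBalancer:
    """Distributes requests across a growing pool of identical nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def add_node(self, node):
        # Scaling out: add another node and rebuild the rotation.
        self.nodes.append(node)
        self._cycle = itertools.cycle(self.nodes)

    def route(self, request):
        # Each request is handed to the next node in the rotation.
        return f"{request} -> {next(self._cycle)}"

# Start with a single web server, then scale out to three.
balancer = RoundRobinBalancer(["web1"])
balancer.add_node("web2")
balancer.add_node("web3")
for i in range(6):
    print(balancer.route(f"request-{i}"))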

Vertical or Scale Up

Scaling vertically means adding resources to a single node, typically involving the addition of CPUs, memory or storage to a single computer.
Larger numbers of elements increase management complexity and require more sophisticated programming to allocate tasks among resources and to handle issues such as throughput and latency across nodes; moreover, some applications do not scale horizontally.
Note that network function virtualization defines these terms differently: scaling out/in is the ability to scale by adding or removing resource instances, whereas scaling up/down is the ability to scale by changing the resources allocated to existing instances.
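The difference between the two directions can be sketched in a few lines of Python; the Node class and its resource fields are illustrative assumptions, not part of any orchestration or NFV interface.

from dataclasses import dataclass

@dataclass
class Node:
    cpus: int
    memory_gb: int

cluster = [Node(cpus=4, memory_gb=16)]

# Scale up (vertical): give the existing node more resources.
cluster[0].cpus += 4
cluster[0].memory_gb += 16

# Scale out (horizontal): add another node to the cluster.
cluster.append(Node(cpus=4, memory_gb=16))

print(len(cluster), cluster[0])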

Database scalability

Scalability for databases requires that the database system be able to perform additional work given greater hardware resources, such as additional servers, processors, memory and storage. Workloads have continued to grow and demands on databases have followed suit.
Algorithmic innovations include row-level locking and table and index partitioning. Architectural innovations include shared-nothing and shared-everything architectures for managing multi-server configurations.
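A minimal Python sketch of hash-based partitioning, one form of the partitioning mentioned above; the partition count and the customer_id key are arbitrary illustrative choices.

# Hash-based partitioning: rows are assigned to one of N partitions
# by hashing the partition key, so each partition (and the server
# holding it) sees only a fraction of the workload.
NUM_PARTITIONS = 4
partitions = {i: [] for i in range(NUM_PARTITIONS)}

def partition_for(key):
    return hash(key) % NUM_PARTITIONS

def insert(row):
    partitions[partition_for(row["customer_id"])].append(row)

for customer_id in range(20):
    insert({"customer_id": customer_id, "balance": 100 * customer_id})

for pid, rows in partitions.items():
    print(f"partition {pid}: {len(rows)} rows")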

Strong versus eventual consistency (storage)

In the context of scale-out data storage, scalability is defined as the maximum storage cluster size which guarantees full data consistency, meaning there is only ever one valid version of stored data in the whole cluster, independently of the number of redundant physical data copies. Clusters which provide "lazy" redundancy by updating copies asynchronously are called 'eventually consistent'. This type of scale-out design is suitable when availability and responsiveness are rated higher than consistency, which is true for many web file-hosting services and web caches. For all classical transaction-oriented applications, this design should be avoided.
Many open-source and even commercial scale-out storage clusters, especially those built on top of standard PC hardware and networks, provide eventual consistency only, as do some NoSQL databases such as CouchDB. Write operations invalidate other copies, but often don't wait for their acknowledgements. Read operations typically don't check every redundant copy prior to answering, and can therefore miss the preceding write operation. Handling the large amount of metadata signalling with acceptable performance would require specialized hardware and short distances.
Whenever strong data consistency is expected, look for indicators such as synchronous updates to every redundant copy, dedicated low-latency networks and short physical distances that keep the resulting metadata traffic manageable. Indicators for eventually consistent designs are asynchronous copy updates, writes that are acknowledged before all copies are updated, and reads that do not check every redundant copy before answering.
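The effect of "lazy", asynchronous replication on reads can be illustrated with a short Python sketch; the store and replica classes are hypothetical and only show how a read against a stale copy can miss a preceding write.

import copy

class Replica:
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    """Writes go to the primary immediately; replicas are synced later."""

    def __init__(self, num_replicas=2):
        self.primary = Replica()
        self.replicas = [Replica() for _ in range(num_replicas)]

    def write(self, key, value):
        # The write is acknowledged before the copies are updated.
        self.primary.data[key] = value

    def read_from_replica(self, i, key):
        # A read against a stale replica can miss the preceding write.
        return self.replicas[i].data.get(key)

    def sync(self):
        # "Lazy" redundancy: copies are brought up to date asynchronously.
        for r in self.replicas:
            r.data = copy.deepcopy(self.primary.data)

store = EventuallyConsistentStore()
store.write("x", 1)
print(store.read_from_replica(0, "x"))  # None: replica not yet updated
store.sync()
print(store.read_from_replica(0, "x"))  # 1: eventually consistent
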
It is often advised to focus system design on hardware scalability rather than on capacity. It is typically cheaper to add a new node to a system in order to achieve improved performance than to partake in performance tuning to improve the capacity that each node can handle. But this approach can have diminishing returns. For example: suppose 70% of a program can be sped up if parallelized and run on multiple CPUs instead of one. If α is the fraction of a calculation that is sequential, and 1 − α is the fraction that can be parallelized, the maximum speedup that can be achieved by using P processors is given according to Amdahl's law:

speedup(P) = 1 / (α + (1 − α) / P)

Substituting the values for this example (α = 0.3) and using 4 processors, we get

speedup(4) = 1 / (0.3 + 0.7 / 4) ≈ 2.105

If we double the compute power to 8 processors, we get

speedup(8) = 1 / (0.3 + 0.7 / 8) ≈ 2.581

Doubling the processing power has only improved the speedup by roughly one-fifth. If the whole problem were parallelizable, the speedup would also double. Therefore, throwing in more hardware is not necessarily the optimal approach.
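As a quick check of the arithmetic above, a few lines of Python evaluate Amdahl's law for this example; the function name is just illustrative.

def amdahl_speedup(sequential_fraction, processors):
    """Maximum speedup with P processors under Amdahl's law."""
    return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / processors)

alpha = 0.3  # 70% of the program can be parallelized
for p in (4, 8):
    print(f"P = {p}: speedup = {amdahl_speedup(alpha, p):.3f}")
# P = 4: speedup = 2.105
# P = 8: speedup = 2.581  (about 1.23x better, not 2x)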

Weak versus strong scaling

In the context of high-performance computing there are two common notions of scalability. Strong scaling is defined as how the solution time varies with the number of processors for a fixed total problem size. Weak scaling is defined as how the solution time varies with the number of processors for a fixed problem size per processor.
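A short Python sketch of how the two notions are typically quantified as parallel efficiencies; the timing numbers are made up for illustration.

def strong_scaling_efficiency(t1, tp, p):
    # Fixed total problem size: speedup = t1 / tp, efficiency = speedup / p.
    return (t1 / tp) / p

def weak_scaling_efficiency(t1, tp):
    # Fixed problem size per processor: ideally the runtime stays constant.
    return t1 / tp

# Hypothetical measurements (seconds) on 1 and 4 processors.
print(strong_scaling_efficiency(t1=100.0, tp=30.0, p=4))  # ~0.83
print(weak_scaling_efficiency(t1=100.0, tp=110.0))        # ~0.91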