In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next one.
The elements of a pipeline are often executed in parallel or in time-sliced fashion. Some amount of buffer storage is often inserted between elements.
Some operating systems may provide UNIX-like syntax to string several program runs in a pipeline, but implement the latter as simple serial execution rather than true pipelining: they wait for each program to finish before starting the next one.

Pipelining is a commonly used concept in everyday life. For example, in the assembly line of a car factory, each specific task, such as installing the engine, installing the hood, or installing the wheels, is often done by a separate work station.
The stations carry out their tasks in parallel, each on a different car.
Once a car has had one task performed, it moves to the next station. Suppose that assembling one car requires three tasks that take 20, 10, and 15 minutes, respectively. Then, if all three tasks were performed by a single station, the factory would output one car every 45 minutes. By using a pipeline of three stations, the factory would output the first car in 45 minutes, and then a new one every 20 minutes.
As this example shows, pipelining does not decrease the latency, that is, the total time for one item to go through the whole system. It does however increase the system's throughput, that is, the rate at which new items are processed after the first one. Since the throughput of a pipeline cannot be better than that of its slowest element, the designer should try to divide the work and resources among the stages so that they all take the same time to complete their tasks.
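The latency and throughput arithmetic above can be sketched in a few lines of Python, using the stage times from the car-assembly example (the helper function name is illustrative):

```python
# Hypothetical stage times (minutes) from the car-assembly example.
stage_times = [20, 10, 15]

# Latency: one item must pass through every stage in turn.
latency = sum(stage_times)      # 45 minutes for the first car

# Steady-state output interval: limited by the slowest stage.
interval = max(stage_times)     # one new car every 20 minutes

def total_time(n, stages):
    """Time to finish n items: fill the pipeline once, then one
    item per slowest-stage interval."""
    return sum(stages) + (n - 1) * max(stages)

print(latency, interval, total_time(4, stage_times))  # 45 20 105
```

With 4 cars, a single 45-minute station would need 180 minutes, while the pipeline finishes in 105, even though each individual car still spends 45 minutes in the system.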
In the car assembly example above, if the three tasks took 15 minutes each, instead of 20, 10, and 15 minutes, the latency would still be 45 minutes, but a new car would then be finished every 15 minutes, instead of every 20.

Under ideal circumstances, if all processing elements are synchronized and take the same amount of time to process an item, then each item can be received by each element just as it is released by the previous one, in a single clock cycle.
That way, the items will flow through the pipeline at a constant speed, like waves in a water channel.
In such "wave pipelines", no synchronization or buffering is needed between the stages, besides the storage needed for the data items.
More generally, buffering between the pipeline stages is necessary when the processing times are irregular, or when items may be created or destroyed along the pipeline. For example, in a graphics pipeline that processes triangles to be rendered on the screen, an element that checks the visibility of each triangle may discard it, if it is invisible, or may output two or more triangular pieces of it, if it is partly hidden.
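A stage that may discard items or split them into several pieces can be modeled with a generator, as in this minimal sketch of the visibility check described above (the triangle representation and field names are purely illustrative):

```python
# Illustrative pipeline stage: a triangle may be passed through,
# discarded, or split into two pieces, so the stage can emit
# zero, one, or more items per input item.
def visibility_stage(triangles):
    for tri in triangles:
        if tri["visible"] == "no":
            continue                      # discard invisible triangles
        elif tri["visible"] == "partial":
            yield {**tri, "part": 1}      # split a partly hidden triangle
            yield {**tri, "part": 2}
        else:
            yield tri                     # pass fully visible ones through

tris = [{"id": 1, "visible": "yes"},
        {"id": 2, "visible": "no"},
        {"id": 3, "visible": "partial"}]
print(list(visibility_stage(tris)))  # 3 items out of 3 in, but not 1:1
```

Because the number of outputs per input varies, downstream stages cannot assume one item arrives per clock, which is exactly why buffering becomes necessary.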
Buffering is also needed to accommodate irregularities in the rates at which the application feeds items to the first stage and consumes the output of the last one.
The buffer between two stages may be simply a hardware register with suitable synchronization and signalling logic between the two stages.
When a stage A stores a data item in the register, it sends a "data available" signal to the next stage B. Once B has used that data, it responds with a "data received" signal to A.
Stage A halts, waiting for this signal, before storing the next data item into the register. Stage B halts, waiting for the "data available" signal, if it is ready to process the next item but stage A has not provided it yet. If the processing times of an element are variable, the whole pipeline may often have to stop, waiting for that element and all the previous ones to consume the items in their input buffers.
The frequency of such pipeline stalls can be reduced by providing space for more than one item in the input buffer of that stage.
Such a multiple-item buffer is usually implemented as a first-in, first-out queue. The upstream stage may still have to be halted when the queue gets full, but the frequency of those events will decrease as more buffer slots are provided. Queueing theory can predict the number of buffer slots needed, depending on the variability of the processing times and on the desired performance.
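A bounded first-in, first-out buffer of this kind maps directly onto Python's `queue.Queue`, whose blocking `put` is the software analogue of the upstream stage halting when the queue is full (the queue size of 4 and the end-of-stream sentinel are illustrative choices):

```python
import queue
import threading

# Bounded FIFO between two stages: 4 buffer slots.
buf = queue.Queue(maxsize=4)
results = []

def producer():
    for i in range(10):
        buf.put(i)        # blocks ("stalls") while all 4 slots are full
    buf.put(None)         # sentinel marking the end of the stream

def consumer():
    while (item := buf.get()) is not None:
        results.append(item * item)   # the downstream stage's work

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # squares of 0..9, in FIFO order
```

Raising `maxsize` reduces how often the producer stalls, at the cost of more buffered items in flight, which is the trade-off the queueing analysis quantifies.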
If some stage takes or may take much longer than the others, and cannot be sped up, the designer can provide two or more processing elements to carry out that task in parallel, with a single input buffer and a single output buffer. As each element finishes processing its current data item, it delivers it to the common output buffer, and takes the next data item from the common input buffer.
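Replicating the slow stage can be sketched with two worker threads draining a common input queue and feeding a common output queue; the worker count, the stand-in task, and the one-sentinel-per-worker shutdown are all illustrative:

```python
import queue
import threading

# Common input and output buffers shared by all replicas of the slow stage.
inputs, outputs = queue.Queue(), queue.Queue()

def worker():
    # Each replica repeatedly takes the next item from the common
    # input buffer and delivers its result to the common output buffer.
    while (item := inputs.get()) is not None:
        outputs.put(item + 100)       # stand-in for the slow task

for i in range(6):
    inputs.put(i)

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    inputs.put(None)                  # one shutdown sentinel per worker
    w.start()
for w in workers:
    w.join()

# Results may arrive out of order, since either replica can finish first.
results = sorted(outputs.get() for _ in range(6))
print(results)  # [100, 101, 102, 103, 104, 105]
```

Note the sort before printing: with parallel replicas, items no longer leave the stage in arrival order, which a real pipeline must either tolerate or correct downstream.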
This concept of a "non-linear" or "dynamic" pipeline is exemplified by shops or banks that have two or more cashiers serving clients from a single waiting queue.

In some applications, the processing of an item Y by a stage A may depend on the results or effect of processing a previous item X by some later stage B of the pipeline.
In that case, stage A cannot correctly process item Y until item X has cleared stage B. This situation occurs very often in instruction pipelines.
For example, suppose that Y is an arithmetic instruction that reads the contents of a register that was supposed to have been modified by an earlier instruction X. Let A be the stage that fetches the instruction operands, and B be the stage that writes the result to the specified register. If stage A tries to process instruction Y before instruction X reaches stage B, the register may still contain the old value, and the effect of Y would be incorrect. In order to handle such conflicts correctly, the pipeline must be provided with extra circuitry or logic that detects them and takes the appropriate action.
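The detection half of that logic amounts to checking whether any register an instruction reads is still pending a write by an in-flight instruction. A minimal sketch, with a purely illustrative instruction format of (name, destination register, source registers):

```python
# Illustrative read-after-write hazard check: does `instr` read a
# register that some still-in-flight instruction has yet to write?
def must_stall(instr, in_flight):
    _, _, sources = instr
    pending_writes = {dest for _, dest, _ in in_flight}
    return any(src in pending_writes for src in sources)

x = ("X", "r1", ["r2", "r3"])   # X writes r1
y = ("Y", "r4", ["r1", "r5"])   # Y reads r1, so Y depends on X

print(must_stall(y, in_flight=[x]))   # True: Y must wait for X
print(must_stall(y, in_flight=[]))    # False: X has cleared the pipeline
```

A real pipeline would re-run this check every cycle as instructions advance, stalling stage A (or forwarding the result early) whenever the check fires.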
Strategies for doing so include stalling the pipeline until the conflict is resolved, forwarding results directly between stages, and guessing the outcome and backtracking if the guess turns out to be wrong.

A pipelined system typically requires more resources (circuit elements, processing units, computer memory, etc.) than one that executes one batch at a time. Moreover, the transfer of items between separate processing elements may increase the latency, especially for long pipelines.
The additional complexity cost of pipelining may be considerable if there are dependencies between the processing of different items, especially if a guess-and-backtrack strategy is used to handle them.
Indeed, the cost of implementing that strategy for complex instruction sets has motivated some radical proposals to simplify computer architecture, such as RISC and VLIW. Compilers also have been burdened with the task of rearranging the machine instructions so as to improve the performance of instruction pipelines.