Reconfigurable computing
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible, high-speed computing fabrics such as field-programmable gate arrays (FPGAs). The principal difference from ordinary microprocessors is the ability to make substantial changes to the datapath itself, in addition to the control flow. The main difference from custom hardware, i.e. application-specific integrated circuits (ASICs), is the ability to adapt the hardware at runtime by "loading" a new circuit onto the reconfigurable fabric.
History and properties
The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin's landmark paper proposed a computer made of a standard processor and an array of "reconfigurable"
hardware. The main processor would control the behavior of the reconfigurable hardware, which would be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to perform some other task. The result was a hybrid computer structure combining the flexibility of software with the speed of hardware; unfortunately, the idea was far ahead of the electronic technology of its time.
In the 1980s and 1990s there was a renaissance in this area of research, with many reconfigurable architectures proposed in industry and academia, including COPACOBANA, Matrix, Garp, Elixent, PACT XPP, Silicon Hive, Montium, Pleiades, Morphosys, and PiCoGA. Such designs became feasible thanks to the steady progress of silicon technology, which allowed complex designs to be implemented on a single chip. The world's first commercial reconfigurable computer, the Algotronix CHS2X4, was completed in 1991. It was not a commercial success, but was promising enough that Xilinx
(the inventor of the Field-Programmable Gate Array, FPGA) bought the technology and hired the Algotronix staff.
Reconfigurable computing as a paradigm shift: using the Anti Machine
Table 1: Nick Tredennick's Paradigm Classification Scheme

| | Resources | Algorithms | Programming source(s) |
|---|---|---|---|
| Early historic computers | fixed | fixed | none |
| von Neumann computer | fixed | variable | Software (instruction streams) |
| Reconfigurable computing systems | variable | variable | Configware (configuration) and Flowware (data streams) |
Computer scientist Reiner Hartenstein describes reconfigurable computing in terms of an anti machine that, according to him, represents a fundamental paradigm shift away from the more conventional von Neumann machine.
Hartenstein calls this the Reconfigurable Computing Paradox: software-to-configware migration (software-to-FPGA migration) yields reported speed-up factors of more than four orders of magnitude, as well as reductions in electricity consumption of up to almost four orders of magnitude, even though the technological parameters of FPGAs lag the Gordon Moore curve by about four orders of magnitude and their clock frequencies are substantially lower than those of microprocessors. Hartenstein attributes this paradox to the paradigm shift, and in part to the von Neumann syndrome.
The fundamental model of the reconfigurable computing machine paradigm, the data-stream-based anti machine, is well illustrated by its differences from earlier machine paradigms, as shown by Nick Tredennick's classification scheme of computing paradigms (see "Table 1: Nick Tredennick's Paradigm Classification Scheme").
The fundamental model of a reconfigurable computing machine, the data-stream-based anti machine (also called Xputer), is the counterpart of the instruction-stream-based von Neumann machine paradigm. This is illustrated by a simple reconfigurable system (not dynamically reconfigurable), which performs no instruction fetch at run time; the reconfiguration, done before run time, can be regarded as a kind of super instruction fetch. An anti machine has no program counter. Because it is data-stream-driven, it has data counters instead. The definition of the term data stream is adopted here from the systolic array literature, which specifies which data item has to enter or leave which port at which time; here the ports belong to the reconfigurable system, which may be fine-grained (e.g. using FPGAs), coarse-grained, or a mixture of both.
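To make the contrast concrete, here is a minimal sketch (hypothetical Python, not from the paradigm literature itself): a von Neumann interpreter advances a program counter over an instruction stream, while an anti machine keeps a fixed, pre-configured datapath and advances data counters over data streams.

```python
# Hypothetical sketch: instruction-stream vs. data-stream execution.

def von_neumann(program, memory):
    """A program counter steps through an instruction stream."""
    pc, acc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "load":
            acc = memory[arg]
        elif op == "add":
            acc += memory[arg]
        elif op == "store":
            memory[arg] = acc
        pc += 1          # control flow driven by the program counter
    return memory

def anti_machine(datapath, memory, in_addrs, out_addrs):
    """No program counter: a pre-configured datapath consumes a data
    stream whose order is defined by data counters (address sequences)."""
    for src, dst in zip(in_addrs, out_addrs):   # data counters advance
        memory[dst] = datapath(memory[src])     # fixed, configured circuit
    return memory

# The "configuration" step happens before run time: fix the circuit.
double = lambda x: 2 * x
mem = {0: 1, 1: 2, 2: 3, 10: 0, 11: 0, 12: 0}
anti_machine(double, mem, in_addrs=[0, 1, 2], out_addrs=[10, 11, 12])
# mem[10], mem[11], mem[12] are now 2, 4, 6
```

The point of the sketch is that in `anti_machine` the algorithm is frozen into the datapath before execution, and only the data sequencing varies at run time.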
The systolic array community, originally (in the early 1980s) composed mainly of mathematicians, defined only one half of the anti machine: the datapath, i.e. the systolic array (see also Super systolic array). It did not define or model the data sequencer methodology, considering it outside its scope to specify where the data streams come from or end up. The data sequencing part of the anti machine is instead modeled as distributed memory, preferably on chip, consisting of auto-sequencing memory (ASM) blocks. Each ASM block has a sequencer that includes a data counter. An example is the Generic Address Generator (GAG), a generalization of DMA.
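As an illustrative sketch (the function names and semantics below are assumptions for illustration, not the actual GAG specification), an address generator that emits nested, multi-dimensional address sequences generalizes the single linear stride of a plain DMA transfer:

```python
# Illustrative sketch: a plain DMA engine emits one linear address
# sequence, while a GAG-style generator can emit nested sequences
# (e.g. a 2-D block scan over a row-major array).

def dma_addresses(base, stride, count):
    """Classic DMA: base + i*stride for i in 0..count-1."""
    return [base + i * stride for i in range(count)]

def gag_addresses(base, dims, strides):
    """Nested address sequence: one (limit, stride) pair per loop level.
    With a single level this degenerates to dma_addresses."""
    addrs = [base]
    for limit, stride in zip(dims, strides):
        addrs = [a + i * stride for a in addrs for i in range(limit)]
    return addrs

# 3x4 block scan of a row-major array with row stride 8, starting at
# address 10: rows 10, 18, 26, four consecutive words per row.
print(gag_addresses(10, dims=[3, 4], strides=[8, 1]))
```

In this reading, each ASM block's data counter steps through such a sequence autonomously, so no central processor has to compute addresses at run time.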
See also
- Reconfigurable computing terminology
- Partial re-configuration
- Sprinter (computer)
- 1chipMSX
- PSoC
- PipeRench
- Computing with Memory
- High-Performance Reconfigurable Computing
Further reading
- S. Hauck and A. DeHon, Reconfigurable Computing: The Theory and Practice of FPGA-Based Computing, Morgan Kaufmann, 2008.
- J. Henkel, S. Parameswaran (editors): Designing Embedded Processors. A Low Power Perspective; Springer Verlag, March 2007
- J. Teich (editor) et al.: Reconfigurable Computing Systems. Special Topic Issue of Journal it — Information Technology, Oldenbourg Verlag, Munich, Vol. 49 (2007), Issue 3
- T.J. Todman, G.A. Constantinides, S.J.E. Wilton, O. Mencer, W. Luk and P.Y.K. Cheung, "Reconfigurable Computing: Architectures and Design Methods", IEE Proceedings: Computer & Digital Techniques, Vol. 152, No. 2, March 2005, pp. 193–208.
- A. Zomaya (editor): Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer Verlag, 2006
- J. M. Arnold and D. A. Buell, "VHDL programming on Splash 2," in More FPGAs, Will Moore and Wayne Luk, editors, Abingdon EE & CS Books, Oxford, England, 1994, pp. 182–191. (Proceedings,International Workshop on Field-Programmable Logic, Oxford, 1993.)
- J. M. Arnold, D. A. Buell, D. Hoang, D. V. Pryor, N. Shirazi, M. R. Thistle, "Splash 2 and its applications, "Proceedings, International Conference on Computer Design, Cambridge, 1993, pp. 482–486.
- D. A. Buell and Kenneth L. Pocek, "Custom computing machines: An introduction," The Journal of Supercomputing, v. 9, 1995, pp. 219–230.
External links
- The Fine-grained Computing Group at Information Sciences Institute
- Reconfigurable computing lectures and tutorials at Brown University
- A Decade of Reconfigurable Computing: a Visionary Retrospective
- Reconfigurable Computing: Coming of Age
- The University of South Carolina Reconfigurable Computing Laboratory
- The Virginia Tech Configurable Computing Laboratory
- Reconfigurable Systems Summer Institute (RSSI)
- IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM)
- International Conference on Field-Programmable Logic and Applications (FPL)
- BYU Configurable Computing Laboratory's FPGA CAD tool set
- The Morphware Forum
- NSF Center for High-Performance Reconfigurable Computing (CHREC)
- The OpenFPGA effort
- RC Education Workshop
- Reconfigurable Architectures Workshop
- The George Washington University High Performance Computing Laboratory
- The University of Florida High-Performance Computing & Simulation Research Laboratory
- The University of Kansas Hybridthreads Project - OS for Hybrid CPU/FPGA chips
- Reconfigurable computing tools and O/S Support from the University of Wisconsin
- Circuits and Systems Group, Imperial College London
- Why we need reconfigurable computing education
- The on-line version of the MEANDER FPGA design framework
- FHPCA: FPGA High Performance Computing Alliance
- Website of the DRESD (Dynamic Reconfigurability in Embedded System Design) research project
- Advanced topics in computer architecture: chip multiprocessors and polymorphic processors (2003)
- UT Austin TRIPS multiprocessor
- UNC Charlotte reconfigurable computing cluster
- XiRisc/PiCoGA project at University of Bologna, Italy
- COPACOBANA Project, Germany