UPCRC Illinois
UPCRC Illinois is one of two Universal Parallel Computing Research Centers launched in 2008 by Microsoft Corporation and Intel Corporation to accelerate the development of mainstream parallel computing for consumer and business applications such as desktop and mobile computing. UPCRC Illinois is a joint research effort of the Illinois Department of Computer Science and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. Research is conducted by faculty members and graduate students from the departments of Computer Science and Electrical and Computer Engineering. UPCRC Illinois research faculty are led by Co-Directors Marc Snir and Wen-mei Hwu.

Research

The UPCRC Illinois whitepaper, Parallel Computing Research at Illinois: The UPCRC Agenda, describes three primary research themes in detail:


Focus on Disciplined Parallel Programming—Sequential languages have evolved to support well-structured programming, and provide safety and modularity. Mechanisms for parallel control, synchronization, and communication have not yet undergone a similar evolution. UPCRC Illinois takes the optimistic view that parallelism can be tamed for all to use by providing disciplined parallel programming models, supported by sophisticated development and execution environments.
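The flavor of a disciplined model can be sketched with a structured fork-join pattern: all parallelism is created and joined inside one well-scoped region, and the parallel tasks share no mutable state, so the result is deterministic by construction. This is an illustrative Python sketch (the `square` and `parallel_map` names are invented for the example, not part of any UPCRC language):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # A pure function: no shared mutable state, so running it
    # in parallel is deterministic by construction.
    return x * x

def parallel_map(fn, data, workers=4):
    # Structured (fork-join) parallelism: every task is created and
    # joined inside this scope, so no parallelism "leaks" out of it.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data))

print(parallel_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the discipline is that the caller of `parallel_map` reasons about it exactly as it would about a sequential map.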


Multi-Front Attack on Multicore Programming—UPCRC Illinois is taking an integrated, broad-based attack on parallelism at all levels of the system stack, from applications down to hardware, using every weapon in the arsenal to enable performance, scalability, and programmability. This includes investigating disciplined parallel languages, metaprogramming and autotuners, and domain-specific environments; developing a powerful translation environment that exploits information from multiple sources at different times in the life of a program; developing an adaptive runtime to handle heterogeneity and automate resource management; developing new hardware mechanisms to enhance performance, scalability, and programmability; and rethinking the customary division of labor among the layers of the system stack. Refactoring tools will help move existing code to new environments, and techniques and tools based on formal methods will help ensure correctness.
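An autotuner in the sense used above can be sketched in miniature: given several interchangeable implementations of the same kernel, benchmark each on representative input and keep the fastest. The variants and the `autotune` helper below are illustrative inventions for this sketch, not an actual UPCRC tool:

```python
import timeit

# Three interchangeable variants of one kernel: summing squares.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_genexpr(n):
    return sum(i * i for i in range(n))

def sum_squares_formula(n):
    # Closed form for 0^2 + 1^2 + ... + (n-1)^2.
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

VARIANTS = [sum_squares_loop, sum_squares_genexpr, sum_squares_formula]

def autotune(variants, n=10_000, repeats=3):
    # Empirical autotuning: verify each variant against a reference
    # answer, time it on representative input, keep the fastest.
    reference = variants[0](n)
    best, best_time = None, float("inf")
    for fn in variants:
        assert fn(n) == reference, f"{fn.__name__} disagrees"
        t = min(timeit.repeat(lambda: fn(n), number=10, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

print("selected:", autotune(VARIANTS).__name__)
```

Real autotuners search much larger spaces (tile sizes, unroll factors, code generated by metaprogramming), but the select-by-measurement loop is the same.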



Human-Centric Vision of Future Consumer Applications—Driving the agenda is a human-centric vision of future consumer applications, backed by research on application technologies to enable quantum leaps in immersive visual realism, reliable natural-language processing, and robust telepresence. Investigating these applications reveals new parallel patterns and serves as a testbed for evaluating, refining, and ultimately proving UPCRC Illinois ideas on multicore programming.

Parallel Computing History at Illinois

Illinois's history in parallel computing stretches back more than 40 years. From the first academic parallel supercomputer, the ILLIAC IV, started in 1964, to today's work to install the first petascale computer, Blue Waters, Illinois has defined the landscape of parallel computing. Contributions from past and current Illinois faculty include:
  • ILLIAC
  • CEDAR
  • Illinois Cache Coherence (MESI) Protocol
  • OpenMP
  • MPI
  • Path Pascal
  • Actors
  • Java and C++ memory models
  • Compilers and auto-parallelization techniques
     – Analyzer, Polaris, Parafrase, IMPACT, LLVM
  • Race detection techniques
  • Parallel runtime systems – Chare Kernel, Charm++
  • IBM/DARPA PERCS – a precursor to IBM’s Power 7
  • AVIO to detect atomicity violations
  • Parallel programming patterns
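Several of these contributions remain everyday concepts. The Illinois (MESI) cache-coherence protocol, for example, gives each cache line one of four states — Modified, Exclusive, Shared, Invalid — and can be illustrated as a toy state machine. This is a simplified sketch (it collapses bus transactions and write-backs, and assumes a read miss finds no other sharer), not a full coherence simulator:

```python
# Toy MESI state machine for a single cache line, driven by local
# processor events and snooped bus events.
TRANSITIONS = {
    # (state, event) -> next state
    ("I", "local_read"): "E",   # read miss; assume no other sharer
    ("I", "local_write"): "M",  # write miss brings the line in Modified
    ("E", "local_write"): "M",  # silent upgrade: no bus traffic needed
    ("E", "bus_read"): "S",     # another cache reads the line
    ("S", "local_write"): "M",  # upgrade, invalidating other copies
    ("S", "bus_write"): "I",    # another cache writes: our copy is stale
    ("M", "bus_read"): "S",     # supply data, downgrade to Shared
    ("M", "bus_write"): "I",    # another cache writes: invalidate
}

def step(state, event):
    # Events not listed leave the state unchanged, e.g. a local read
    # of a valid line, or snooped traffic for an Invalid line.
    return TRANSITIONS.get((state, event), state)

line = "I"
for event in ["local_read", "local_write", "bus_read", "bus_write"]:
    line = step(line, event)
    print(event, "->", line)
```

The Exclusive state is the Illinois protocol's key addition over the earlier MSI scheme: a line known to be the only cached copy can be written without any bus traffic.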

The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 