Decomposition (computer science)

Decomposition in computer science, also known as factoring, refers to the process by which a complex problem or system is broken down into parts that are easier to conceive, understand, program, and maintain.

Overview

There are different types of decomposition defined in computer science:
  • In structured programming, algorithmic decomposition breaks a process down into well-defined steps.
  • Structured analysis breaks down a software system from the system context level to system functions and data entities, as described by Tom DeMarco.
  • Object-oriented decomposition, on the other hand, breaks a large system down into progressively smaller classes or objects that are responsible for some part of the problem domain.
  • According to Booch, algorithmic decomposition is a necessary part of object-oriented analysis and design, but object-oriented systems start with and emphasize decomposition into classes (both styles are sketched in the example below).
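
To make the contrast concrete, here is a minimal Python sketch of a hypothetical payroll task, decomposed first algorithmically into steps and then into classes. The task, the names, and the pay rule are assumptions invented for illustration, not part of any standard.

  # Hypothetical payroll task (illustrative only).
  # Algorithmic decomposition: the process is a sequence of well-defined
  # steps, each realized as a subroutine.
  def read_hours(raw: str) -> float:
      return float(raw)                      # step 1: parse the timesheet entry

  def compute_pay(hours: float, rate: float) -> float:
      return hours * rate                    # step 2: apply the pay rule

  def format_payslip(pay: float) -> str:
      return f"Pay due: {pay:.2f}"           # step 3: render the result

  def run_payroll(raw: str, rate: float) -> str:
      # The top-level process simply performs the steps in order.
      return format_payslip(compute_pay(read_hours(raw), rate))

  # Object-oriented decomposition: the same problem is broken into classes,
  # each responsible for one part of the problem domain.
  class Timesheet:
      def __init__(self, raw: str):
          self.hours = float(raw)

  class PayRule:
      def __init__(self, rate: float):
          self.rate = rate

      def pay_for(self, sheet: Timesheet) -> float:
          return sheet.hours * self.rate

  class Payslip:
      def __init__(self, amount: float):
          self.amount = amount

      def render(self) -> str:
          return f"Pay due: {self.amount:.2f}"

  print(run_payroll("7.5", 20.0))                                     # algorithmic
  print(Payslip(PayRule(20.0).pay_for(Timesheet("7.5"))).render())    # object-oriented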


More generally, functional decomposition in computer science is a technique for mastering the complexity of the function of a model: a functional relationship is resolved into constituent parts in such a way that the original function can be reconstructed from those parts by function composition. A functional model of a system is thereby replaced by a series of functional models of its subsystems.
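
As a rough illustration in Python, the hypothetical sketch below resolves a text-normalization function into subfunctions whose composition reconstructs the original; the function and its parts are invented for the example.

  # Hypothetical text-normalization function, decomposed into subfunctions.
  def strip_whitespace(s: str) -> str:
      return s.strip()

  def lowercase(s: str) -> str:
      return s.lower()

  def collapse_spaces(s: str) -> str:
      return " ".join(s.split())

  def normalize(s: str) -> str:
      # The original function is reconstructed from its constituent
      # parts by function composition.
      return collapse_spaces(lowercase(strip_whitespace(s)))

  assert normalize("  Hello   WORLD ") == "hello world"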

Decomposition paradigm

A decomposition paradigm in computer programming is a strategy for organizing a program as a number of parts, and it usually implies a specific way of organizing the program text. The aim of using a decomposition paradigm is usually to optimize some metric related to program complexity, for example the modularity of the program or its maintainability.

Most decomposition paradigms suggest breaking down a program into parts so as to minimize the static dependencies among those parts and to maximize each part's cohesiveness. Popular decomposition paradigms include the procedural, module, abstract data type, and object-oriented ones.
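
As an illustrative sketch of one such paradigm, the hypothetical Python stack below shows abstract-data-type decomposition: the representation and the operations on it form one cohesive part, while client code depends only on the public operations, keeping the static dependencies between the parts minimal.

  # Hypothetical Stack ADT (illustrative only).
  class Stack:
      """All knowledge of the representation lives in this one part."""
      def __init__(self):
          self._items = []          # the representation is private to the ADT

      def push(self, x):
          self._items.append(x)

      def pop(self):
          return self._items.pop()

      def is_empty(self) -> bool:
          return not self._items

  def reverse(values):
      # A client part: it depends only on the Stack interface, so the
      # representation can change without touching this code.
      s = Stack()
      for v in values:
          s.push(v)
      out = []
      while not s.is_empty():
          out.append(s.pop())
      return out

  assert reverse([1, 2, 3]) == [3, 2, 1]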

The concept of a decomposition paradigm is entirely independent of, and different from, that of a model of computation, but the two are often confused: most often, the functional model of computation is confused with procedural decomposition, and the actor model of computation with object-oriented decomposition.
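
A small, hypothetical Python sketch may help keep the two notions apart: the classes below form an object-oriented decomposition, yet they run under an ordinary sequential model of computation, exchanging synchronous method calls rather than actor-style asynchronous messages.

  # Hypothetical classes (illustrative only): object-oriented decomposition
  # without any actor-model machinery.
  class Counter:
      def __init__(self):
          self.n = 0

      def increment(self):
          self.n += 1                  # a plain method call, not a message send

  class Button:
      def __init__(self, counter: Counter):
          self.counter = counter

      def click(self):
          self.counter.increment()     # synchronous: the caller blocks until done

  b = Button(Counter())
  b.click()
  b.click()
  assert b.counter.n == 2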

Decomposition diagram



A decomposition diagram shows a high-level function, process, organization, data subject area, or other type of object broken down into lower level, more detailed components. For example, decomposition diagrams may represent organizational structure or functional decomposition into processes. Decomposition diagrams provide a logical hierarchical decomposition of a system.
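
Although decomposition diagrams are normally drawn graphically, a rough Python sketch can convey the same hierarchical idea by rendering a hypothetical functional decomposition as an indented tree; the function names are invented for illustration.

  # A hypothetical functional decomposition of an "Order Management" function.
  tree = {
      "Order Management": {
          "Take Order": {"Validate Customer": {}, "Record Items": {}},
          "Fulfil Order": {"Check Inventory": {}, "Ship Goods": {}},
          "Bill Order": {"Issue Invoice": {}, "Process Payment": {}},
      },
  }

  def render(node: dict, depth: int = 0) -> None:
      # Print each component indented under its parent.
      for name, children in node.items():
          print("    " * depth + name)
          render(children, depth + 1)

  render(tree)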

See also

  • Component-based software engineering
  • Dynamization
  • Duplicate code
  • ERROL
  • Event partitioning
  • How to Solve It
  • Integrated Enterprise Modeling
  • Personal information management
  • Readability
  • Subroutine
