Extract, transform, load
Extract, transform, and load (ETL) is a process in database usage and especially in data warehousing that involves:
  • Extracting data from outside sources
  • Transforming it to fit operational needs (which can include quality levels)
  • Loading it into the end target (database or data warehouse)

Extract

The first part of an ETL process involves extracting the data from the source systems. In many cases this is the most challenging aspect of ETL, as extracting data correctly sets the stage for how well subsequent processes go. Most data warehousing projects consolidate data from different source systems, and each separate system may use a different data organization or format. Common data source formats are relational databases and flat files, but sources may also include non-relational database structures such as Information Management System (IMS), other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even data fetched from outside sources by web spidering or screen-scraping. Streaming the extracted data from the source and loading it on the fly into the destination database is another way of performing ETL when no intermediate data storage is required. In general, the goal of the extraction phase is to convert the data into a single format appropriate for transformation processing.

An intrinsic part of the extraction involves parsing the extracted data to check whether it meets an expected pattern or structure. If not, the data may be rejected entirely or in part.
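As an illustration of this parse-and-check step, the following Python sketch reads a hypothetical comma-separated export and separates rows that match the expected structure from those that do not. The file layout, column names, and validation pattern are assumptions made for the example, not part of any particular ETL tool.

```python
import csv
import re

# Expected structure of each extracted record (assumed layout for this example).
EXPECTED_COLUMNS = ["roll_no", "age", "salary"]
ROLL_NO_PATTERN = re.compile(r"^\d{1,10}$")   # roll_no must be numeric

def extract(path):
    """Parse a CSV export, separating rows that meet the expected pattern.

    Rows that fail validation are collected separately so they can be
    rejected entirely or in part, as described above.
    """
    accepted, rejected = [], []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_COLUMNS:
            raise ValueError("source file does not match expected structure")
        for row in reader:
            if ROLL_NO_PATTERN.match(row["roll_no"] or ""):
                accepted.append(row)
            else:
                rejected.append(row)
    return accepted, rejected
```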

Transform

The transform stage applies a series of rules or functions to the extracted data from the source to derive the data for loading into the end target. Some data sources will require very little or even no manipulation of data. In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the target database:
  • Selecting only certain columns to load (or selecting null columns not to load). For example, if the source data has three columns (also called attributes), such as roll_no, age, and salary, the extraction may take only roll_no and salary. Similarly, the extraction mechanism may ignore all records where salary is not present (salary = null).
  • Translating coded values (e.g., if the source system stores 1 for male and 2 for female, but the warehouse stores M for male and F for female); this calls for automated data cleansing, and no manual cleansing occurs during ETL (a small sketch of this kind of rule follows the list)
  • Encoding free-form values (e.g., mapping "Male" to "1")
  • Deriving a new calculated value (e.g., sale_amount = qty * unit_price)
  • Sorting
  • Joining data from multiple sources (e.g., lookup, merge) and deduplicating the data
  • Aggregation (for example, rollup: summarizing multiple rows of data into total sales for each store, for each region, etc.)
  • Generating surrogate-key values
  • Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
  • Splitting a column into multiple columns (e.g., converting a comma-separated list stored as a string in one column into individual values in separate columns)
  • Disaggregation of repeating columns into a separate detail table (e.g., moving a series of addresses in one record into single addresses in a set of records in a linked address table)
  • Looking up and validating the relevant data from tables or referential files for slowly changing dimensions
  • Applying any form of simple or complex data validation. If validation fails, it may result in a full, partial or no rejection of the data, and thus none, some or all the data is handed over to the next step, depending on the rule design and exception handling. Many of the above transformations may result in exceptions, for example, when a code translation parses an unknown code in the extracted data.
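To make a few of these transformation types concrete, the following sketch applies a handful of such rules in plain Python: selecting columns, translating coded gender values, deriving a calculated value, and ignoring records with a missing salary. The column names and code mapping are carried over from the examples above and are illustrative only.

```python
# Assumed code translation: source stores 1/2, warehouse stores M/F.
GENDER_CODES = {"1": "M", "2": "F"}

def transform(rows):
    """Apply simple transformation rules to extracted rows (dicts)."""
    out = []
    for row in rows:
        # Ignore records where salary is not present (salary = null).
        if row.get("salary") in (None, ""):
            continue
        out.append({
            # Select only certain columns to load.
            "roll_no": row["roll_no"],
            "salary": float(row["salary"]),
            # Translate coded values (automated cleansing, no manual step).
            "gender": GENDER_CODES.get(row.get("gender", ""), "U"),
            # Derive a new calculated value.
            "sale_amount": float(row.get("qty", 0) or 0) * float(row.get("unit_price", 0) or 0),
        })
    return out
```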

Load

The load phase loads the data into the end target, usually the data warehouse (DW). Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly or monthly basis. Other DWs (or even other parts of the same DW) may add new data in a historicized form, for example, hourly. To understand this, consider a DW that is required to maintain sales records of the last year: this DW overwrites any data older than a year with newer data, but the entries for any one-year window are made in a historicized manner. The timing and scope of replacing or appending data are strategic design choices that depend on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the DW.

As the load phase interacts with a database, the constraints defined in the database schema, as well as in triggers activated upon data load, apply (for example, uniqueness, referential integrity, mandatory fields), which also contribute to the overall data-quality performance of the ETL process. A minimal sketch of this interaction follows.
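The following sketch shows that interaction in miniature, using SQLite purely as a stand-in target: uniqueness and NOT NULL constraints defined in the schema are enforced as rows are loaded, and violating rows are diverted to a reject list rather than aborting the whole load. The table layout is invented for the example.

```python
import sqlite3

def load(rows, db_path="warehouse.db"):
    """Load transformed rows into the target, respecting schema constraints."""
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS fact_sales (
            roll_no     TEXT NOT NULL,
            salary      REAL NOT NULL,
            gender      TEXT,
            sale_amount REAL,
            UNIQUE (roll_no)          -- uniqueness constraint
        )
    """)
    rejected = []
    with conn:  # commit on success
        for row in rows:
            try:
                conn.execute(
                    "INSERT INTO fact_sales VALUES (:roll_no, :salary, :gender, :sale_amount)",
                    row,
                )
            except sqlite3.IntegrityError:
                rejected.append(row)   # constraint violation: divert, do not abort
    conn.close()
    return rejected
```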
  • For example, a financial institution might have information on a customer in several departments and each department might have that customer's information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all this data and consolidate it into a uniform presentation, such as for storing in a database or data warehouse.

  • Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and most likely a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use.

  • An example of this would be an Expense and Cost Recovery System (ECRS) such as used by accountancies, consultancies and lawyers. The data usually ends up in the time and billing system, although some businesses may also utilize the raw data for employee productivity reports to Human Resources (personnel dept.) or equipment usage reports to Facilities Management.

Real-life ETL cycle

The typical real-life ETL cycle consists of the following execution steps (a minimal orchestration sketch follows the list):
  1. Cycle initiation
  2. Build reference data
  3. Extract (from sources)
  4. Validate
  5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
  6. Stage (load into staging tables, if used)
  7. Audit reports (for example, on compliance with business rules. Also, in case of failure, helps to diagnose/repair)
  8. Publish (to target tables)
  9. Archive
  10. Clean up
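A minimal driver for such a cycle can be written as a sequence of functions executed in order, with each step logged so that audit reporting and failure diagnosis are possible. The step functions in the sketch below are placeholders; only the control flow is the point.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Placeholder step implementations; each would be replaced by real logic.
def build_reference_data(ctx): pass
def extract_sources(ctx): pass
def validate(ctx): pass
def transform(ctx): pass
def stage(ctx): pass
def audit_report(ctx): pass
def publish(ctx): pass
def archive(ctx): pass
def clean_up(ctx): pass

STEPS = [build_reference_data, extract_sources, validate, transform,
         stage, audit_report, publish, archive, clean_up]

def run_cycle():
    """Cycle initiation: assign a run identifier and execute each step in order."""
    ctx = {"run_id": uuid.uuid4().hex}
    logging.info("cycle %s started", ctx["run_id"])
    for step in STEPS:
        logging.info("run %s: %s", ctx["run_id"], step.__name__)
        step(ctx)               # an unhandled exception here stops the cycle
    logging.info("cycle %s finished", ctx["run_id"])

if __name__ == "__main__":
    run_cycle()
```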

Challenges

ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.

The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that must be managed by the transform rule specifications, leading to amendments of the validation rules explicitly and implicitly implemented in the ETL process.

Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.

Design analysts should establish the scalability of an ETL system across the lifetime of its usage. This includes understanding the volumes of data that must be processed within service level agreements (SLAs). The time available to extract from source systems may change, which may mean the same amount of data has to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses holding tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch processing, to multiple-day micro-batch, to integration with message queues or real-time change-data capture for continuous transformation and update.

Performance

ETL vendors benchmark their systems at multiple terabytes per hour (roughly 1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and large amounts of memory.

In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
  • Direct Path Extract method or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract
  • most of the transformation processing outside of the database
  • bulk load operations whenever possible.

Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
  • Partition tables (and indices). Try to keep partitions similar in size (watch for null values which can skew the partitioning).
  • Do all validation in the ETL layer before the load. Disable integrity checking (disable constraint ...) in the target database tables during the load.
  • Disable triggers (disable trigger ...) in the target database tables during the load. Simulate their effect as a separate step.
  • Generate IDs in the ETL layer (not in the database).
  • Drop the indices (on a table or partition) before the load and recreate them after the load (SQL: drop index ...; create index ...); a sketch of this follows the list.
  • Use parallel bulk load when possible — works well when the table is partitioned or there are no indices. Note: attempt to do parallel loads into the same table (partition) usually causes locks — if not on the data rows, then on indices.
  • If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately. You can often do bulk load for inserts, but updates and deletes commonly go through an API (using SQL).
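The bulk-load and index-dropping advice above can be illustrated with a small sketch, again using SQLite only as a stand-in for the actual target database; the table and index names are invented. The index is dropped, the rows are inserted in one bulk call, and the index is rebuilt once afterwards.

```python
import sqlite3

def bulk_load(rows, db_path="warehouse.db"):
    """Bulk-load rows, dropping and recreating the index around the load."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS fact_sales "
                 "(roll_no TEXT, salary REAL, gender TEXT, sale_amount REAL)")
    with conn:
        # SQL: drop index ...
        conn.execute("DROP INDEX IF EXISTS ix_fact_sales_roll_no")
        # Bulk load in a single call instead of row-by-row round trips.
        conn.executemany(
            "INSERT INTO fact_sales VALUES (:roll_no, :salary, :gender, :sale_amount)",
            rows,
        )
        # SQL: create index ... (rebuilt once, after the load)
        conn.execute("CREATE INDEX ix_fact_sales_roll_no ON fact_sales (roll_no)")
    conn.close()
```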


Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database, so it makes sense to do it outside. On the other hand, if using distinct significantly (say, 100x) decreases the number of rows to be extracted, then it makes sense to remove duplications as early as possible, in the database, before unloading the data.

A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. You can usually achieve better performance by visualizing all processes on a graph, trying to reduce the graph by making maximum use of parallelism, and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and of their indices can really help.

Another common issue occurs when the data is spread between several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases - and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
  • Sources
  • Central ETL layer
  • Targets


This allows processing to take maximum advantage of parallel processing. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).

Of course, sometimes processing must take place sequentially. For example, you usually need to get dimensional (reference) data before you can get and validate the rows for main "fact" tables.

Parallel processing

A development in ETL software is the implementation of parallel processing. This has enabled a number of methods to improve the overall performance of ETL processes when dealing with large volumes of data.

ETL applications implement three main types of parallelism:
  • Data: By splitting a single sequential file into smaller data files to provide parallel access (a minimal sketch appears below).
  • Pipeline: Allowing the simultaneous running of several components on the same data stream. For example: looking up a value on record 1 at the same time as adding two fields on record 2.
  • Component: The simultaneous running of multiple processes on different data streams in the same job, for example, sorting one input file while removing duplicates on another file.


All three types of parallelism usually operate combined in a single job.
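Data parallelism, the first of the three types, can be sketched in a few lines of Python: a single input file is split into chunks and the chunks are transformed concurrently. The per-chunk work, the chunk size and the worker count are arbitrary example values.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def transform_chunk(lines):
    """Placeholder per-chunk work: here, just uppercase each line."""
    return [line.upper() for line in lines]

def chunks(iterable, size=10_000):
    """Split a single sequential stream into smaller blocks."""
    it = iter(iterable)
    while True:
        block = list(islice(it, size))
        if not block:
            return
        yield block

def parallel_transform(path, workers=4):
    """Process the chunks of one input file in parallel."""
    with open(path) as f, ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transform_chunk, chunks(f))
        return [line for block in results for line in block]
```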

An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.

Rerunnability, recoverability

Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with "row_id", and tag each piece of the process with "run_id". In case of a failure, having these IDs will help to roll back and rerun the failed piece.

Best practice also calls for "checkpoints", which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, and so on.
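A minimal sketch of this tagging-and-checkpoint idea follows, with invented file names: every row carries a row_id and the run_id of the current piece, and a small checkpoint file records which phase last completed, so that a failed run can be rolled back to and rerun from that point.

```python
import json
import os
import uuid

CHECKPOINT_FILE = "etl_checkpoint.json"   # assumed location for the example

def tag_rows(rows, run_id):
    """Tag each data row with a row_id and the run_id of the current piece."""
    return [dict(row, row_id=i, run_id=run_id) for i, row in enumerate(rows)]

def save_checkpoint(run_id, phase):
    """Record that `phase` completed, so a failed run can resume after it."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"run_id": run_id, "last_completed_phase": phase}, f)

def load_checkpoint():
    """Return the last recorded checkpoint, or None if there is none yet."""
    if not os.path.exists(CHECKPOINT_FILE):
        return None
    with open(CHECKPOINT_FILE) as f:
        return json.load(f)

# Usage: tag the rows, run a phase, then checkpoint before moving on.
run_id = uuid.uuid4().hex
rows = tag_rows([{"salary": 1000}], run_id)
save_checkpoint(run_id, "transform")
```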

Virtual ETL

Data virtualization has begun to advance ETL processing. The application of data virtualization to ETL allows solving the most common ETL tasks of data migration and application integration for multiple dispersed data sources. So-called virtual ETL operates with an abstracted representation of the objects or entities gathered from a variety of relational, semi-structured and unstructured data sources. ETL tools can leverage object-oriented modeling and work with entities' representations persistently stored in a centrally located hub-and-spoke architecture. Such a collection, containing representations of the entities or objects gathered from the data sources for ETL processing, is called a metadata repository, and it can reside in memory or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and data profiling consistently and in near-real time.

Best practices

Four-layered approach for ETL architecture design
  • Functional layer: Core functional ETL processing (extract, transform, and load).
  • Operational management layer: Job-stream definition and management, parameters, scheduling, monitoring, communication and alerting.
  • Audit, balance and control (ABC) layer: Job-execution statistics, balancing and controls, rejects- and error-handling, codes management.
  • Utility layer: Common components supporting all other layers.


Use file-based ETL processing where possible
  • Storage costs relatively little
  • Intermediate files serve multiple purposes:
    • Used for testing and debugging
    • Used for restart and recover processing
    • Used to calculate control statistics
  • Helps to reduce dependencies - enables modular programming.
  • Allows flexibility for job execution and scheduling
  • Better performance if coded properly, and can take advantage of parallel processing capabilities when the need arises.


Use data-driven methods and minimize custom ETL coding
  • Parameter-driven jobs, functions, and job-control
  • Code definitions and mapping in database
  • Consideration for data-driven tables to support more complex code-mappings and business-rule application (a minimal sketch follows this list).
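As a sketch of keeping code mappings in the database rather than in custom code (SQLite and the mapping-table layout here are assumptions for the example), the translation rules are read at run time and applied generically, so adding a new mapping becomes a data change instead of a code change.

```python
import sqlite3

def load_mappings(conn):
    """Read code mappings (domain, source_code, target_code) from the database."""
    mappings = {}
    for domain, src, dst in conn.execute(
            "SELECT domain, source_code, target_code FROM code_mapping"):
        mappings.setdefault(domain, {})[src] = dst
    return mappings

def apply_mappings(row, mappings):
    """Translate any column that has a mapping domain of the same name."""
    return {col: mappings.get(col, {}).get(value, value)
            for col, value in row.items()}

# Example: the gender translation driven entirely by table contents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE code_mapping (domain TEXT, source_code TEXT, target_code TEXT)")
conn.executemany("INSERT INTO code_mapping VALUES (?, ?, ?)",
                 [("gender", "1", "M"), ("gender", "2", "F")])
maps = load_mappings(conn)
print(apply_mappings({"gender": "1", "salary": "1000"}, maps))  # gender becomes 'M'
```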


Qualities of a good ETL architecture design
  • Performance
  • Scalable
  • Migratable
  • Recoverable (run_id, ...)
  • Operable (completion-codes for phases, re-running from checkpoints, etc.)
  • Auditable (in two dimensions: business requirements and technical troubleshooting)


Handling of non-desirable values (NULL values, erroneous values, etc.)

See: Dealing With Nulls In The Dimensional Model (Kimball University)
  • NULL DIMENSIONAL values
  • NULL FACT values
  • NULL PRIMARY and/or FOREIGN KEY values
  • Erroneous or undesirable values (one common handling approach is sketched below)
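One common way of handling NULL foreign-key values in a dimensional model, among the options discussed in the Kimball material referenced above, is to point such facts at a dedicated "Unknown" dimension row instead of loading a NULL key. The sketch below illustrates that idea with invented keys and a hypothetical lookup table; it is one option, not a prescription.

```python
UNKNOWN_CUSTOMER_KEY = -1   # surrogate key reserved for the "Unknown" dimension row

def resolve_customer_key(source_customer_id, lookup):
    """Map a source customer id to its warehouse surrogate key.

    NULL or missing source keys, and keys with no dimension row yet,
    are sent to the reserved 'Unknown' member rather than loading NULL
    into the fact table's foreign-key column.
    """
    if source_customer_id in (None, ""):
        return UNKNOWN_CUSTOMER_KEY
    return lookup.get(source_customer_id, UNKNOWN_CUSTOMER_KEY)

# Usage with a hypothetical lookup table {source key -> warehouse surrogate key}.
lookup = {"123-45-6789": 1001, "555-0100": 1002}
print(resolve_customer_key(None, lookup))           # -1 (Unknown)
print(resolve_customer_key("123-45-6789", lookup))  # 1001
```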

Dealing with keys

Keys are some of the most important objects in all relational databases, as they tie everything together. A primary key is a column that serves as the identifier for a given entity, whereas a foreign key is a column in another table that refers to a primary key.

These keys can also be made up of several columns, in which case they are composite keys. In many cases the primary key is an auto-generated integer that has no meaning for the business entity being represented and exists solely for the purposes of the relational database; this is commonly referred to as a surrogate key.

As there will usually be more than one data source being loaded into the warehouse, the keys are an important concern to be addressed.

Customers might be represented in several data sources: their Social Security Number (SSN) might be the primary key in one source, their phone number in another, and a surrogate key in the third. All of the customer information needs to be consolidated into one dimension table.

A recommended way to deal with this concern is to add a warehouse surrogate key, which is used as a foreign key from the fact table.

Usually, updates will occur to a dimension's source data, which must be reflected in the data warehouse. If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports.

That is done by creating a lookup table that contains the warehouse surrogate key and the originating key. This way, the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved.

The lookup table is used in different ways depending on the nature of the source data. There are five types to consider; three of them are described here, and the Type 2 approach is sketched below:

Type 1: The dimension row is simply updated to match the current state of the source system. The warehouse does not capture history. The lookup table is used to identify which dimension row to update or overwrite.

Type 2: A new dimension row is added with the new state of the source system, and a new surrogate key is assigned. The source key is no longer unique in the lookup table.

Fully logged: A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active, recording the time of deactivation.
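To make the Type 2 approach concrete, the following sketch uses plain Python dictionaries to stand in for the dimension and lookup tables (all names are invented): whenever the source row changes, a new dimension row with a new surrogate key is added, so the source key stops being unique in the lookup table.

```python
import itertools

surrogate_keys = itertools.count(1)   # warehouse surrogate key generator
dimension = {}                        # surrogate key -> dimension row
lookup = {}                           # source key    -> list of surrogate keys

def upsert_type2(source_key, attributes):
    """Type 2: add a new dimension row (new surrogate key) when the source changes."""
    existing = lookup.get(source_key, [])
    if existing and dimension[existing[-1]] == attributes:
        return existing[-1]                   # unchanged: reuse the current row
    new_key = next(surrogate_keys)            # a new surrogate key is assigned
    dimension[new_key] = dict(attributes)
    lookup.setdefault(source_key, []).append(new_key)   # source key no longer unique
    return new_key

# The same customer changes city: two dimension rows, one source key.
upsert_type2("CUST-42", {"name": "Ada", "city": "London"})
upsert_type2("CUST-42", {"name": "Ada", "city": "Paris"})
print(lookup["CUST-42"])   # e.g. [1, 2]
```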




Guidance is needed on which situations each of these options applies to. Is that solely a business decision, or do factors such as the update strategy (full wipe, incremental, etc.) also influence the choice?

Tools

Programmers can set up ETL processes using almost any programming language, but building such processes from scratch can become complex. Increasingly, companies are buying ETL tools to help in the creation of ETL processes.

By using an established ETL framework, one may increase one's chances of ending up with better connectivity and scalability. A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into Enterprise Application Integration, or even Enterprise Service Bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities.

Commercial

  • Pervasive Software;
  • SAS Data Integration Server (in earlier versions known as SAS ETL Studio (version 8) and SAS Data Integration Studio (version 9));
  • SAP BusinessObjects Data Integrator (before the acquisition of BusinessObjects by SAP, known as BusinessObjects Data Integrator);
  • Microsoft SQL Server Integration Services (SSIS, included in the Microsoft SQL Server product line);
  • IBM WebSphere DataStage;
  • Informatica PowerCenter;
  • Oracle Data Integrator (earlier owned by Sunopsis).

Free Demo software

  • Pentaho;
  • Talend Open Studio;
  • Scriptella;
  • JasperETL from JasperSoft;
  • CloverETL.

See also

  • Architecture Patterns (EA Reference Architecture)
  • Data cleansing
  • Data integration
  • Data mart
  • Data mediation
  • Data migration
  • Electronic Data Interchange (EDI)
  • Enterprise architecture
  • Expense and Cost Recovery System (ECRS)
  • Legal Electronic Data Exchange Standard (LEDES)
  • Metadata discovery
  • Online analytical processing