Hi, I have 100 records in my flat file. While loading into the staging table, if there is an error in even a single row, the entire file should be rejected. How can this be done?

Yes, there is the ROWS sqlldr parameter that controls the commit frequency. For example, with the SQL*Loader parameter ROWS=1000, we are asking SQL*Loader to commit after every thousand rows are loaded. You may also be able to set ROWS to a very high value to get SQL*Loader to commit only at the end of the job, but beware of potential undo (rollback segment) issues, such as the ORA-01555 "snapshot too old" error.

ERRORS (errors to allow)

Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in "Invoking SQL*Loader".
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value specified for ERRORS, then SQL*Loader terminates the load. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up to that point, however, is committed.
SQL*Loader maintains the consistency of records across all tables. Therefore, multitable loads do not terminate immediately if errors exceed the error limit. When SQL*Loader encounters the maximum number of errors for a multitable load, it continues to load rows to ensure that valid rows previously loaded into tables are loaded into all tables and rejected rows are filtered out of all tables.
In all cases, SQL*Loader writes erroneous records to the bad file.

The default commit is after every 64 records in conventional path mode. You can increase it to any number.

Use ERRORS=N in your sqlldr command line or parameter file, where N is an arbitrary large number. I don't know if there is any upper limit for this parameter. As you've already found out, the default is 50.
You can use ERRORS=0 and ROWS set to a high number, so that as soon as the error limit is hit your sqlldr job will quit and the uncommitted rows will be rolled back.

SQLLDR CONTROL=sample.ctl, LOG=bar.log, BAD=baz.bad, DATA=etc.dat
USERID=scott/tiger, ERRORS=0, LOAD=2000, DISCARD=toss.dis, DISCARDMAX=5, ROWS=9999



Invoking SQL*Loader

When you invoke SQL*Loader, you can specify certain parameters to establish session characteristics. Parameters can be entered in any order, optionally separated by commas. You specify values for parameters, or in some cases, you can accept the default without entering a value.
For example:
SQLLDR CONTROL=sample.ctl, LOG=sample.log, BAD=baz.bad, DATA=etc.dat 
   USERID=scott/tiger, ERRORS=999, LOAD=2000, DISCARD=toss.dsc

If you invoke SQL*Loader without specifying any parameters, SQL*Loader displays a help screen similar to the following. It lists the available parameters and their default values.
> sqlldr
Usage: SQLLDR keyword=value [,keyword=value,...]

Valid Keywords:

    userid -- ORACLE username/password           
   control -- control file name                  
       log -- log file name                      
       bad -- bad file name                      
      data -- data file name                     
   discard -- discard file name                  
discardmax -- number of discards to allow          (Default all)
      skip -- number of logical records to skip    (Default 0)
      load -- number of logical records to load    (Default all)
    errors -- number of errors to allow            (Default 50)
      rows -- number of rows in conventional path bind array or between direct
path data saves
               (Default: Conventional path 64, Direct path all)
  bindsize -- size of conventional path bind array in bytes  (Default 256000)
    silent -- suppress messages during run (header,feedback,errors,discards,partitions)
    direct -- use direct path                      (Default FALSE)
   parfile -- parameter file: name of file that contains parameter specifications
  parallel -- do parallel load                     (Default FALSE)
      file -- file to allocate extents from      
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions
(Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as 
unusable  (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued  (Default FALSE)
  readsize -- size of read buffer                  (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE 
 (Default NOT_USED)
columnarrayrows -- number of rows for direct path column array  (Default 5000)
streamsize -- size of direct path stream buffer in bytes  (Default 256000)
multithreading -- use multithreading in direct path  
resumable -- enable or disable resumable for current session  (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE  (Default 7200)
date_cache -- size (in entries) of date conversion cache  (Default 1000)

PLEASE NOTE: Command-line parameters may be specified either by position or by keywords.
An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter 
is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before
but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo
logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the
position of the parameter 'log' is correct.

PL/SQL: Stop Making the Same Performance Mistakes

PL/SQL is great, but like any programming language it is capable of being misused. This article highlights the common performance mistakes made when developing in PL/SQL, turning what should be an elegant solution into a resource hog. This is very much an overview, but each section includes links to the relevant articles on this site that discuss the topic in greater depth, including example code, so think of this more as a checklist of things to avoid that will help you get the best performance from your PL/SQL.

Stop using PL/SQL when you could use SQL

The first sentence in the first chapter of the PL/SQL documentation states the following.
"PL/SQL, the Oracle procedural extension of SQL, is a portable, high-performance transaction-processing language."
So PL/SQL is an extension to SQL, not a replacement for it. In the majority of cases, a pure SQL solution will perform better than one made up of a combination of SQL and PL/SQL. Remember, databases are designed to work with sets of data. As soon as you start to process data in a row-by-row (or slow-by-slow) manner, you are stopping the database from doing what it does best. With that in mind, a PL/SQL programmer should aim to be an expert in SQL that knows a bit of PL/SQL, rather than an expert in PL/SQL that knows a little bit of SQL.
SQL has evolved greatly over the last 20 years. The introduction of features like analytic functions and SQL/XML mean you can perform very complex tasks directly from SQL. The following points describe some of the common situations where people use PL/SQL when SQL would be more appropriate.
  • Stop using UTL_FILE to read text files if you can use external tables. Using the UTL_FILE package to read data from flat files is very inefficient. Since Oracle 7 people have been using SQL*Loader to improve performance, but since Oracle 9i the recommended way to read data from flat files is to use external tables. Not only is it more efficient by default, but it is easy to read the data in parallel, and preprocessor commands allow tasks like unzipping files on the fly before reading them. In many cases, your PL/SQL load process can be replaced by a single INSERT ... SELECT statement with the data sourced from an external table.
  • Stop writing PL/SQL merges if you can use the MERGE statement. Merging, or upserting, large amounts of data using PL/SQL is a terrible waste of resources. Instead you should use the MERGE statement to perform the action in a single DML statement. Not only is it quicker, but it looks simpler and is easily made to run in parallel.
  • Stop coding multitable inserts manually. Why send multiple DML statements to the server when an action can be performed in a single multitable insert? Since Oracle 9i multitable inserts have provided a flexible way of reducing round-trips to the server.
  • Stop using bulk binds (FORALL) when you can use DML error logging (DBMS_ERRLOG) to trap failures in DML. By default, if a single row in a DML statement raises an exception, all the work done by that DML statement is rolled back. In the past this meant operations that were logically a single INSERT ... SELECT, UPDATE, or DELETE statement affecting multiple rows had to be coded as a PL/SQL bulk operation using the FORALL ... SAVE EXCEPTIONS construct, for fear that a single exception would trash the whole process. Oracle 10g Release 2 introduced DML error logging, allowing us to revert back to using a single DML statement to replace the unnecessary bulk bind operation.
The thing to remember about all these points is they replace PL/SQL with DML. In addition to them being more efficient, provided the server has enough resources to cope with it, it is very easy to make them even faster on large operations by running them in parallel. Making PL/SQL run in parallel is considerably more difficult in comparison (see parallel-enabled pipelined table functions and DBMS_PARALLEL_EXECUTE).
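As a sketch of the last point (DML error logging), assuming hypothetical tables emp_raw (source) and emp_stage (target):

```sql
-- Create the error log table once; by default it is named ERR$_EMP_STAGE.
BEGIN
  DBMS_ERRLOG.create_error_log(dml_table_name => 'EMP_STAGE');
END;
/

-- A single INSERT ... SELECT. Rows that raise errors are written to the
-- error log table with the given tag instead of failing the whole statement.
INSERT INTO emp_stage
SELECT * FROM emp_raw
LOG ERRORS INTO err$_emp_stage ('batch_1') REJECT LIMIT UNLIMITED;
```

Note that REJECT LIMIT UNLIMITED mirrors the "allow all errors" behavior of a FORALL ... SAVE EXCEPTIONS loop, but in a single statement.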

Stop avoiding bulk binds

Having just told you to avoid bulk binds in favor of single DML statements, I'm now going to tell you to stop avoiding bulk binds where they are appropriate. If you are in a situation where a single DML statement is not possible and you need to process many rows individually, you should use bulk binds as they can often provide an order of magnitude performance improvement over conventional row-by-row processing in PL/SQL.
Bulk binds have been available since Oracle 8i, but it was the inclusion of record processing in bulk bind operations in Oracle 9i Release 2 that made them significantly easier to work with.
The BULK COLLECT clause allows you to pull multiple rows back into a collection. The FORALL construct allows you to bind all the data in a collection into a DML statement. In both cases, the performance improvements are achieved by reducing the number of context switches between PL/SQL and SQL that are associated with row-by-row processing.
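A minimal sketch of the two constructs working together, assuming hypothetical tables src and dest with identical column structure:

```sql
DECLARE
  CURSOR c_src IS SELECT * FROM src;
  TYPE t_tab IS TABLE OF c_src%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c_src;
  LOOP
    -- One context switch fetches up to 100 rows into the collection.
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 100;
    EXIT WHEN l_rows.COUNT = 0;
    -- One context switch binds the whole batch into the INSERT.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO dest VALUES l_rows(i);
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/
```

The LIMIT clause keeps memory usage bounded; without it, BULK COLLECT pulls the entire result set into the collection at once.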

Stop using pass-by-value (NOCOPY)

As the Oracle database and PL/SQL have matured it has become increasingly common to work with large objects (LOBs), collections, and complex object types, such as XMLTYPE. When these large and complicated types are passed as OUT and IN OUT parameters to procedures and functions, the default pass-by-value processing of these parameters can represent a significant performance overhead.
The NOCOPY hint allows you to switch from the default pass-by-value to pass-by-reference, eliminating this overhead. In many cases, this can represent a significant performance improvement with virtually no effort.
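A sketch of the syntax, with a hypothetical procedure that modifies an XMLTYPE document in place:

```sql
CREATE OR REPLACE PROCEDURE process_doc (
  -- NOCOPY asks the compiler to pass the parameter by reference
  -- instead of copying the value in on entry and out on exit.
  p_doc IN OUT NOCOPY XMLTYPE
) AS
BEGIN
  NULL;  -- manipulate p_doc here
END;
/
```

Keep in mind NOCOPY is a hint, not a directive: the compiler is free to ignore it, and exception semantics change because a partially modified actual parameter is not rolled back to its original value.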

Stop using the wrong data types

When you use the wrong data types, Oracle is forced to do an implicit conversion during assignments and comparisons, which represents an unnecessary overhead. In some cases this can lead to unexpected and dramatic issues, like preventing the optimizer from using an index or resulting in incorrect date conversions.
Oracle provides a variety of data types, many of which have dramatically different performance characteristics. Nowhere is this more evident than with the performance of numeric data types.
Make sure you pick the appropriate data type for the job you are doing!
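For example, simple integer arithmetic behaves very differently depending on the declared type; a hedged sketch comparing NUMBER with PLS_INTEGER, which uses native machine arithmetic:

```sql
DECLARE
  l_n NUMBER      := 0;  -- software decimal arithmetic
  l_i PLS_INTEGER := 0;  -- hardware integer arithmetic
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    l_n := l_n + 1;  -- measurably slower in a tight loop
    l_i := l_i + 1;
  END LOOP;
END;
/
```

Timing the two assignments separately (e.g. with DBMS_UTILITY.get_time) typically shows the PLS_INTEGER version running several times faster.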

Quick Points

SQL*Loader and External Tables

This post gives a brief introduction to the two Oracle technologies for loading external data into database tables, SQL*Loader (SQLLDR) and external tables, and then provides some guidelines on when to choose which.


SQL*Loader loads data from external files into tables in the Oracle database. SQL*Loader uses two primary files: the datafile, which contains the information to be loaded, and the control file, which contains information on the format of the data, the records and fields within the file, the order in which they are to be loaded, and even, when needed, the names of the multiple files that will be used for data.
 When executed, SQL*Loader will automatically create a log file and a “bad” file. The log file records the status of the load, such as the number of rows processed and the number of rows committed. The “bad” file will contain all the rows that were rejected during the load due to data errors, such as nonunique values in primary key columns. Within the control file, you can specify additional commands to govern the load criteria. If these criteria are not met by a row, the row will be written to a “discard” file. The log, bad, and discard files will by default have the extensions .log, .bad, and .dsc, respectively.
It has two modes of operation:
• Conventional path: SQLLDR will employ SQL inserts on our behalf to load data.
• Direct path: SQLLDR does not use SQL in this mode; it formats database blocks directly.
The direct path load allows you to read data from a flat file and write it directly to formatted database blocks, bypassing the entire SQL engine, undo generation and, optionally, redo generation at the same time. Parallel direct path load is among the fastest ways to go from having no data to a fully loaded database.
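A minimal sketch of what a load involves, assuming a hypothetical table EMP and data file emp.dat; the same control file drives both modes, and DIRECT=TRUE on the command line switches from conventional to direct path:

```
-- emp.ctl: describes the data file layout to SQL*Loader
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, hiredate DATE "DD-MON-YYYY")
```

Invoked as `sqlldr userid=scott/tiger control=emp.ctl log=emp.log` for a conventional path load, or with `direct=true` appended for direct path.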

External Tables

External tables were first introduced in Oracle9i Release 1. Put simply, they allow us to treat an operating system file as if it were a read-only database table. External tables have limits:
1. No DML operations are allowed.
2. Indexes cannot be created. The lack of indexes on external tables does not have to be a negative factor in application performance. Queries of external tables complete very quickly, even though a full table scan is required with each access. There is I/O involved, but modern I/O systems use caching and RAID techniques to significantly reduce the performance penalty associated with repeated full scans of the same file.
3. Virtual columns cannot be added.
4. Constraints cannot be added.
Access Driver
When you create an external table, you specify its type. Each type of external table has its own access driver that provides access parameters unique to that type of external table. The access driver ensures that data from the data source is processed so that it matches the definition of the external table.
The default type for external tables is ORACLE_LOADER, which lets you read table data from an external table and load it into a database. Oracle Database also provides the ORACLE_DATAPUMP type, which lets you unload data (that is, read data from a table in the database and insert it into an external table) and then reload it into an Oracle database.
To access external files from within Oracle, you must first use the create directory command to define a directory object pointing to the external files’ location. After this, execute the create table command with the organization external clause.
create directory BOOK_DIR as 'e:\oracle\external';
grant read on directory BOOK_DIR to practice;
grant write on directory BOOK_DIR to practice;
create table BOOKSHELF_EXT
(Title        VARCHAR2(100),
 Publisher    VARCHAR2(20),
 CategoryName VARCHAR2(20),
 Rating       VARCHAR2(2))
organization external
(type oracle_loader
 default directory BOOK_DIR
 access parameters (records delimited by newline
         fields terminated by "~"
         (Title        CHAR(100),
          Publisher    CHAR(20),
          CategoryName CHAR(20),
          Rating       CHAR(2)))
 location ('bookshelf_dump.lst'));

External Tables vs SQL*Loader

Some of the key functionality features that external tables have over SQLLDR in my experience are as follows:
• The ability to use complex WHERE conditions to selectively load data. SQLLDR has a WHEN clause to select rows to load, but you are limited to using only AND expressions and expressions using equality—no ranges (greater than, less than), no OR expressions, no IS NULL, and so on.
• The ability to MERGE data. You can take an operating system file full of data and update existing database records from it.
• The ability to perform efficient code lookups. You can join an external table to other database tables as part of your load process.
• Easier multitable inserts using INSERT. Starting in Oracle9i, an INSERT statement can insert into one or more tables using complex WHEN conditions. While SQLLDR can load into multiple tables, it can be quite complex to formulate the syntax.
• A shallower learning curve for new developers. SQLLDR is “yet another tool” to learn, in addition to the programming language, the development tools, the SQL language, and so on. As long as a developer knows SQL, he can immediately apply that knowledge to bulk data loading, without having to learn a new tool (SQLLDR).
SQLLDR should be chosen over external tables in the following three situations:
• You have to load data over a network—in other words, when the input file is not on the database server itself. Because the access driver is part of the Oracle software, only files accessible to the database can be accessed as external tables.
• Multiple users must concurrently work with the same external table processing different input files.
• You have to work with LOB types. External tables do not support LOBs.

Oracle external tables

Oracle external tables may help you to import external flat file data such as comma-separated values (CSV format).
We briefly discuss the concept of Oracle external tables and then define a sample external table of type ORACLE_LOADER connected to a flat CSV file.

Concept of External Tables

Since Oracle 9i, external tables provide a comfortable means for directly accessing external source data with SQL, as if it were a table in your database. With external tables you can load data into your database, e.g. as part of an ETL process.
External tables provide an alternative to SQL*Loader, a CLI-based utility provided by Oracle, which is more powerful but also more complicated. Typically you will use SQL*Loader for migrating a full-scale database into your Oracle database server. By contrast, external tables are the means to connect specific external data channels to your data model. Often this is done periodically as part of a staging process.
See the section above for a comparative list of differences between external tables and SQL*Loader.
The concept of external tables lets you place an external file in the filesystem of the database server, define the flat file's row structure, and then access this data like any other table directly with SQL. Compared to querying a relational database table, such access is slow. Every access to an external table performs a 'full table scan' through the entire file. Thus you should normally access an external table only once, to insert the external data into a relational table inside your DB tablespace.
The external table is defined with a specific DDL command of the form CREATE TABLE ... ORGANIZATION EXTERNAL. In the remainder of this article we go through the standard case of importing a flat file with comma-separated values, a so-called CSV file.
External table access is bound to a file driver. There are two driver types. The ORACLE_LOADER driver only supports loading flat file data into the relational database. If you want to write data out to a flat file, you need to use the ORACLE_DATAPUMP driver. Note that even though external tables and SQL*Loader are different instruments to work with, both may still use the same driver underneath.

Creating an External Table

This DDL creates an external table ext_csv_emp that accesses the flat file data with driver type ORACLE_LOADER.
CREATE TABLE ext_csv_emp
( employee_number      NUMBER(8,0),
  employee_job         VARCHAR2(64 BYTE),
  employee_last_name   VARCHAR2 (40),
  employee_first_name  VARCHAR2 (40),
  employee_middle_name VARCHAR2 (40),
  employee_hire_date   DATE)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY TMP_LPNO_DIR
  ACCESS PARAMETERS
  ( records delimited by newline
    badfile TMP_LPNO_DIR: 'ext_csv_emp.bad'
    logfile TMP_LPNO_DIR: 'ext_csv_emp.log'
    skip 1
    fields terminated by ';' optionally enclosed by '"'
    ( employee_number      CHAR(8),
      employee_job         CHAR(20),
      employee_last_name   CHAR(18),
      employee_first_name  CHAR(11),
      employee_middle_name CHAR(11),
      employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"))
  LOCATION ('ext_csv_emp.csv'));
Such an external table is visible in SQL Developer as follows:
 Picture: External table in SQL Developer
Syntactically speaking, an external table is a relational table with specific physical properties defined by the keywords ORGANIZATION EXTERNAL followed by an external table clause.
We choose the file access driver with the type parameter ORACLE_LOADER, then we define the external data properties. First we specify the server path by referring to an Oracle directory name we defined before. Next we define the access parameters. CSV data rows are presented line by line, thus records are delimited by the NEWLINE character.
The badfile contains all data rows that do not conform to the interface specification given in these access parameters. Bad rows are appended to the badfile each time SQL accesses the external table with a SELECT. Watch the size of this file and your free disk space on the database server's file system when working on large CSV data sources during setup of a new data feed process.
The logfile logs each access to the external table by any SELECT on the database. Watch the size of this file. As the file may be locked by the database, you may be obliged to delete it while the database is shut down.
In Switzerland the default field separator is a semicolon. Enter a comma or whatever separator you want to use; often it's wise to use the local country defaults. Anyhow, watch out for the escape character problem: the field separator should not be part of the field data. If this cannot always be avoided, we specify that fields may optionally be enclosed in double quotes, thus allowing the field separator to be part of the field data.
The last part of the access parameters is the field list. These type definitions are opaque to SQL; they are not evaluated when SQL processes this DDL. Type checking of the FIELDS list is deferred to runtime, when it is done by the driver specified as the file access type. As we have chosen file access driver type ORACLE_LOADER, we have to use SQL*Loader field types here. Watch out: SQL*Loader field types have similar names to SQL field types, but they are not the same. Don't confuse and mix these in your external table definitions.

Sample Input File

The sample input file ext_csv_emp.csv in path TMP_LPNO_DIR may look like:
employee_number;employee_job;employee_last_name;employee_first_name;employee_middle_name;employee_hire_date
7415;DB Administrator;Miller;John;Jon;03.09.2008
The first line acts as a header line to hold the column names. This is useful when checking the file with text editors, Excel, and the like. As we defined it, this first line is skipped. Of course it is your responsibility to ensure that this header line is present, otherwise the first data line might get skipped.
The external table can now be queried like any other table, e.g. with SELECT * FROM ext_csv_emp, resulting in
 Picture: Result of SQL Query on External Table

Choosing Types in Access Parameter Definitions

If you use types other than CHAR in the FIELDS clause, like DATE or INTEGER, you may spend a lot of time with strange error messages in the log file that are difficult to interpret and impossible to debug or trace. Therefore I prefer to use CHAR in the access parameter FIELDS. Define it wide enough to hold any possible row values. Try to expect the unexpected. In the sample above I rely on the implicit type conversion of SQL.
If an erroneous DDL of an external table is accepted by SQL, you might get confused. Because access field type checking is deferred from compile time to runtime, you might get some unexpected, and maybe difficult to interpret, messages when trying to access the external table.
If your input is more unstable, or if you want to make your data pipeline more bullet-proof, define the external table like this and defer any type checking and conversion to a later step controlled by your own procedures:
CREATE TABLE ext_csv_emp2
( employee_number      VARCHAR2 (8),
  employee_job         VARCHAR2(64 BYTE),
  employee_last_name   VARCHAR2 (40),
  employee_first_name  VARCHAR2 (40),
  employee_middle_name VARCHAR2 (40),
  employee_hire_date   VARCHAR2 (12))
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY TMP_LPNO_DIR
  ACCESS PARAMETERS
  ( records delimited by newline
    badfile TMP_LPNO_DIR: 'ext_csv_emp.bad'
    logfile TMP_LPNO_DIR: 'ext_csv_emp.log'
    skip 1
    fields terminated by ';' optionally enclosed by '"'
    ( employee_number      CHAR(8),
      employee_job         CHAR(64),
      employee_last_name   CHAR(40),
      employee_first_name  CHAR(40),
      employee_middle_name CHAR(40),
      employee_hire_date   CHAR(12)))
  LOCATION ('ext_csv_emp.csv'));

Possible Errors

When setting up an external table the possibilities for errors are manifold; most of them appear at runtime, and thus are painful.

Runtime Errors While the Driver Parses the Access Parameter Clause

As said, the access parameter clause is only parsed at runtime. No checks are done at compile time.
When you get something like this
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "optionally": expecting one of: "column, (, reject"
KUP-01007: at line 6 column 8
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages and take appropriate action.
recheck your DDL. This type of error message comes from parsing a syntactically wrong access parameter clause. This message generator's line number counting oddly starts at the beginning of the access parameter clause, not at the total line number of your DDL statement in the editor.
Thinking about these implicit given rules may help:
• The order and naming of the SQL table columns and its access parameter fields must match
• The syntax of all access parameters passes unchecked at compile time. Study the syntax diagrams carefully.
• Comment lines are accepted at compile time but rejected by the driver at runtime

Reject Limit Reached

 Picture: ORA-30653: reject limit reached
When you get this error, your DDL and access parameters seem to be well defined. Parsing of the CSV input data has started, but as it does not match the field definitions, rows are rejected. After the reject limit is reached, the external data access is aborted.
With ORA-30653 it is time to check your log file. You will find further information like
LOG file opened at 09/03/12 13:53:50
Field Definitions for table EXT_CSV_EMP
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
Terminated by ";"
Trim whitespace same as SQL Loader
Terminated by ";"
Trim whitespace same as SQL Loader
Terminated by ";"
Trim whitespace same as SQL Loader
Terminated by ";"
Trim whitespace same as SQL Loader
Terminated by ";"
Trim whitespace same as SQL Loader
Terminated by ";"
Trim whitespace same as SQL Loader
KUP-04021: field formatting error for field EMPLOYEE_NUMBER
KUP-04026: field too long for datatype
KUP-04101: record 1 rejected in file C:\oraclexe\app\oracle\product\11.2.0\server\TMP_LPNO_DIR\ext_csv_emp.csv
KUP-04021: field formatting error for field EMPLOYEE_NUMBER
KUP-04026: field too long for datatype
KUP-04101: record 2 rejected in file C:\oraclexe\app\oracle\product\11.2.0\server\TMP_LPNO_DIR\ext_csv_emp.csv
KUP-04021: field formatting error for field EMPLOYEE_NUMBER
KUP-04026: field too long for datatype
KUP-04101: record 3 rejected in file C:\oraclexe\app\oracle\product\11.2.0\server\TMP_LPNO_DIR\ext_csv_emp.csv

Increase your field character widths or make sure your input columns in the CSV are truncated to their maximum width before being delivered.

Prerequisite Directory Missing

The referenced directory must be defined and point to a server path accessible to the database server. Otherwise you get
SQL-ERROR: ORA-06564: object TMP_LPNO_DIR2 does not exist