Oracle Architecture and Administration (3 of 3)

  1. Maintaining Data Integrity
    1. Implement Data Integrity and Constraints
    2. An integrity constraint can either be defined at table creation or added later with the ALTER TABLE command.

      At table creation:

        create table exclusive (
        value integer primary key,
        description varchar2(40),
        year date,
        constraint just_one unique(value,year)
        );
      

      Once a table has been created, a constraint can be added with the alter table add command.

        alter table exclusive add (constraint just_one primary key (id));
      
        alter table cars add ( constraint drivers foreign key (id) references
        owners (id));
      
        alter table cars add ( constraint models check (models in
        ('ford','gm','vw')));
      
    3. Maintain Integrity Constraints
    4. Integrity constraints can also be dropped with the ALTER TABLE ... DROP command. There are three forms for dropping a constraint:

        alter table XXXX drop primary key   (drops whatever primary key the table has)
        alter table XXXX drop unique (column_list)   (drops the unique constraint on the listed columns)
        alter table XXXX drop constraint <constraint_name>   (drops the constraint by name, which can be looked up in USER_CONSTRAINTS)
      

      NOTE: If the constraint is referenced by a foreign key, Oracle returns an error. To get around this, specify the keyword CASCADE, which drops all dependencies on the constraint as well.
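
      For example (a sketch reusing the exclusive table from above):

        alter table exclusive drop constraint just_one;
        alter table exclusive drop primary key cascade;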

    5. Obtain Constraint Information from the data dictionary
    6. Constraint information is located in USER_CONSTRAINTS; the SEARCH_CONDITION column of this view holds the text of check constraints. The view can be queried by constraint name or by table name.
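
      For example (a sketch using the standard USER_CONSTRAINTS columns):

        select constraint_name, constraint_type, search_condition
        from user_constraints
        where table_name = 'EXCLUSIVE';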

  2. Loading Data
    1. Load Data using direct-load insert
    2. Direct-load INSERT takes advantage of the same direct path mechanism as SQL*Loader: it bypasses the database buffer cache and writes formatted blocks directly to the datafiles. You can also put a table, index, or tablespace in NOLOGGING mode, which suppresses most redo generation and further speeds up direct-load inserts. Direct-load INSERTs trade space for performance because they insert rows above the HWM (High Water Mark) of a segment. This wastes space, but it is not a major issue if the segment is usually close to full, does not go through major grow-and-shrink cycles, and (especially) has a tablespace to itself.

      Direct-load insert appends the inserted data after the existing data in a table; free space within the existing data is not reused. This functionality is similar to that of the direct path of the SQL*Loader utility. A major benefit of direct-load INSERT is that data can be loaded without logging redo or undo entries, which improves insert performance significantly. With a conventional path insert, in contrast, free space in the object is reused and referential integrity can be maintained. Conventional path insertions cannot be parallelized.

      Direct Load insert can be performed through one of these options:

      • Serially or in Parallel
      • Into partitioned or nonpartitioned tables
      • With or without logging of the redo data
        For example, a serial direct-load insert is requested with the APPEND hint:

            INSERT /*+ APPEND */ INTO emp
            SELECT * FROM t_emp;
          

        NOTE on comparison with CTAS (CREATE TABLE ... AS SELECT): with direct-load INSERT, you can insert data into existing tables instead of having to create new ones. Direct-load INSERT maintains the indexes of the table, whereas CREATE TABLE ... AS SELECT only creates a new table, which does not have any indexes.
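
        A sketch of a parallel, NOLOGGING direct-load insert, reusing the emp/t_emp names from above (the degree of parallelism is arbitrary):

          ALTER TABLE emp NOLOGGING;
          ALTER SESSION ENABLE PARALLEL DML;
          INSERT /*+ PARALLEL(emp, 4) */ INTO emp
          SELECT * FROM t_emp;
          COMMIT;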

        SQL*LOADER

        To invoke SQL*Loader use:

          sqlldr userid=daniel/ic control=ulcaso1.ctl log=ulcaso1.log
        
      • Load data into Oracle Tables using SQL*Loader: Conventional Path
      • During conventional path loads (the default), the input records are parsed according to the field specifications, and each data field is copied into a bind array. When the bind array is full (or there is no more data left to read), an array insert is executed. When SQL*Loader performs a conventional path load it competes equally with all other processes for buffer resources, which can slow the load significantly.

      • Load data into Oracle Tables using SQL*Loader: Direct Path
      • Instead of filling a bind array buffer and passing it to Oracle with a SQL INSERT command, a direct path load parses the input data according to the description given in the loader control file, converts the data for each input field to its corresponding Oracle column datatype, and builds a column array structure. SQL*Loader then uses the column array structures to format Oracle data blocks; the newly formatted database blocks are written directly to the database, bypassing most RDBMS processing. Direct path load is much faster than conventional path load, but entails several restrictions: you cannot use the direct path for LOBs, VARRAYs, objects, or nested tables.

        The script CATLDR.SQL creates the necessary views in order for direct path loads to work.

        To start SQL*Loader in direct load mode, the parameter DIRECT must be set to TRUE on the command line or in the parameter file. Examples of SQL*Loader control files:

          LOAD DATA
          INFILE *
          INTO TABLE dept
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            (deptno,dname, loc)
          BEGINDATA
            12,RESEARCH,"SARATOGA"
            10,"ACCOUNTING",CLEVELAND
        

        The * in INFILE indicates that the DATA is in the control file itself.

          LOAD DATA
          INFILE 'ucaso2.dat'
          INTO TABLE emp
          (empno POSITION(01:04) INTEGER EXTERNAL,
          ename POSITION(06:15) CHAR,
          job POSITION(17:25) CHAR)
        

        The POSITION parameter removes the need for the TERMINATED BY clause.
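
        To run a load through the direct path, DIRECT=TRUE is added on the command line; a sketch (the control file name here is hypothetical):

          sqlldr userid=daniel/ic control=ulcaso2.ctl log=ulcaso2.log direct=true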

  3. Reorganizing Data
    1. Reorganize data using the Export and Import Utilities
    2. The command to invoke Export is exp <username/password> PARFILE=<parameter_file>

      Parameters

      • CONSISTENT : Default N. Set CONSISTENT=Y when you anticipate that other applications will be updating the target data after the export has started.
      • CONSTRAINTS : Specifies whether to export constraints.
      • DIRECT : Specifies whether to use direct path or conventional path Export.
      • FILE : Default expdat.dmp; specifies the names of the export files.
      • FULL : Defaults to N; indicates that the export is a full database mode Export (it exports the entire database).
      • GRANTS : Defaults to Y; specifies whether or not the Export utility exports object grants.
      • PARFILE : Specifies a filename for a file that contains a list of export parameters.
      • QUERY : Allows you to select a subset of rows from a set of tables when doing a table-mode export.

      Import uses the same parameters and is invoked with the imp command.
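
      A minimal sketch of a parameter-file-driven export and a matching import (file names and schema owners are made up for illustration):

        # exp_drubio.par -- hypothetical parameter file
        FILE=expdat.dmp
        OWNER=drubio
        GRANTS=Y
        CONSTRAINTS=Y
        LOG=exp_drubio.log

        exp daniel/ic PARFILE=exp_drubio.par
        imp daniel/ic FILE=expdat.dmp FROMUSER=drubio TOUSER=aveytia LOG=imp_drubio.log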

    3. Move data using transportable Tablespaces
    4. Transportable tablespaces enable you to move tablespaces without doing a full export/import. DBAs can copy the datafiles of tablespaces between identical systems (same hardware architecture), thereby transferring data quickly and easily. The transported set must be self-contained, meaning there are no references pointing from inside the tablespace to outside the tablespace. The PL/SQL procedure transport_set_check checks whether the tablespace is self-contained.

      First the tablespace to be moved is put into read-only mode to ensure that the data captured is a consistent image. Then an export is run of the metadata for the tablespace; this is fast because the amount of data is small. Next, the datafiles for the tablespace are copied to the new system using an operating system command. At the new system, an import is done of the metadata for the tablespace; again, this is fast due to the small amount of data. This is also a way to perform a point-in-time recovery on a tablespace: by moving a good copy of the tablespace into the production database, recovery can be achieved quickly.

      You can use transportable tablespaces to move a subset of an Oracle database and "plug" it into another Oracle database, essentially moving tablespaces between the databases. Moving data via transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires copying the datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

      A transportable tablespace has the following limitations:

      • The source and target database must be on the same OS platform: both on Sun, Linux, or NT, but not on separate platforms.
      • The source and target database must have the same database block size.
      • The source and target database must use the same character set.
      • You cannot transport a tablespace to a target database in which a tablespace with the same name already exists.
      • Transportable tablespaces do not support: snapshots/replication, function-based indexes, scoped REFs, domain indexes.

      The steps for using transportable tablespaces are:

      • Pick a self-contained set of tablespaces: self-contained means that there are no references from inside the set of tablespaces pointing outside of the set. To determine whether a tablespace set is self-contained, you can invoke a built-in PL/SQL procedure, giving it a list of tablespace names; suppose the tablespaces to be checked are OSMOSIS and MEXICO.
          execute dbms_tts.transport_set_check('OSMOSIS,MEXICO',TRUE)
        
        If the tablespace set is not self-contained (for example, it includes a foreign key that references a table outside the set), the view TRANSPORT_SET_VIOLATIONS will contain all the violations.
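        For example, any violations can be listed with:
          select * from transport_set_violations;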
      • Generate a transportable tablespace set, which you do as follows:
        • Make all tablespaces in the set you are copying read-only.
          ALTER TABLESPACE osmosis READ ONLY;
        • Invoke the export utility and specify which tablespaces are in the transportable set, as:
                EXP TRANSPORT_TABLESPACE=y TABLESPACES=osmosis,mexico
                TRIGGERS=y/n CONSTRAINTS=y/n GRANTS=y/n FILE=expdat.dmp
              

      Even though the Export utility is used, only data dictionary structural information is exported.

      When prompted connect as "sys as sysdba"

      • Copy the datafiles to a separate storage space or to the target database.
      • If necessary, put the tablespaces in the copied set back into read-write mode:
        ALTER TABLESPACE osmosis READ WRITE;
      • Transport both the datafiles and the export file to a place accessible to the target database, normally using the cp command.
      • Plug in the tablespace. To plug in the tablespace:
        • Put the copied tablespace set datafiles in a location where the target database can access them.
        • Plug in the tablespace and integrate the structural information using the following import statement:
            IMP TRANSPORT_TABLESPACE=y
            DATAFILES='/db/osmosis_daniel','/db/mexico_daniel_2',......
            TABLESPACES=osmosis,mexico,..... TTS_OWNERS=drubio,aveytia
            FROMUSER=drubio,aveytia TOUSER=paraiza,faraiza FILE=expdat.dmp
          	
        • When prompted, connect as "sys as sysdba". The following are examples:
            IMP TRANSPORT_TABLESPACE=y
            DATAFILES='(/u01/oradata/staging.f,/u01/oradata/staging2.f)'

            IMP TRANSPORT_TABLESPACE=y DATAFILES='/u01/staging.f'
            TABLESPACES=daniel TTS_OWNERS=rubio
          	
      You must specify DATAFILES.

      TABLESPACES, TTS_OWNERS, FROMUSER, and TOUSER are optional. The FILE parameter specifies the name of the structural information export file. TTS_OWNERS lists all users who own data in the tablespace set. When you specify TTS_OWNERS, the user names are compared to those in the export file; Import returns an error if there is any mismatch. If you do not specify FROMUSER and TOUSER, all database objects (such as tables and indexes) will be created under the same users as in the source database. Those users must already exist in the target database; if not, Import will return an error indicating that some required users do not exist in the target database.

  4. Managing Password Security and Resources
    1. Manage passwords using profiles
    2. A profile is a database entity that specifies resource limits. The database must have resource limits turned on in order for the profile resource limits to take effect; this is done by setting the parameter RESOURCE_LIMIT in the init.ora file to TRUE. A profile can limit the number of sessions, CPU usage per session, and connect time. A profile can also enforce password management rules:

      • Locking a User Account
      • Password Lifetime and Expiration
      • Password History
      • Password Complexity Verification
      • Database Administration Authentication
      • Operating System
      • Using a Password file to Authenticate DBAs
    3. Administer Profiles
    4. If no profile is specified, the DEFAULT profile (which allows unlimited resources) is assigned to every user. The following resources can be limited via profiles:

      • SESSIONS_PER_USER : The number of concurrent sessions a user can have in an instance.
      • CPU_PER_SESSION : The CPU time, in hundredths of a second, that a session can use.
      • CPU_PER_CALL : The CPU time, in hundredths of a second, that a parse, execute, or fetch can use.
      • CONNECT_TIME : The number of minutes a session can be connected to the database.
      • IDLE_TIME : The number of minutes a session can be connected to the database without being actively used.
      • LOGICAL_READS_PER_SESSION : The number of database blocks that can be read in a session.
      • LOGICAL_READS_PER_CALL : The number of database blocks that can be read by a single call (parse, execute, or fetch).
      • PRIVATE_SGA : The amount of private space a session can allocate in the SGA's shared SQL pool (for MTS).
      • COMPOSITE_LIMIT : A compound limit based on the preceding limits.
      • FAILED_LOGIN_ATTEMPTS : The number of consecutive failed login attempts that will cause the account to be locked.
      • PASSWORD_LIFE_TIME : The number of days a password can be used before it expires.
      • PASSWORD_REUSE_TIME : The number of days that must pass before a password can be reused.
      • PASSWORD_REUSE_MAX : The number of times a password must be changed before a password can be reused.
      • PASSWORD_LOCK_TIME : The number of days an account will be locked if the FAILED_LOGIN_ATTEMPTS setting is exceeded.
      • PASSWORD_GRACE_TIME : The length, in days, of the "grace period" during which a password can still be changed after it has reached its PASSWORD_LIFE_TIME setting.
      • PASSWORD_VERIFY_FUNCTION : The name of a function used to evaluate the complexity of a password. Oracle provides a function called VERIFY_FUNCTION, which you can edit; it is created by running the utlpwdmg.sql script and should be created in the SYS schema.

      To create a profile, a command like the following is issued:

        create profile JAVA_DES
        FAILED_LOGIN_ATTEMPTS 5
        PASSWORD_REUSE_MAX 2;
      

      To add parameters to the profile:

        alter profile JAVA_DES
        limit idle_time 60;
      

      To assign this profile to an existing user or newly generated user:

        create user alexia identified by alexia
        profile JAVA_DES
      

      With the profile defined above, if the user fails to log in after five attempts the account will be locked. To unlock it, the DBA has to issue alter user alexia account unlock; the account can also be locked manually with the command alter user alexia account lock. NOTE: A DBA can also expire a password manually with a command like alter user alexia password expire.
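
      For reference, those account-management commands are:

        alter user alexia account lock;
        alter user alexia account unlock;
        alter user alexia password expire;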

    5. Control use of resources using profiles
    6. See previous parameters to limit resources
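
      A minimal sketch of enabling resource limits and applying a resource-oriented profile (the profile name and limit values are made up):

        -- resource limits can also be enabled with RESOURCE_LIMIT = TRUE in init.ora
        ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

        CREATE PROFILE light_users LIMIT
          SESSIONS_PER_USER      2
          IDLE_TIME              30
          CONNECT_TIME           480
          LOGICAL_READS_PER_CALL 10000;

        ALTER USER alexia PROFILE light_users;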

    7. Obtain information about profiles, password management, and resources.
      • DBA_ROLES : Names of roles and their password status.
      • DBA_ROLE_PRIVS : Users who have been granted roles.
      • DBA_SYS_PRIVS: Users who have been granted system privileges.
      • DBA_TAB_PRIVS: Users who have been granted privileges on tables.
      • DBA_COL_PRIVS: Users who have been granted privileges on columns.
      • ROLE_ROLE_PRIVS : Roles that have been granted to other roles.
      • ROLE_SYS_PRIVS : System privileges that have been granted to roles.
      • ROLE_TAB_PRIVS : Table privileges that have been granted to roles.
      • USER_PASSWORD_LIMITS : Describes the password profile parameters for the user.
      • DBA_PROFILES : Lists the resource limits for all profiles.
      • SESSION_PRIVS : Lists all the system privileges available to the session, whether granted directly or via roles.
      • SESSION_ROLES : The ROLE column lists all the roles that are currently enabled for the session.
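
      For example, the limits defined by the JAVA_DES profile created above can be listed with:

        select resource_name, resource_type, limit
        from dba_profiles
        where profile = 'JAVA_DES';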

  5. Managing Users
    1. Create new database users
    2. The only parameter that is required to create a user is IDENTIFIED, which becomes the password for the user. Other parameters recommended at creation are DEFAULT TABLESPACE and TEMPORARY TABLESPACE.

      These parameters can always be specified at a later time with the ALTER USER command. Once the user is created, it still needs to be granted privileges on the database in order to use it; at the very minimum the system role CONNECT is granted. Even with this role, a quota must still be granted on a tablespace in order for the user to create objects there.
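
      A minimal sketch of the whole sequence (the user name, password, and tablespace names are illustrative):

        create user daniel identified by javaclat
          default tablespace OCP
          temporary tablespace temp
          quota 10M on OCP;

        grant connect to daniel;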

    3. Alter and drop existing users
    4. A user is normally altered to change its password or to modify the tablespaces and quotas the user is allowed access to:

        alter user daniel
        default tablespace OCP
        quota 50M on OCP
        quota 0M on SYSTEM;
      

      To drop a user the DROP USER command is used. It has one optional parameter, CASCADE, which drops all objects in the user's schema before dropping the user. NOTE on CASCADE: any views, synonyms, procedures, functions, or packages that referenced objects in the schema of the dropped user will be marked as invalid.
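
      For example:

        drop user daniel cascade;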

    5. Monitor information about existing users
    6. Via DBA_USERS, USER_USERS, ALL_USERS and:

      • The view DBA_ROLE_PRIVS: Displays which users have been assigned certain roles.
      • The view DBA_SYS_PRIVS: Displays which users have been assigned system privileges.
      • The view DBA_TAB_PRIVS: Displays which users have been assigned table privileges.
      • The view DBA_COL_PRIVS: Displays which users have been assigned column privileges.
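
      For example, a quick look at the users defined in the database (columns as in the standard DBA_USERS view):

        select username, account_status, default_tablespace, temporary_tablespace, profile
        from dba_users;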

  6. Managing Privileges
    1. Identify system and object privileges
    2. The views:

      • DBA_SYS_PRIVS : Displays which users have been granted system privileges.
      • DBA_TAB_PRIVS : Displays which users have privileges on tables.
      • ROLE_SYS_PRIVS : Displays what system privileges have been assigned to roles.
      • ROLE_TAB_PRIVS : Displays what table privileges have been granted to roles.
    3. Grant and revoke privileges
    4.     GRANT SELECT ON cars TO daniel
          REVOKE SELECT ON cars FROM daniel
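
      System privileges are granted and revoked in the same way; a short sketch:

        GRANT CREATE SESSION, CREATE TABLE TO daniel;
        REVOKE CREATE TABLE FROM daniel;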
        
    5. Control operating system and password authentication
    6. See section on Passwords|Resources.

  7. Managing Roles
    1. Create and modify roles
    2. A role can optionally require authorization when a user attempts to enable the role. Role authorization can be maintained by the database (using passwords), by the operating system, or by a network service. To alter the authorization method for a role, you must have the ALTER ANY ROLE system privilege or have been granted the role with the ADMIN OPTION.

      If you are granted a role protected by a password, you can enable or disable the role only by supplying the proper password for the role in a SET ROLE statement. If the role is created without any protection, it can be enabled or disabled by any grantee with the SET ROLE command.

            ALTER ROLE desarrolladores IDENTIFIED externally;
            ALTER ROLE desarrolladores IDENTIFIED globally;
            ALTER ROLE desarrolladores IDENTIFIED by javaclat;
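
      A short sketch of creating a password-protected role, granting it, and enabling it in a session (reusing the names from the examples above):

            CREATE ROLE desarrolladores IDENTIFIED BY javaclat;
            GRANT SELECT ON cars TO desarrolladores;
            GRANT desarrolladores TO daniel;
            -- in daniel's session, the password must be supplied to enable the role
            SET ROLE desarrolladores IDENTIFIED BY javaclat;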
          
    3. Remove Roles
    4.       DROP ROLE useless;
          
    5. Use predefined roles
    6. The following are predefined roles and the default privileges for each one:

      • CONNECT: ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK,CREATE SEQUENCE, CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE VIEW.
      • RESOURCE : CREATE CLUSTER, CREATE PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER, UNLIMITED TABLESPACE.
      • DBA : All system privileges WITH ADMIN OPTION.
      • EXP_FULL_DATABASE : SELECT ANY TABLE, BACKUP ANY TABLE, and INSERT, DELETE, and UPDATE on the tables SYS.INCVID, SYS.INCFIL, and SYS.INCEXP.
      • IMP_FULL_DATABASE : BECOME USER.
      • DELETE_CATALOG_ROLE : DELETE privileges on all dictionary packages.
      • EXECUTE_CATALOG_ROLE : EXECUTE privileges on all dictionary packages.
      • SELECT_CATALOG_ROLE : SELECT privilege on all catalog tables and views.
      • RECOVERY_CATALOG_OWNER : Privileges needed to own a recovery catalog, such as CREATE TYPE, CREATE TRIGGER, and CREATE PROCEDURE.
      • HS_ADMIN_ROLE : HS_EXTERNAL_OBJECT, HS_EXTERNAL_USER
    7. Display role information from the data dictionary
    8. The views are:

      • DBA_ROLE_PRIVS: Displays which roles have been granted to users.
      • ROLE_ROLE_PRIVS : Displays which roles have been granted to other roles.
      • ROLE_TAB_PRIVS : Displays which roles have certain table privileges.
      • ROLE_SYS_PRIVS : Displays which roles have been granted system privileges.
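
      For example, the roles granted to a given user can be listed with:

        select granted_role, admin_option, default_role
        from dba_role_privs
        where grantee = 'DANIEL';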

  8. Using National Language Support
    1. Choose a character set and national character set for a database.
    2. A character set is specified when creating a database; it determines what languages can be represented in the database. This choice will influence how you create the database schema and develop applications that process character data. It will also influence interoperability with operating system resources and database performance.

      When processing characters, computer systems handle character data as numeric codes rather than their graphical representation. A group of characters (e.g. alphabetic characters, ideographs, symbols, punctuation marks, control characters) can be encoded as a coded character set. A coded character set assigns a unique numeric code to each character in the character repertoire. There are many different character sets used in the industry, and Oracle supports most national, international, and vendor-specific encoded character sets.

      Character sets differ in:

      • The number of characters available.
      • The particular characters (character repertoire) available.
      • The language scripts, and therefore the languages, that can be represented.
      • The code values assigned to each character in the repertoire.
      • The encoding scheme used to represent a character entity.

      The first decision in choosing a character set is based on which languages you wish to store in the database. The characters that are encoded in a character set depend on the writing system that will be represented. A writing system can be used to represent a language or a group of languages. In general terms, writing systems can be classified as phonetic or ideographic.

      • Phonetic writing systems: Consist of symbols which represent the different sounds associated with a language (Latin, Greek). Note that alphabets can represent more than one language; for example, the Latin alphabet can represent many Western European languages such as French, German, and English. Character sets associated with phonetic writing systems (alphabets) can typically be encoded in one byte, since the character repertoire is usually smaller than 256 characters.
      • Ideographic writing systems: In contrast to phonetic systems, they consist of ideographs or pictographs that represent the meaning of a word, not the sounds of a language. Chinese and Japanese are ideographic writing systems, based on tens of thousands of ideographs. They may also use a syllabary, which provides a way to communicate phonetic information along with the pictographs when necessary; Japanese has two syllabaries, one of which (katakana) is normally used for foreign and onomatopoeic words. Character sets associated with ideographic systems typically must be encoded in more than one byte because the repertoire can be quite large. Ideally a character set should be available on the OS and handled by the application in order to ensure seamless integration.

      UNICODE is a universal character set that encompasses most major scripts of the modern and ancient world. The Unicode character set provides support for a character repertoire of approx 39,000 characters and continues to grow.

      Encoding schemes have different performance characteristics and can influence your database schema and application development requirements for handling character data, so you need to be aware of the characteristics of the encoding scheme used by the character set you choose. Character sets typically use one of the following encoding schemes:

      • Single-byte encoding schemes: These are the most efficient encoding schemes available. They take up the least amount of space to represent a character and are easy to process and program with, because one character is represented in one byte.
      • 7-bit encoding schemes: Single-byte 7-bit encoding schemes can define up to 128 characters and normally support just one language. Two of the most popular single-byte character sets are ASCII and EBCDIC.
      • 8-bit encoding schemes: Single-byte 8-bit encoding schemes can define up to 256 characters and often support a group of related languages. An example is ISO 8859-1, which supports many Western European languages.
      • Multi-byte encoding schemes: These are needed to support ideographic scripts used in Asian languages like Chinese and Japanese. These schemes use either a fixed number of bytes to represent a character or a variable number of bytes per character.

      The character set is used by Oracle for:

      • Data stored in CHAR, VARCHAR2, CLOB, and LONG columns.
      • Identifiers such as table names, column names, and PL/SQL variables.
      • Entering and Storing SQL and PL/SQL program source.

      NCHAR: In some cases you may wish to choose an alternate character set for the database, because the properties of a different character encoding scheme may be more desirable for extensive character processing operations or may facilitate ease of programming.

      In particular, the following datatypes can use an alternate character set: NCHAR, NVARCHAR2, and NCLOB. Specifying an NCHAR character set allows you to use a character set different from the database character set in NCHAR, NVARCHAR2, and NCLOB columns. This can be particularly useful for customers who want a fixed-width multibyte encoding scheme, which the database character set cannot be. The benefits of using a fixed-width multibyte encoding over a variable-width one are:

      • Optimized string processing performance on NCHAR, NVARCHAR2, and NCLOB columns.
      • Ease of programming with a fixed-width multibyte character set as opposed to a variable-width multibyte character set.

      NOTE: All SQL commands use the database character set, not the NCHAR character set; therefore literals can only be specified in the database character set. Some string operations will be faster if you choose a fixed-width character set for the national character set. Since SQL text can only be represented in the database character set, you must choose an NCHAR character set whose character repertoire is equivalent to, or a subset of, the database character set's repertoire.

      The database character set should always be a superset of, or equivalent to, the client operating system's native character set. To change the character set of a particular database, it is normally necessary to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to issue the command ALTER DATABASE CHARACTER SET.

      The following character sets can take advantage of this command when the original character set is US7ASCII, since US7ASCII is a subset of them: WE8ISO8859P1, AL24UTFFSS, and UTF8.

      The syntax for the command is:

        ALTER DATABASE [db_name] CHARACTER SET <new_character_set>
        ALTER DATABASE [db_name] NATIONAL CHARACTER SET <new_NCHAR_character_set>
      
    3. Specify the language-dependent behavior using initialization parameters, environment variables, and the ALTER SESSION command.
    4. The view V$NLS_PARAMETERS describes the NLS parameters for the session.

      Language-dependent operations are controlled by a number of parameters and environment variables. Since Oracle uses a client/server architecture, the client and server may run in the same or different locales or languages; in the event that the client and server use different character sets, Oracle handles character set conversion of strings automatically.

      The locale-specific NLS data is stored in a directory specified by the ORA_NLS33 environment variable; on most systems this is set to $ORACLE_HOME/ocommon/nls/admin/data.

      There are four ways to specify NLS parameters:

      • As initialization parameters on the server, that is, in the init.ora file. These settings have no effect on the client side; they control only the server's behavior. e.g.: NLS_TERRITORY = "CZECH REPUBLIC".
      • As environment variables on the client. These specify locale-dependent behavior for the client, overriding the defaults set for the server in the initialization file. e.g.: % export NLS_SORT=FRENCH.
      • As ALTER SESSION parameters. These can be used to override the defaults set for the server in init.ora or set by the client with environment variables. e.g.: ALTER SESSION SET NLS_SORT = FRENCH.
      • As a SQL function parameter: NLS parameters can be used explicitly to hardcode NLS behavior within a SQL function. Doing so overrides the init.ora settings, the client environment, and the ALTER SESSION parameters. e.g.: TO_CHAR(hiredate,'DD/MM/YYYY','nls_date_language = FRENCH').
      Parameter                  Description                                                     Default                     Scope (I = init.ora, E = environment variable, A = ALTER SESSION)
      NLS_CALENDAR               Calendar system                                                 Gregorian                   I, -, A
      NLS_COMP                   SQL operator comparison                                         Binary                      -, E, A
      NLS_CREDIT                 Credit accounting symbol                                        NLS_TERRITORY               I, E, A
      NLS_CURRENCY               Local currency symbol                                           NLS_TERRITORY               I, E, A
      NLS_DATE_FORMAT            Date format                                                     NLS_TERRITORY               I, E, A
      NLS_DATE_LANGUAGE          Language for day and month names                                NLS_LANGUAGE                I, E, A
      NLS_DEBIT                  Debit accounting symbol                                         NLS_TERRITORY               I, E, A
      NLS_ISO_CURRENCY           ISO international currency symbol                               NLS_TERRITORY               I, E, A
      NLS_LANG                   Language, territory, character set                              American_America.US7ASCII   -, E, -
      NLS_LANGUAGE               Language                                                        NLS_LANG                    I, -, A
      NLS_LIST_SEPARATOR         Character separating items in a list                            NLS_TERRITORY               I, -, A
      NLS_MONETARY_CHARACTERS    Monetary symbols for dollar and cents (or their equivalents)    NLS_TERRITORY               I, E, A
      NLS_NCHAR                  National character set                                          NLS_LANG                    -, E, -
      NLS_NUMERIC_CHARACTERS     Decimal character and group separator                           NLS_TERRITORY               I, E, A
      NLS_SORT                   Character sort sequence                                         NLS_LANGUAGE                I, E, A
      NLS_TERRITORY              Territory                                                       NLS_LANG                    I, -, A
      NLS_DUAL_CURRENCY          Dual currency symbol                                            NLS_TERRITORY               I, E, A

      NLS_LANG: This is the simplest way to specify locale behavior (a locale is the linguistic and cultural environment in which a program runs). It sets the language, territory, and character set used by the database for both the server session and the client application; using this one parameter ensures that the language and territory environment for both the server and client are the same. The parameter has three components, specified in the form NLS_LANG = language_territory.charset.

      Each component controls a subset of the NLS features:

      • language : Specifies conventions such as the language used for Oracle messages, day names, and month names. The language argument specifies default values for the territory and character set arguments, so either territory or charset can be omitted.
      • territory : Specifies conventions such as the default calendar, collation, and date, monetary, and numeric formats.
      • charset : Specifies the character set used by the client application (normally that of the user's terminal). Each language has a default character set associated with it.

      NLS_LANG is specified as an environment variable on the client, for example % export NLS_LANG=FRENCH_FRANCE.WE8DEC. Because NLS_LANG is an environment variable, it is read by the client at startup; the client communicates the information defined by NLS_LANG to the server when it connects.

      Because NLS_LANG sets the NLS language and territory environment used by both the server session and the client application, the language environments of both are automatically the same. If the environment needs to be refined, the parameters NLS_TERRITORY and NLS_LANGUAGE can be set separately (in init.ora on the server side).

      NLS_LANGUAGE: Specifies the language for server messages, the language for day and month names and their abbreviations, and the default sorting sequence for character data when ORDER BY is specified. The NLS_LANGUAGE value specified in init.ora is the default for all sessions on that instance.
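
      A short sketch of the session-level and function-level overrides described above (the language values are arbitrary examples):

        ALTER SESSION SET NLS_DATE_LANGUAGE = FRENCH;
        SELECT TO_CHAR(SYSDATE, 'DD-Month-YYYY') FROM dual;
        -- the NLS argument to TO_CHAR overrides even the session setting
        SELECT TO_CHAR(SYSDATE, 'DD-Month-YYYY', 'NLS_DATE_LANGUAGE = GERMAN') FROM dual;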

    5. Use the different types of National Language Support (NLS) parameters.
    6. NLS allows you to retrieve data in native languages. It ensures that database utilities, error messages, sort order, and date, time, monetary, numeric, and calendar conventions automatically adapt to the native language and locale.

      NLS is implemented with the NLS runtime library, which provides a set of language-independent functions governed by locale-specific data that is identified and loaded at runtime. See the NLS details specified previously.

    7. Obtain information about NLS usage.
    8. The view V$NLS_PARAMETERS holds information on the NLS variables.
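
      For example:

        select parameter, value from v$nls_parameters;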