Channel: Sameer Shaik. B.E,M.S,M.B.A,P.M.P,C.S.M

Create and manage contexts


What is an Application Context?

An application context is a set of name-value pairs that Oracle Database stores in memory. The application context has a label called a namespace, for example, empno_ctx for an application context that retrieves employee IDs. Inside the context are the name-value pairs (an associative array): the name points to a location in memory that holds the value. An application can use the application context to access session information about a user, such as the user ID, other user-specific information, or a client ID, and then securely pass this data to the database. You can then use this information to either permit or prevent the user from accessing data through the application. You can use application contexts to authenticate both database and nondatabase users.


The components of the name-value pair are as follows:
  • Name. Refers to the name of the attribute set that is associated with the value. For example, if the empno_ctx application context retrieves an employee ID from the HR.EMPLOYEES table, it could have a name such as employee_id.
  • Value. Refers to a value set by the attribute. For example, for the empno_ctx application context, if you wanted to retrieve an employee ID from the HR.EMPLOYEES table, you could create a value called emp_id that sets the value for this ID.

Oracle Database stores the application context values in a secure data cache available in the User Global Area (UGA) or the System Global Area (SGA, sometimes called the "Shared" Global Area).

Types of Application Contexts

There are three general categories of application contexts:
  • Database session-based application contexts. This type retrieves data that is stored in the database user session (that is, the UGA) cache. There are three categories of database session-based application contexts:
    • Initialized locally. Initializes the application context locally, to the session of the user.
    • Initialized externally. Initializes the application context from an Oracle Call Interface (OCI) application, a job queue process, or a connected user database link.
    • Initialized globally. Uses attributes and values from a centralized location, such as an LDAP directory.
  • Global application contexts. This type retrieves data that is stored in the System Global Area (SGA) so that it can be used for applications that use a sessionless model, such as middle-tier applications in a three-tiered architecture. A global application context is useful if the session context must be shared across sessions, for example, through connection pool implementations.
  • Client session-based application contexts. This type uses Oracle Call Interface functions on the client side to set the user session data, and then to perform the necessary security checks to restrict user access.

Database Session-Based Application Contexts

If you must retrieve session information for database users, use a database session-based application context. This type of application context uses a PL/SQL procedure within Oracle Database to retrieve, set, and secure the data it manages.


The database session-based application context is managed entirely within Oracle Database. Oracle Database sets the values, and when the user exits the session, it automatically clears the application context values stored in the cache. If the user connection ends abnormally, for example, during a power failure, then the PMON background process cleans up the application context data. You do not need to explicitly clear the application context from the cache.

Retrieve session information. To retrieve the user session information, you can use the SYS_CONTEXT SQL function. The SYS_CONTEXT function returns the value of the parameter associated with the context namespace. You can use this function in both SQL and PL/SQL statements. Typically, you will use the built-in USERENV namespace to retrieve the session information of a user.
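As a minimal sketch, the same USERENV lookup can be done from both SQL and PL/SQL:

```sql
-- In SQL:
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') FROM dual;

-- In PL/SQL (SET SERVEROUTPUT ON to see the output):
DECLARE
  v_user VARCHAR2(128);
BEGIN
  v_user := SYS_CONTEXT('USERENV', 'SESSION_USER');
  DBMS_OUTPUT.PUT_LINE('Connected as: ' || v_user);
END;
/
```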

Set the name-value attributes of the application context you created with CREATE CONTEXT. You can use the DBMS_SESSION.SET_CONTEXT procedure to set the name-value attributes of the application context. The name-value attributes can hold information such as the user ID, IP address, authentication mode, the name of the application, and so on. The values of the attributes you set remain either until you reset them, or until the user ends the session. Note the following:
  • If the value of the parameter in the namespace already has been set, then SET_CONTEXT overwrites this value.
  • Be aware that any changes in the context value are reflected immediately, and subsequent calls to access the value through the SYS_CONTEXT function will return the most recent value.
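A sketch of the overwrite behavior, using a hypothetical context hr_ctx (this code must run inside the package named in CREATE CONTEXT hr_ctx USING ...; calling SET_CONTEXT directly raises ORA-01031):

```sql
BEGIN
  DBMS_SESSION.SET_CONTEXT('hr_ctx', 'region', 'EMEA');
  DBMS_SESSION.SET_CONTEXT('hr_ctx', 'region', 'APAC');  -- overwrites 'EMEA'
END;
/

-- A subsequent read returns the most recent value, 'APAC':
SELECT SYS_CONTEXT('hr_ctx', 'region') FROM dual;
```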


Execute the package. After you create the package, the user must execute it at logon. You can create a logon trigger to execute the package automatically when the user logs on, or you can embed this functionality in your applications. Remember that the application context session values are cleared automatically when the user ends the session, so you do not need to manually remove the session data.


Using SYS_CONTEXT with Database Links

When SQL statements within a user session involve database links, then Oracle Database runs the SYS_CONTEXT SQL function at the host computer of the database link, and then captures the context information there (at the host computer).

If remote PL/SQL procedure calls are run on a database link, then Oracle Database runs any SYS_CONTEXT function inside such a procedure at the destination database of the link. In this case, only externally initialized application contexts are available at the database link destination site. For security reasons, Oracle Database propagates only the externally initialized application context information to the destination site from the initiating database link site.
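As a sketch (hq_link is a hypothetical database link name): in a plain SQL statement the function is evaluated at the initiating (host) site, whereas the same call inside a remote PL/SQL procedure would run at the link's destination:

```sql
-- Evaluated at the initiating site: returns the LOCAL session user,
-- even though the statement references a remote object over hq_link.
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') FROM dual@hq_link;
```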

Using DBMS_SESSION.SET_CONTEXT to Set Session Information

DBMS_SESSION.SET_CONTEXT (
  namespace VARCHAR2,
  attribute VARCHAR2,
  value     VARCHAR2,
  username  VARCHAR2,
  client_id VARCHAR2);
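The last two parameters are optional and apply to globally accessed contexts. A hedged usage sketch with hypothetical names (remember that these calls must run from the trusted package named in CREATE CONTEXT):

```sql
-- Minimal call: set attribute 'employee_id' in namespace 'emp'.
BEGIN
  DBMS_SESSION.SET_CONTEXT(
    namespace => 'emp',
    attribute => 'employee_id',
    value     => '223');
END;
/

-- For a global context, username and client_id restrict which sessions
-- see the value ('APPS_USER' and 'client_1234' are made-up values):
BEGIN
  DBMS_SESSION.SET_CONTEXT(
    namespace => 'global_ctx',
    attribute => 'dept',
    value     => 'SALES',
    username  => 'APPS_USER',
    client_id => 'client_1234');
END;
/
```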


Demo:
SHAIKDB>select * from dba_context where schema='LOB';

NAMESPACE              SCHEMA                 PACKAGE                TYPE
------------------------------ ------------------------------ ------------------------------ ----------------------
EMP                  LOB                 EMP_PKG                ACCESSED LOCALLY

SHAIKDB>show user
USER is "LOB"

SHAIKDB>drop context emp;

Context dropped.


SHAIKDB>create context emp using emp_pkg;

Context created.

SHAIKDB>create package emp_pkg is
   procedure emp_proc;
   end;
   /

Package created.

SHAIKDB>create or replace package body emp_pkg is
procedure emp_proc is
  id hr.employees.employee_id%type;
 lname hr.employees.last_name%type;
 income hr.employees.salary%type;
 social hr.employees.ssn%type;
begin
select employee_id,last_name,salary,ssn into id,lname,income,social from
hr.employees where first_name=sys_context('USERENV','SESSION_USER');
dbms_session.set_context('emp','employee_id',id);
dbms_session.set_context('emp','first_name',sys_context('USERENV','SESSION_USER'));
dbms_session.set_context('emp','last_name',lname);
dbms_session.set_context('emp','salary',income);
dbms_session.set_context('emp','ssn',social);
exception
when no_data_found then null;
end;
end;
/
Package body created.


Insert data into the hr table for user LOB & PFAY:

SHAIKDB>/
Enter value for id: 223
Enter value for fname: LOB
Enter value for lname: LOB
Enter value for name: LOB
Enter value for phone: 5141234567
old   1: insert into hr.employees values (&id,'&fname','&lname','&NAME','&phone',sysdate,'AD_VP',50000,null,null,100,null)
new   1: insert into hr.employees values (223,'LOB','LOB','LOB','5141234567',sysdate,'AD_VP',50000,null,null,100,null)

1 row created.

SHAIKDB>create trigger log_trigger after logon on database
    begin
      lob.emp_pkg.emp_proc;
     end;
   /

Trigger created.



[oracle@collabn1 ~]$ sqlplus pfay/pfay

SQL*Plus: Release 11.2.0.1.0 Production on Mon Sep 28 19:50:08 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SHAIKDB>select sys_context('USERENV','SESSION_USER') from dual;

SYS_CONTEXT('USERENV','SESSION_USER')
----------------------------------------------------------------------------------------------------
PFAY


SHAIKDB>select sys_context('emp','employee_id') from dual;

SYS_CONTEXT('EMP','EMPLOYEE_ID')
----------------------------------------------------------------------------------------------------
222


SHAIKDB>select sys_context('emp','first_name') from dual;

SYS_CONTEXT('EMP','FIRST_NAME')
----------------------------------------------------------------------------------------------------
PFAY

SHAIKDB>select sys_context('emp','last_name') from dual;

SYS_CONTEXT('EMP','LAST_NAME')
----------------------------------------------------------------------------------------------------
PFAY_LAST

SHAIKDB>select sys_context('emp','salary') from dual;

SYS_CONTEXT('EMP','SALARY')
----------------------------------------------------------------------------------------------------
50000

SHAIKDB>select sys_context('emp','ssn') from dual;

SYS_CONTEXT('EMP','SSN')
----------------------------------------------------------------------------------------------------




SHAIKDB>select sys_context('USERENV','SESSION_USER') from dual;

SYS_CONTEXT('USERENV','SESSION_USER')
----------------------------------------------------------------------------------------------------
LOB



Demo2:

You cannot set the context manually; it must be set via the package associated with the context. Let's create a package and a logon trigger to test the context values:

SHAIKDB>grant create trigger,create procedure,create any context,create session to pfay;

Grant succeeded.

SHAIKDB>exec dbms_session.set_context('emp1','id',1000);
BEGIN dbms_session.set_context('emp1','id',1000); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 101
ORA-06512: at line 1




SHAIKDB>create context test1 using test_pkg;

Context created.

SHAIKDB>create package test_pkg is
 2  procedure test_proc;
 3  end;
 4  /

Package created.

SHAIKDB>create or replace package body test_pkg is
 2  procedure test_proc is
 3  begin
 4  dbms_session.set_context('test1','id',1000);
 5  end;
 6  end;
 7  /

Package body created.

SHAIKDB>create trigger test1_trigger after logon on schema
 2  begin
 3  pfay.test_pkg.test_proc;
 4  end;
 5  /

Trigger created.


Predefined Parameters of Namespace USERENV

Parameter
Return Value
ACTION
Identifies the position in the module (application name) and is set through the DBMS_APPLICATION_INFO package or OCI.
AUDITED_CURSORID
Returns the cursor ID of the SQL that triggered the audit. This parameter is not valid in a fine-grained auditing environment. If you specify it in such an environment, then Oracle Database always returns NULL.
AUTHENTICATED_IDENTITY
Returns the identity used in authentication. In the list that follows, the type of user is followed by the value returned:
  • Kerberos-authenticated enterprise user: Kerberos principal name
  • Kerberos-authenticated external user: Kerberos principal name; same as the schema name
  • SSL-authenticated enterprise user: the DN in the user's PKI certificate
  • SSL-authenticated external user: the DN in the user's PKI certificate
  • Password-authenticated enterprise user: nickname; same as the login name
  • Password-authenticated database user: the database user name; same as the schema name
  • OS-authenticated external user: the external operating system user name
  • Radius/DCE-authenticated external user: the schema name
  • Proxy with DN: Oracle Internet Directory DN of the client
  • Proxy with certificate: certificate DN of the client
  • Proxy with user name: database user name if the client is a local database user; nickname if the client is an enterprise user
  • SYSDBA/SYSOPER using password file: login name
  • SYSDBA/SYSOPER using OS authentication: operating system user name
AUTHENTICATION_DATA
Data being used to authenticate the login user. For X.503 certificate authenticated sessions, this field returns the context of the certificate in HEX2 format.
Note: You can change the return value of the AUTHENTICATION_DATA attribute using the length parameter of the syntax. Values of up to 4000 are accepted. This is the only attribute of USERENV for which Oracle Database implements such a change.
AUTHENTICATION_METHOD
Returns the method of authentication. In the list that follows, the type of user is followed by the method returned:
  • Password-authenticated enterprise user, local database user, or SYSDBA/SYSOPER using password file; proxy with user name using password: PASSWORD
  • Kerberos-authenticated enterprise or external user: KERBEROS
  • SSL-authenticated enterprise or external user: SSL
  • Radius-authenticated external user: RADIUS
  • OS-authenticated external user or SYSDBA/SYSOPER: OS
  • DCE-authenticated external user: DCE
  • Proxy with certificate, DN, or user name without using password: NONE
  • Background process (job queue slave process): JOB
You can use IDENTIFICATION_TYPE to distinguish between external and enterprise users when the authentication method is Password, Kerberos, or SSL.
BG_JOB_ID
Job ID of the current session if it was established by an Oracle Database background process. Null if the session was not established by a background process.
CLIENT_IDENTIFIER
Returns an identifier that is set by the application through the DBMS_SESSION.SET_IDENTIFIER procedure, the OCI attribute OCI_ATTR_CLIENT_IDENTIFIER, or the Java class Oracle.jdbc.OracleConnection.setClientIdentifier. This attribute is used by various database components to identify lightweight application users who authenticate as the same database user.
CLIENT_INFO
Returns up to 64 bytes of user session information that can be stored by an application using the DBMS_APPLICATION_INFO package.
CURRENT_BIND
The bind variables for fine-grained auditing.
CURRENT_EDITION_ID
The identifier of the current edition.
CURRENT_EDITION_NAME
The name of the current edition.
CURRENT_SCHEMA
The name of the currently active default schema. This value may change during the duration of a session through use of an ALTER SESSION SET CURRENT_SCHEMA statement. This may also change during the duration of a session to reflect the owner of any active definer's rights object. When used directly in the body of a view definition, this returns the default schema used when executing the cursor that is using the view; it does not respect views used in the cursor as being definer's rights.
Note: Oracle recommends against issuing the SQL statement ALTER SESSION SET CURRENT_SCHEMA from within a stored PL/SQL unit.
CURRENT_SCHEMAID
Identifier of the currently active default schema.
CURRENT_SQL
CURRENT_SQLn
CURRENT_SQL returns the first 4K bytes of the current SQL that triggered the fine-grained auditing event. The CURRENT_SQLn attributes return subsequent 4K-byte increments, where n can be an integer from 1 to 7, inclusive. CURRENT_SQL1 returns bytes 4K to 8K; CURRENT_SQL2 returns bytes 8K to 12K, and so forth. You can specify these attributes only inside the event handler for the fine-grained auditing feature.
CURRENT_SQL_LENGTH
The length of the current SQL statement that triggers fine-grained audit or row-level security (RLS) policy functions or event handlers. Valid only inside the function or event handler.
CURRENT_USER
The name of the database user whose privileges are currently active. This may change during the duration of a session to reflect the owner of any active definer's rights object. When no definer's rights object is active, CURRENT_USER returns the same value as SESSION_USER. When used directly in the body of a view definition, this returns the user that is executing the cursor that is using the view; it does not respect views used in the cursor as being definer's rights.
CURRENT_USERID
The identifier of the database user whose privileges are currently active.
DATABASE_ROLE
The database role using the SYS_CONTEXT function with the USERENV namespace. The role is one of the following: PRIMARY, PHYSICAL STANDBY, LOGICAL STANDBY, SNAPSHOT STANDBY.
DB_DOMAIN
Domain of the database as specified in the DB_DOMAIN initialization parameter.
DB_NAME
Name of the database as specified in the DB_NAME initialization parameter.
DB_UNIQUE_NAME
Name of the database as specified in the DB_UNIQUE_NAME initialization parameter.
ENTRYID
The current audit entry number. The audit entryid sequence is shared between fine-grained audit records and regular audit records. You cannot use this attribute in distributed SQL statements. The correct auditing entry identifier can be seen only through an audit handler for standard or fine-grained audit.
ENTERPRISE_IDENTITY
Returns the user's enterprise-wide identity:
  • For enterprise users: the Oracle Internet Directory DN.
  • For external users: the external identity (Kerberos principal name, Radius and DCE schema names, OS user name, certificate DN).
  • For local users and SYSDBA/SYSOPER logins: NULL.
The value of the attribute differs by proxy method:
  • For a proxy with DN: the Oracle Internet Directory DN of the client.
  • For a proxy with certificate: the certificate DN of the client for external users; the Oracle Internet Directory DN for global users.
  • For a proxy with user name: the Oracle Internet Directory DN if the client is an enterprise user; NULL if the client is a local database user.
FG_JOB_ID
Job ID of the current session if it was established by a client foreground process. Null if the session was not established by a foreground process.
GLOBAL_CONTEXT_MEMORY
Returns the number being used in the System Global Area by the globally accessed context.
GLOBAL_UID
Returns the global user ID from Oracle Internet Directory for Enterprise User Security (EUS) logins; returns null for all other logins.
HOST
Name of the host machine from which the client has connected.
IDENTIFICATION_TYPE
Returns the way the user's schema was created in the database. Specifically, it reflects the IDENTIFIED clause in the CREATE/ALTER USER syntax. In the list that follows, the syntax used during schema creation is followed by the identification type returned:
  • IDENTIFIED BY password: LOCAL
  • IDENTIFIED EXTERNALLY: EXTERNAL
  • IDENTIFIED GLOBALLY: GLOBAL SHARED
  • IDENTIFIED GLOBALLY AS DN: GLOBAL PRIVATE
INSTANCE
The instance identification number of the current instance.
INSTANCE_NAME
The name of the instance.
IP_ADDRESS
IP address of the machine from which the client is connected. If the client and server are on the same machine and the connection uses IPv6 addressing, then ::1 is returned.
ISDBA
Returns TRUE if the user has been authenticated as having DBA privileges either through the operating system or through a password file.
LANG
The abbreviated name for the language, a shorter form than the existing 'LANGUAGE' parameter.
LANGUAGE
The language and territory currently used by your session, along with the database character set, in this form:
language_territory.characterset
MODULE
The application name (module) set through the DBMS_APPLICATION_INFO package or OCI.
NETWORK_PROTOCOL
Network protocol being used for communication, as specified in the 'PROTOCOL=protocol' portion of the connect string.
NLS_CALENDAR
The current calendar of the current session.
NLS_CURRENCY
The currency of the current session.
NLS_DATE_FORMAT
The date format for the session.
NLS_DATE_LANGUAGE
The language used for expressing dates.
NLS_SORT
BINARY or the linguistic sort basis.
NLS_TERRITORY
The territory of the current session.
OS_USER
Operating system user name of the client process that initiated the database session.
POLICY_INVOKER
The invoker of row-level security (RLS) policy functions.
PROXY_ENTERPRISE_IDENTITY
Returns the Oracle Internet Directory DN when the proxy user is an enterprise user.
PROXY_GLOBAL_UID
Returns the global user ID from Oracle Internet Directory for Enterprise User Security (EUS) proxy users; returns NULL for all other proxy users.
PROXY_USER
Name of the database user who opened the current session on behalf of SESSION_USER.
PROXY_USERID
Identifier of the database user who opened the current session on behalf of SESSION_USER.
SERVER_HOST
The host name of the machine on which the instance is running.
SERVICE_NAME
The name of the service to which a given session is connected.
SESSION_EDITION_ID
The identifier of the session edition.
SESSION_EDITION_NAME
The name of the session edition.
SESSION_USER
The name of the database user at logon. For enterprise users, returns the schema. For other users, returns the database user name. This value remains the same throughout the duration of the session.
SESSION_USERID
The identifier of the database user at logon.
SESSIONID
The auditing session identifier. You cannot use this attribute in distributed SQL statements.
SID
The session ID.
STATEMENTID
The auditing statement identifier. STATEMENTID represents the number of SQL statements audited in a given session. You cannot use this attribute in distributed SQL statements. The correct auditing statement identifier can be seen only through an audit handler for standard or fine-grained audit.
TERMINAL
The operating system identifier for the client of the current session. In distributed SQL statements, this attribute returns the identifier for your local session. In a distributed environment, this is supported only for remote SELECT statements, not for remote INSERT, UPDATE, or DELETE operations. (The return length of this parameter may vary by operating system.)
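A few of these parameters in action (a sketch; CLIENT_IDENTIFIER must first be set by the application, and 'app_user_42' is a made-up identifier):

```sql
-- Tag the session with a lightweight application user.
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('app_user_42');
END;
/

-- Read it back along with other USERENV attributes.
SELECT SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') AS client_id,
       SYS_CONTEXT('USERENV', 'HOST')              AS client_host,
       SYS_CONTEXT('USERENV', 'IP_ADDRESS')        AS client_ip,
       SYS_CONTEXT('USERENV', 'MODULE')            AS module
  FROM dual;
```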



Data Dictionary Views That Display Information about Application Contexts

View
Description
ALL_CONTEXT
Describes all context namespaces in the current session for which attributes and values were specified using the DBMS_SESSION.SET_CONTEXT procedure. It lists the namespace and its associated schema and PL/SQL package.
ALL_POLICY_CONTEXTS
Describes the driving contexts defined for the synonyms, tables, and views accessible to the current user. (A driving context is a context used in a Virtual Private Database policy.)
DBA_CONTEXT
Provides all context namespace information in the database. Its columns are the same as those in the ALL_CONTEXT view, except that it includes the TYPE column. The TYPE column describes how the application context is accessed or initialized.
DBA_POLICY_CONTEXTS
Describes all driving contexts in the database that were added by the DBMS_RLS.ADD_POLICY_CONTEXT procedure. Its columns are the same as those in ALL_POLICY_CONTEXTS.
SESSION_CONTEXT
Describes the context attributes and their values set for the current session.
USER_POLICY_CONTEXTS
Describes the driving contexts defined for the synonyms, tables, and views owned by the current user. Its columns (except for OBJECT_OWNER) are the same as those in ALL_POLICY_CONTEXTS.
V$CONTEXT
Lists set attributes in the current session. Users do not have access to this view unless you grant the user the SELECT privilege on it.
V$SESSION
Lists detailed information about each current session. Users do not have access to this view unless you grant the user the SELECT privilege on it.
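To see what is set in the current session, a minimal sketch against SESSION_CONTEXT:

```sql
-- Lists every attribute/value pair set in the current session's contexts.
SELECT namespace, attribute, value
  FROM session_context
 ORDER BY namespace, attribute;
```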


Documentation:
Oracle® Database Security Guide 11g Release 2 (11.2) → Part Number E10574-02 → 6 Using Application Contexts to Retrieve User Information

SYS_CONTEXT → Oracle® Database SQL Language Reference 11g Release 2 (11.2) → Part Number E10592-02

ORA-55626: Cannot remove the Flashback Archive's primary tablespace


Trying to remove a tablespace from the default flashback archive throws an error until you delete the whole flashback archive:


ORA-55626: Cannot remove the Flashback Archive's primary tablespace


SHAIKDB>create flashback archive flash_days tablespace tbs2 quota 100m retention 10 day;

Flashback archive created.

SHAIKDB>alter flashback archive flash_days set default;

Flashback archive altered.

SHAIKDB>alter flashback archive flash_days add tablespace tbs3;

Flashback archive altered.

SHAIKDB>col FLASHBACK_ARCHIVE_NAME for a20
SHAIKDB>select * from dba_flashback_archive_ts;

FLASHBACK_ARCHIVE_NA FLASHBACK_ARCHIVE# TABLESPACE_NAME            QUOTA_IN_MB
-------------------- ------------------ ------------------------------ ----------------------------------------
FLASH_ARCH_DEFAULT             1 TBS1                  1024
FLASH_DAYS                 2 TBS2                  100
FLASH_MONTHS                 3 TBS3                  1024
FLASH_DAYS                 2 TBS3                  10240

Fix:

Drop the flashback archive itself:


SHAIKDB>alter flashback archive flash_days remove tablespace tbs2;
alter flashback archive flash_days remove tablespace tbs2
                                                     *
ERROR at line 1:
ORA-55626: Cannot remove the Flashback Archive's primary tablespace


SHAIKDB>drop flashback archive flash_days;

Flashback archive dropped.

SHAIKDB>select * from dba_flashback_archive_ts;

FLASHBACK_ARCHIVE_NA FLASHBACK_ARCHIVE# TABLESPACE_NAME            QUOTA_IN_MB
-------------------- ------------------ ------------------------------ ----------------------------------------
FLASH_ARCH_DEFAULT              1 TBS1                   1024
FLASH_MONTHS                  3 TBS3                   1024

Administer flashback data archive and schema evolution

Using Flashback Data Archive (Oracle Total Recall)
A Flashback Data Archive provides the ability to track and store transactional changes to a table over its lifetime. A Flashback Data Archive is useful for compliance with record retention policies and audit reports.

You can specify a default Flashback Data Archive for the system. A Flashback Data Archive is configured with retention time. Data archived in the Flashback Data Archive is retained for the retention time.
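Data older than the retention period is purged automatically; it can also be purged by hand. A sketch, using flash_days as the example archive name:

```sql
-- Manually purge archived rows older than one day from the flash_days archive.
ALTER FLASHBACK ARCHIVE flash_days
  PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);

-- Or purge everything in the archive:
-- ALTER FLASHBACK ARCHIVE flash_days PURGE ALL;
```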

Conditions for enabling flashback archiving on a table:
  • You have the FLASHBACK ARCHIVE object privilege on the Flashback Data Archive that you want to use for that table.
  • The table is not nested, clustered, temporary, remote, or external.
  • The table contains neither LONG nor nested columns.

To disable flashback archiving for a table, you need either:
  • the FLASHBACK ARCHIVE ADMINISTER system privilege, or
  • to be logged on as SYSDBA.

Creating a Flashback Data Archive

Creating a default Flashback Data Archive that uses up to 1 GB of tablespace tbs1, whose data are retained for 100 years:

SHAIKDB>create flashback archive default flash_arch_default tablespace tbs1 quota 1g retention 100 year;

Flashback archive created.


SHAIKDB>create flashback archive flash_days tablespace tbs2 quota 100m retention 10 day;

Flashback archive created.

SHAIKDB>alter flashback archive flash_days set default;

Flashback archive altered.

SHAIKDB>create flashback archive flash_months tablespace tbs3 quota 1g retention 1 month;

Flashback archive created.

Add space:

SHAIKDB>alter flashback archive flash_days add tablespace tbs3;

Flashback archive altered.

SHAIKDB>alter flashback archive flash_days modify retention 45 day;

Flashback archive altered.

SHAIKDB>alter flashback archive flash_days modify tablespace tbs3 quota 10g;

Flashback archive altered.


SHAIKDB>col FLASHBACK_ARCHIVE_NAME for a20
SHAIKDB>select * from dba_flashback_archive_ts;

FLASHBACK_ARCHIVE_NA FLASHBACK_ARCHIVE# TABLESPACE_NAME            QUOTA_IN_MB
-------------------- ------------------ ------------------------------ ----------------------------------------
FLASH_ARCH_DEFAULT             1 TBS1                  1024
FLASH_DAYS                 2 TBS2                  100
FLASH_MONTHS                 3 TBS3                  1024
FLASH_DAYS                 2 TBS3                  10240

SHAIKDB>select * from dba_flashback_Archive;

OWNER_NAME              FLASHBACK_ARCHIVE_NA FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME                        LAST_PURGE_TIME                                STATUS
------------------------------ -------------------- ------------------ ----------------- --------------------------------------------------------------------------- --------------------------------------------------------------------------- -------
SYS                  FLASH_ARCH_DEFAULT            1           36500 28-SEP-15 09.24.57.000000000 PM            28-SEP-15 09.24.57.000000000 PM
SYS                  FLASH_DAYS                2             45 28-SEP-15 09.29.39.000000000 PM            28-SEP-15 09.29.39.000000000 PM                        DEFAULT
SYS                  FLASH_MONTHS                3             30 28-SEP-15 09.30.28.000000000 PM            28-SEP-15 09.30.28.000000000 PM


Flashback Data Archive requires Automatic Undo Management:

undo_management             string     AUTO

SHAIKDB>alter flashback archive flash_months add tablespace tbs2;

Flashback archive altered.

SHAIKDB>alter flashback archive flash_months remove tablespace tbs3;
alter flashback archive flash_months remove tablespace tbs3
                                                      *
ERROR at line 1:
ORA-55626: Cannot remove the Flashback Archive's primary tablespace

If it is not the primary tablespace (the one the archive was created with), then you can remove the tablespace using the REMOVE TABLESPACE clause of ALTER FLASHBACK ARCHIVE:

SHAIKDB>alter flashback archive flash_months remove tablespace tbs2;

Flashback archive altered.

SHAIKDB>alter flashback archive flash_months set default;

Flashback archive altered.

SHAIKDB>create user flash identified by flash  default tablespace tbs1;

User created.

SHAIKDB>grant create session,create table to flash;

Grant succeeded.

SHAIKDB>grant flashback archive on flash_months to flash;

Grant succeeded.

SHAIKDB>grant flashback archive on flash_arch_default to flash;

Grant succeeded.

SHAIKDB>alter user flash quota unlimited on tbs1;

User altered.

SHAIKDB>alter user flash quota unlimited on tbs2;

User altered.

SHAIKDB>alter user flash quota unlimited on tbs3;

User altered.

Create table flash1 as user FLASH and store its historical data in the default Flashback Data Archive:

SHAIKDB>show user
USER is "FLASH"

SHAIKDB>create table flash1 (id number,name varchar2(10)) flashback archive;

Table created.

SHAIKDB>begin
    for i in 1..100 loop
    insert into flash1 values(i,'LOVE');
    commit;
    end loop;
    end;
    /

PL/SQL procedure successfully completed.

SHAIKDB>select * from flash1 where rownum <=10;

   ID NAME
---------- ----------
    1 LOVE
    2 LOVE
    3 LOVE
    4 LOVE
    5 LOVE
    6 LOVE
    7 LOVE
    8 LOVE
    9 LOVE
   10 LOVE

10 rows selected.


SHAIKDB>col ARCHIVE_TABLE_NAME for a20
SHAIKDB>col OWNER_NAME for a15
SHAIKDB>col table_name for a10
SHAIKDB>select * from dba_flashback_archive_tables;

TABLE_NAME OWNER_NAME       FLASHBACK_ARCHIVE_NA ARCHIVE_TABLE_NAME   STATUS
---------- --------------- -------------------- -------------------- --------
FLASH1       FLASH               FLASH_MONTHS     SYS_FBA_HIST_74868    ENABLED

SHAIKDB>show user
USER is "SYS"

SHAIKDB>create flashback archive flash_days tablespace tbs2 quota 1g retention 1 day;

Flashback archive created.


SHAIKDB>grant flashback archive on flash_days to flash;

Grant succeeded.



Create table flash2 and store the historical data in the flash_days Flashback Data Archive:

SHAIKDB>create table flash2 (id number,name2 varchar2(10)) flashback archive  flash_days;

Table created.


SHAIKDB>begin
    for i in 1..100 loop
    insert into flash2 values(i,'FLASH2');
    commit;
    end loop;
    end;
    /

PL/SQL procedure successfully completed.

SHAIKDB>select * from flash2 where rownum <=10;

   ID NAME2
---------- ----------
    1 FLASH2
    2 FLASH2
    3 FLASH2
    4 FLASH2
    5 FLASH2
    6 FLASH2
    7 FLASH2
    8 FLASH2
    9 FLASH2
   10 FLASH2

10 rows selected.


SHAIKDB>select * from dba_flashback_archive_tables;

TABLE_NAME OWNER_NAME       FLASHBACK_ARCHIVE_NA ARCHIVE_TABLE_NAME   STATUS
---------- --------------- -------------------- -------------------- --------
FLASH1       FLASH       FLASH_MONTHS     SYS_FBA_HIST_74868   ENABLED
FLASH2       FLASH       FLASH_DAYS       SYS_FBA_HIST_74878   ENABLED




SHAIKDB>alter table flash1 no flashback archive;
alter table flash1 no flashback archive
*
ERROR at line 1:
ORA-55620: No privilege to use Flashback Archive    
SHAIKDB>show user
USER is "SYS"

Connect as a user that has the FLASHBACK ARCHIVE ADMINISTER system privilege (such as SYS), or grant that privilege to another user, to manage the flashback archives. As SYS, schema-qualify the table name:

SHAIKDB>alter table flash.flash1 no flashback archive;

Table altered.


SHAIKDB>select * from dba_flashback_archive_tables;

TABLE_NAME OWNER_NAME       FLASHBACK_ARCHIVE_NA ARCHIVE_TABLE_NAME   STATUS
---------- --------------- -------------------- -------------------- --------
FLASH2       FLASH       FLASH_DAYS       SYS_FBA_HIST_74878   ENABLED

SHAIKDB>alter table flash1 flashback archive flash_days;

Table altered.


SHAIKDB>select * from dba_flashback_archive_tables;

TABLE_NAME OWNER_NAME       FLASHBACK_ARCHIVE_NA ARCHIVE_TABLE_NAME   STATUS
---------- --------------- -------------------- -------------------- --------
FLASH1       FLASH       FLASH_DAYS       SYS_FBA_HIST_74868   ENABLED
FLASH2       FLASH       FLASH_DAYS       SYS_FBA_HIST_74878   ENABLED


SHAIKDB>alter flashback archive flash_days modify retention 7 day;

Flashback archive altered.

19:20:19 SHAIKDB>alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss';

Session altered.

19:20:40 SHAIKDB>select sysdate from dual;

SYSDATE
-------------------
09/29/2015 19:20:47

SHAIKDB>insert into flash1 select * from flash2;

200 rows created.

SHAIKDB>commit;

Commit complete.

SHAIKDB>insert into flash2 select * from flash1;

300 rows created.

SHAIKDB>commit;

Commit complete.

19:20:47 SHAIKDB>select count(*) from flash1;

 COUNT(*)
----------
      300

19:29:54 SHAIKDB>select count(*) from flash1 as of timestamp to_timestamp('09-29-2015 10:00:00','mm-dd-yyyy hh24:mi:ss');

 COUNT(*)
----------
      100

19:32:11 SHAIKDB>select count(*) from flash1 as of timestamp to_timestamp('09-29-2015 19:30:00','mm-dd-yyyy hh24:mi:ss');

 COUNT(*)
----------
      300


19:47:18 SHAIKDB>select distinct name from flash1;

NAME
----------
FLASH2
LOVE

19:47:25 SHAIKDB>delete flash1 where name='LOVE';

200 rows deleted.

19:47:45 SHAIKDB>commit;

Commit complete.

19:47:48 SHAIKDB>select count(*) from flash1;

 COUNT(*)
----------
      100

19:47:54 SHAIKDB>select count(*) from flash1 as of timestamp ( systimestamp - interval '5' minute);

 COUNT(*)
----------
      300

19:48:44 SHAIKDB>select count(*) from flash1 as of timestamp ( systimestamp - interval '1' hour);

 COUNT(*)
----------
      100


Let's get the data back:

19:48:55 SHAIKDB>select count(*) from flash1 as of timestamp ( systimestamp - interval '10' minute) where name='LOVE';

 COUNT(*)
----------
      200

19:50:44 SHAIKDB>insert into flash1 select * from flash1 as of timestamp(systimestamp - interval '10' minute) where name='LOVE';

200 rows created.

19:51:21 SHAIKDB>commit;

Commit complete.

19:51:23 SHAIKDB>select count(*) from flash1;

 COUNT(*)
----------
      300
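Beyond AS OF queries, the archived history can be inspected row by row with a Flashback Version Query. The following is a sketch against the flash1 table from this demo; the one-hour window is illustrative:

```sql
-- Sketch: review the change history of a row captured by the archive.
-- The VERSIONS pseudocolumns show when each row version existed and
-- which operation (I/U/D) created it.
SELECT versions_starttime,
       versions_endtime,
       versions_operation,
       id,
       name
FROM   flash1
       VERSIONS BETWEEN TIMESTAMP
         (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
WHERE  id = 1;
```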





If you must use unsupported DDL statements on a table enabled for Flashback Data Archive, use the DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA procedure to disassociate the base table from its Flashback Data Archive. To reassociate the Flashback Data Archive with the base table afterward, use the DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA procedure.
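A sketch of that disassociate/reassociate sequence for the FLASH.FLASH1 table used in this demo:

```sql
-- Sketch: temporarily detach FLASH1 from its Flashback Data Archive
-- so an otherwise unsupported DDL statement can be run, then reattach it.
BEGIN
  DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA(
    owner_name => 'FLASH',
    table_name => 'FLASH1');
END;
/

-- ... run the unsupported DDL here ...

BEGIN
  DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA(
    owner_name => 'FLASH',
    table_name => 'FLASH1');
END;
/
```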



Views for monitoring Flashback Data Archive:
  • *_FLASHBACK_ARCHIVE: Displays information about Flashback Data Archives.
  • *_FLASHBACK_ARCHIVE_TS: Displays tablespaces of Flashback Data Archives.
  • *_FLASHBACK_ARCHIVE_TABLES: Displays information about tables that are enabled for Flashback Data Archive.



Documentation:
Oracle® Database Advanced Application Developer's Guide 11g Release 2 (11.2) → 12 Using Oracle Flashback Technology → Using Flashback Data Archive (Oracle Total Recall)

How to Add a user in Grid control

Login as a super user sysman/sys/system





 Click on Setup

 Click on Administrators












Fill in the details for the new user --> RESMGR







Administer Resource Manager


Oracle Database Resource Manager (the Resource Manager) enables you to optimize resource allocation among the many concurrent database sessions.


  • Resource consumer group

A group of sessions that are grouped together based on resource requirements. The Resource Manager allocates resources to resource consumer groups, not to individual sessions.

  • Resource plan

A container for directives that specify how resources are allocated to resource consumer groups. You specify how the database allocates resources by activating a specific resource plan.

  • Resource plan directive

Associates a resource consumer group with a particular plan and specifies how resources are to be allocated to that resource consumer group.

Resource Consumer Groups

A resource consumer group (consumer group) is a collection of user sessions that are grouped together based on their processing needs. When a session is created, it is automatically mapped to a consumer group based on mapping rules that you set up. As a database administrator (DBA), you can manually switch a session to a different consumer group. Similarly, an application can run a PL/SQL package procedure that switches its session to a particular consumer group.

Because the Resource Manager allocates resources (such as CPU) only to consumer groups, when a session becomes a member of a consumer group, its resource allocation is then determined by the allocation for the consumer group. By default, each session in a consumer group shares the resources allocated to that group with other sessions in the group in a round robin fashion.

There are three special consumer groups that are always present in the data dictionary. They cannot be modified or deleted:
  • SYS_GROUP: The initial consumer group for all sessions created by user accounts SYS or SYSTEM. This initial consumer group can be overridden by session-to-consumer group mapping rules.
  • DEFAULT_CONSUMER_GROUP: The initial consumer group for all sessions started by user accounts other than SYS and SYSTEM. This initial consumer group can be overridden by session-to-consumer group mapping rules. DEFAULT_CONSUMER_GROUP cannot be named in a resource plan directive.
  • OTHER_GROUPS: Applies collectively to all sessions that belong to a consumer group that is not part of the currently active plan, including sessions that belong to DEFAULT_CONSUMER_GROUP. OTHER_GROUPS must have a resource plan directive specified in every plan. It cannot be explicitly assigned to sessions through mapping rules.
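The session-to-consumer group mapping rules mentioned above are managed with DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING. A minimal sketch, assuming a consumer group named LOW already exists, that maps all sessions of user TEST1 to it:

```sql
-- Sketch: map sessions to a consumer group by Oracle user name.
-- Mapping changes, like plan changes, go through a pending area.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'TEST1',
    consumer_group => 'LOW');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```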



DBMS_RESOURCE_MANAGER_PRIVS procedures:
  • GRANT_SYSTEM_PRIVILEGE: Grants the ADMINISTER_RESOURCE_MANAGER system privilege to a user or role.
  • REVOKE_SYSTEM_PRIVILEGE: Revokes the ADMINISTER_RESOURCE_MANAGER system privilege from a user or role.



The following PL/SQL block grants the administrative privilege to user RESMGR, but does not grant RESMGR the ADMIN option. Therefore, RESMGR can execute all of the procedures in the DBMS_RESOURCE_MANAGER package, but cannot use the GRANT_SYSTEM_PRIVILEGE procedure to grant the administrative privilege to others.

BEGIN
 DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE(
  GRANTEE_NAME   => 'RESMGR',
  PRIVILEGE_NAME => 'ADMINISTER_RESOURCE_MANAGER',
  ADMIN_OPTION   => FALSE);
END;
/


The following PL/SQL block creates a simple resource plan with two user-specified consumer groups:

SHAIKDB>begin
 2  dbms_resource_manager.create_simple_plan(simple_plan=>'MY_SIMPLE_PLAN',
 3  CONSUMER_GROUP1=>'MY_SIMPLE_GROUP1',GROUP1_PERCENT=>80,
 4  CONSUMER_GROUP2=>'MY_SIMPLE_GROUP2',GROUP2_PERCENT=>20);
 5  END;
 6  /

PL/SQL procedure successfully completed.

The resulting plan directives:

SHAIKDB>select plan,GROUP_OR_SUBPLAN,CPU_P1,CPU_P2,CPU_P3,mgmt_p1,mgmt_p2,mgmt_p3 from dba_rsrc_plan_directives where plan like '%SIMPLE%' order by 2;

PLAN                 GROUP_OR_SUBPLAN     CPU_P1  CPU_P2  CPU_P3  MGMT_P1  MGMT_P2  MGMT_P3
-------------------- -------------------- ------- ------- ------- -------- -------- --------
MY_SIMPLE_PLAN       MY_SIMPLE_GROUP1          0      80       0        0       80        0
MY_SIMPLE_PLAN       MY_SIMPLE_GROUP2          0      20       0        0       20        0
MY_SIMPLE_PLAN       OTHER_GROUPS              0       0     100        0        0      100
MY_SIMPLE_PLAN       SYS_GROUP               100       0       0      100        0        0


Consumer Group   Level 1   Level 2   Level 3
SYS_GROUP        100%      -         -
MYGROUP1         -         80%       -
MYGROUP2         -         20%       -
OTHER_GROUPS     -         -         100%

A complex resource plan is any resource plan that is not created with the DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN procedure.

For a more complex resource plan, you must create the plan, with its directives and consumer groups, in a staging area called the pending area, and then validate the plan before storing it in the data dictionary:

Step 1: Create a pending area.
Step 2: Create, modify, or delete consumer groups.
Step 3: Create the resource plan.
Step 4: Create resource plan directives.
Step 5: Validate the pending area.
Step 6: Submit the pending area.

Pending Area

The pending area is a staging area where you can create a new resource plan, update an existing plan, or delete a plan without affecting currently running applications. When you create a pending area, the database initializes it and then copies existing plans into the pending area so that they can be updated.

SHAIKDB>exec dbms_resource_manager.create_pending_area();

PL/SQL procedure successfully completed.

Creating a Resource Consumer Group
The following PL/SQL block creates a consumer group called CRITICAL with the default (ROUND-ROBIN) method of allocating resources to sessions in the group:


SHAIKDB>exec dbms_resource_manager.create_consumer_group('CRITICAL','Group for Critical Apps');

PL/SQL procedure successfully completed.

Available procedures in the DBMS_RESOURCE_MANAGER package:


  • CALIBRATE_IO: Calibrates the I/O capabilities of storage
  • CLEAR_PENDING_AREA: Clears the work area for the resource manager
  • CREATE_CATEGORY: Creates a new resource consumer group category
  • CREATE_CONSUMER_GROUP: Creates entries which define resource consumer groups
  • CREATE_PENDING_AREA: Creates a work area for changes to resource manager objects
  • CREATE_PLAN: Creates entries which define resource plans
  • CREATE_PLAN_DIRECTIVE: Creates resource plan directives
  • CREATE_SIMPLE_PLAN: Creates a single-level resource plan containing up to eight consumer groups in one step
  • DELETE_CATEGORY: Deletes an existing resource consumer group category
  • DELETE_CONSUMER_GROUP: Deletes entries which define resource consumer groups
  • DELETE_PLAN: Deletes the specified plan as well as all the plan directives it refers to
  • DELETE_PLAN_CASCADE: Deletes the specified plan as well as all its descendants (plan directives, subplans, consumer groups)
  • DELETE_PLAN_DIRECTIVE: Deletes resource plan directives
  • SET_CONSUMER_GROUP_MAPPING: Adds, deletes, or modifies entries for the login and run-time attribute mappings
  • SET_CONSUMER_GROUP_MAPPING_PRI: Creates the session attribute mapping priority list
  • SET_INITIAL_CONSUMER_GROUP: Assigns the initial resource consumer group for a user (Caution: deprecated subprogram)
  • SUBMIT_PENDING_AREA: Submits pending changes for the resource manager
  • SWITCH_CONSUMER_GROUP_FOR_SESS: Changes the resource consumer group of a specific session
  • SWITCH_CONSUMER_GROUP_FOR_USER: Changes the resource consumer group for all sessions with a given user name
  • SWITCH_PLAN: Sets the current resource manager plan
  • UPDATE_CATEGORY: Updates an existing resource consumer group category
  • UPDATE_CONSUMER_GROUP: Updates entries which define resource consumer groups
  • UPDATE_PLAN: Updates entries which define resource plans
  • UPDATE_PLAN_DIRECTIVE: Updates resource plan directives
  • VALIDATE_PENDING_AREA: Validates pending changes for the resource manager


Creating a Resource Plan

Parameters of the CREATE_PLAN procedure:
  • PLAN: Name to assign to the plan.
  • COMMENT: Any descriptive comment.
  • CPU_MTH: Deprecated. Use MGMT_MTH.
  • ACTIVE_SESS_POOL_MTH: Active session pool resource allocation method. ACTIVE_SESS_POOL_ABSOLUTE is the default and only method available.
  • PARALLEL_DEGREE_LIMIT_MTH: Resource allocation method for specifying a limit on the degree of parallelism of any operation. PARALLEL_DEGREE_LIMIT_ABSOLUTE is the default and only method available.
  • QUEUEING_MTH: Queuing resource allocation method. Controls the order in which queued inactive sessions are removed from the queue and added to the active session pool. FIFO_TIMEOUT is the default and only method available.
  • MGMT_MTH: Resource allocation method for specifying how much CPU each consumer group or subplan gets. 'EMPHASIS', the default method, is for single-level or multilevel plans that use percentages to specify how CPU is distributed among consumer groups. 'RATIO' is for single-level plans that use ratios to specify how CPU is distributed.
  • SUB_PLAN: If TRUE, the plan cannot be used as the top plan; it can be used as a subplan only. Default is FALSE.
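As a sketch of the RATIO method described above, the following creates a single-level plan where CPU is shared 10:1. The RATIO_PLAN name is illustrative, and it assumes a consumer group named CRITICAL already exists; every plan must also include a directive for OTHER_GROUPS:

```sql
-- Sketch: a single-level plan that distributes CPU by ratio
-- instead of percentages. MGMT_P1 values act as weights here.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan     => 'RATIO_PLAN',
    comment  => 'CPU shared by ratio',
    mgmt_mth => 'RATIO');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'RATIO_PLAN', group_or_subplan => 'CRITICAL',
    comment => 'weight 10', mgmt_p1 => 10);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'RATIO_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'weight 1', mgmt_p1 => 1);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```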

The following are the parameters of the CREATE_PLAN_DIRECTIVE procedure:

  • PLAN: Name of the resource plan to which the directive belongs.
  • GROUP_OR_SUBPLAN: Name of the consumer group or subplan to which to allocate resources.
  • COMMENT: Any comment.
  • CPU_P1 through CPU_P8: Deprecated. Use MGMT_P1 through MGMT_P8.
  • ACTIVE_SESS_POOL_P1: Specifies the maximum number of concurrently active sessions for a consumer group. Other sessions await execution in an inactive session queue. Default is UNLIMITED.
  • QUEUEING_P1: Specifies time (in seconds) after which a session in an inactive session queue (waiting for execution) times out and the call is aborted. Default is UNLIMITED.
  • PARALLEL_DEGREE_LIMIT_P1: Specifies a limit on the degree of parallelism for any operation. Default is UNLIMITED.
  • SWITCH_GROUP: Specifies the consumer group to which a session is switched if switch criteria are met. If the group name is 'CANCEL_SQL', then the current call is canceled when switch criteria are met. If the group name is 'KILL_SESSION', then the session is killed when switch criteria are met. Default is NULL. If the group name is 'CANCEL_SQL', the SWITCH_FOR_CALL parameter is always set to TRUE, overriding the user-specified setting.
  • SWITCH_TIME: Specifies the time (in CPU seconds) that a call can execute before an action is taken. Default is UNLIMITED. The action is specified by SWITCH_GROUP.
  • SWITCH_ESTIMATE: If TRUE, the database estimates the execution time of each call, and if estimated execution time exceeds SWITCH_TIME, the session is switched to the SWITCH_GROUP before beginning the call. Default is FALSE. The execution time estimate is obtained from the optimizer; its accuracy depends on many factors, especially the quality of the optimizer statistics. In general, you should expect statistics to be no more accurate than ± 10 minutes.
  • MAX_EST_EXEC_TIME: Specifies the maximum execution time (in CPU seconds) allowed for a call. If the optimizer estimates that a call will take longer than MAX_EST_EXEC_TIME, the call is not allowed to proceed and ORA-07455 is issued. If the optimizer does not provide an estimate, this directive has no effect. Default is UNLIMITED.
  • UNDO_POOL: Sets a maximum in kilobytes (K) on the total amount of undo for uncommitted transactions that can be generated by a consumer group. Default is UNLIMITED.
  • MAX_IDLE_TIME: Indicates the maximum session idle time, in seconds. Default is NULL, which implies unlimited.
  • MAX_IDLE_BLOCKER_TIME: Indicates the maximum session idle time of a blocking session, in seconds. Default is NULL, which implies unlimited.
  • SWITCH_TIME_IN_CALL: Deprecated. Use SWITCH_FOR_CALL.
  • MGMT_P1: For a plan with the MGMT_MTH parameter set to EMPHASIS, specifies the CPU percentage to allocate at the first level. For MGMT_MTH set to RATIO, specifies the weight of CPU usage. Default is NULL for all MGMT_Pn parameters.
  • MGMT_P2 through MGMT_P8: For EMPHASIS, specify the CPU percentage to allocate at the second through eighth levels. Not applicable for RATIO.
  • SWITCH_IO_MEGABYTES: Specifies the number of megabytes of I/O that a session can transfer (read and write) before an action is taken. Default is UNLIMITED. The action is specified by SWITCH_GROUP.
  • SWITCH_IO_REQS: Specifies the number of I/O requests that a session can execute before an action is taken. Default is UNLIMITED. The action is specified by SWITCH_GROUP.
  • SWITCH_FOR_CALL: If TRUE, a session that was automatically switched to another consumer group (according to SWITCH_TIME, SWITCH_IO_MEGABYTES, or SWITCH_IO_REQS) is returned to its original consumer group when the top level call completes. Default is NULL.
  • MAX_UTILIZATION_LIMIT: Absolute maximum CPU utilization percentage permitted for the consumer group. This value overrides any level allocations for CPU (MGMT_P1 through MGMT_P8), and also imposes a limit on total CPU utilization when unused allocations are redistributed. You can specify this attribute and leave MGMT_P1 through MGMT_P8 NULL. You cannot specify this attribute for a subplan.
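As a sketch of the switch parameters above, the following directive cancels any call that consumes more than 60 CPU seconds. The REPORTS_PLAN plan and ADHOC group names are hypothetical and would have to exist already:

```sql
-- Sketch: automatically cancel runaway calls in a consumer group.
-- CANCEL_SQL is the special switch target that cancels the current call.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'REPORTS_PLAN',   -- hypothetical plan
    group_or_subplan => 'ADHOC',          -- hypothetical group
    comment          => 'Cancel calls over 60 CPU seconds',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```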


Validating the Pending Area:
The following PL/SQL block validates the pending area.
BEGIN
 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
END;
/


Submitting the Pending Area:
The following PL/SQL block submits the pending area:
BEGIN
 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
Clearing the Pending Area
There is also a procedure for clearing the pending area at any time. This PL/SQL block causes all of your changes to be cleared from the pending area and deactivates the pending area:
BEGIN
 DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
END;
/

Switching a Single Session

The SWITCH_CONSUMER_GROUP_FOR_SESS procedure causes the specified session to immediately be moved into the specified resource consumer group. In effect, this procedure can raise or lower priority of the session.
The following PL/SQL block switches a specific session to a new consumer group. The session identifier (SID) is 17, the session serial number (SERIAL#) is 12345, and the new consumer group is the HIGH_PRIORITY consumer group.
BEGIN
 DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS ('17', '12345',
  'HIGH_PRIORITY');
END;
/

Switching All Sessions for a User

The SWITCH_CONSUMER_GROUP_FOR_USER procedure changes the resource consumer group for all sessions pertaining to the specified user name. The following PL/SQL block switches all sessions that belong to user SCOTT to the LOW_GROUP consumer group:
BEGIN
 DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER ('SCOTT',
   'LOW_GROUP');
END;
/
Disabling the Resource Manager
To disable the Resource Manager, issue the following SQL statement:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';


The following statement in a text initialization parameter file activates the Resource Manager upon database startup and sets the top plan as mydb_plan.
RESOURCE_MANAGER_PLAN = mydb_plan
You can also activate or deactivate the Resource Manager, or change the current top plan, using the DBMS_RESOURCE_MANAGER.SWITCH_PLAN package procedure or the ALTER SYSTEM statement.
The following SQL statement sets the top plan to mydb_plan, and activates the Resource Manager if it is not already active:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'mydb_plan';
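The same activation can be done with the package procedure; a minimal sketch:

```sql
-- Sketch: activate mydb_plan via DBMS_RESOURCE_MANAGER.SWITCH_PLAN,
-- equivalent to the ALTER SYSTEM statement above.
BEGIN
  DBMS_RESOURCE_MANAGER.SWITCH_PLAN(plan_name => 'mydb_plan');
END;
/
```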
 
Deleting a Plan
The DELETE_PLAN procedure deletes the specified plan as well as all the plan directives associated with it. The pending area must be created first, and then submitted after the plan is deleted.
The following PL/SQL block deletes the SIMPLE plan and its directives.
BEGIN
 DBMS_RESOURCE_MANAGER.DELETE_PLAN(PLAN => 'SIMPLE');
END;
/

Updating a Resource Plan Directive

Use the UPDATE_PLAN_DIRECTIVE procedure to update plan directives. The pending area must be created first, and then submitted after the resource plan directive is updated. If you do not specify an argument for the UPDATE_PLAN_DIRECTIVE procedure, its corresponding parameter in the directive remains unchanged.
The following example adds a comment to a directive:
BEGIN
 DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        PLAN             => 'SIMPLE_PLAN1',
        GROUP_OR_SUBPLAN => 'MYGROUP1',
        NEW_COMMENT      => 'Higher priority'
       );
 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
To clear (nullify) a comment, pass a null string (''). To clear (zero or nullify) any numeric directive parameter, set its new value to -1:
BEGIN
 DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        PLAN                  => 'SIMPLE_PLAN1',
        GROUP_OR_SUBPLAN      => 'MYGROUP1',
        NEW_MAX_EST_EXEC_TIME => -1
       );
 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Useful monitoring queries:

SELECT * FROM DBA_RSRC_CONSUMER_GROUP_PRIVS;
SELECT PLAN, STATUS, COMMENTS FROM DBA_RSRC_PLANS;
SELECT SID, SERIAL#, USERNAME, RESOURCE_CONSUMER_GROUP FROM V$SESSION;
SELECT NAME, IS_TOP_PLAN FROM V$RSRC_PLAN;


Resource Manager data dictionary and dynamic performance views:
  • DBA_RSRC_CONSUMER_GROUP_PRIVS / USER_RSRC_CONSUMER_GROUP_PRIVS: The DBA view lists all resource consumer groups and the users and roles to which they have been granted. The USER view lists all resource consumer groups granted to the current user.
  • DBA_RSRC_CONSUMER_GROUPS: Lists all resource consumer groups that exist in the database.
  • DBA_RSRC_MANAGER_SYSTEM_PRIVS / USER_RSRC_MANAGER_SYSTEM_PRIVS: The DBA view lists all users and roles that have been granted Resource Manager system privileges. The USER view lists all the users that are granted system privileges for the DBMS_RESOURCE_MANAGER package.
  • DBA_RSRC_PLAN_DIRECTIVES: Lists all resource plan directives that exist in the database.
  • DBA_RSRC_PLANS: Lists all resource plans that exist in the database.
  • DBA_RSRC_GROUP_MAPPINGS: Lists all of the various mapping pairs for all of the session attributes.
  • DBA_RSRC_MAPPING_PRIORITY: Lists the current mapping priority of each attribute.
  • DBA_HIST_RSRC_PLAN: Displays historical information about resource plan activation. This view contains AWR snapshots of V$RSRC_PLAN_HISTORY.
  • DBA_HIST_RSRC_CONSUMER_GROUP: Displays historical statistical information about consumer groups. This view contains AWR snapshots of V$RSRC_CONS_GROUP_HISTORY.
  • DBA_USERS / USER_USERS: The DBA view contains information about all users of the database, including the initial resource consumer group for each user. The USER view contains information about the current user, including the current user's initial resource consumer group.
  • V$ACTIVE_SESS_POOL_MTH: Displays all available active session pool resource allocation methods.
  • V$PARALLEL_DEGREE_LIMIT_MTH: Displays all available parallel degree limit resource allocation methods.
  • V$QUEUEING_MTH: Displays all available queuing resource allocation methods.
  • V$RSRC_CONS_GROUP_HISTORY: For each entry in the view V$RSRC_PLAN_HISTORY, contains an entry for each consumer group in the plan showing the cumulative statistics for the consumer group.
  • V$RSRC_CONSUMER_GROUP: Displays information about active resource consumer groups. This view can be used for tuning.
  • V$RSRC_CONSUMER_GROUP_CPU_MTH: Displays all available CPU resource allocation methods for resource consumer groups.
  • V$RSRCMGRMETRIC: Displays a history of resources consumed and cumulative CPU wait time (due to resource management) per consumer group for the past minute.
  • V$RSRCMGRMETRIC_HISTORY: Displays a history of resources consumed and cumulative CPU wait time (due to resource management) per consumer group for the past hour on a minute-by-minute basis. If a new resource plan is enabled, the history is cleared.
  • V$RSRC_PLAN: Displays the names of all currently active resource plans.
  • V$RSRC_PLAN_CPU_MTH: Displays all available CPU resource allocation methods for resource plans.
  • V$RSRC_PLAN_HISTORY: Shows when Resource Manager plans were enabled or disabled on the instance. It helps you understand how resources were shared among the consumer groups over time.
  • V$RSRC_SESSION_INFO: Displays Resource Manager statistics for each session and shows how the session has been affected by the Resource Manager. Can be used for tuning.
  • V$SESSION: Lists session information for each current session; specifically, lists the name of the resource consumer group of each current session.



Demo:

SHAIKDB>create user resmgr identified by resmgr default tablespace tbs1 temporary tablespace temp;

User created.

SHAIKDB>grant create session,create table,select any table,create procedure,create trigger to resmgr;

Grant succeeded.

SHAIKDB>SELECT GRANTEE,PRIVILEGE FROM DBA_RSRC_MANAGER_SYSTEM_PRIVS;

GRANTEE               PRIVILEGE
------------------------------ ----------------------------------------
SYS                  ADMINISTER RESOURCE MANAGER
DBA                  ADMINISTER RESOURCE MANAGER
EXP_FULL_DATABASE           ADMINISTER RESOURCE MANAGER
IMP_FULL_DATABASE           ADMINISTER RESOURCE MANAGER
APPQOSSYS              ADMINISTER RESOURCE MANAGER



SHAIKDB>BEGIN DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE('RESMGR','ADMINISTER_RESOURCE_MANAGER',FALSE);
  END;
 /

PL/SQL procedure successfully completed.


USERNAME   PRIVILEGE             ADM
---------- ------------------------------ ---
RESMGR       ADMINISTER RESOURCE MANAGER      NO
RESMGR       CREATE TRIGGER         NO
RESMGR       SELECT ANY TABLE         NO
RESMGR       CREATE PROCEDURE         NO
RESMGR       CREATE SESSION         NO
RESMGR       CREATE TABLE          NO

6 rows selected.

Granting the ADMINISTER RESOURCE MANAGER privilege using Grid Control:
















Delete the plans:


SHAIKDB>EXEC DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP('CRITICAL');

PL/SQL procedure successfully completed.


If the pending area is not active, DELETE_PLAN fails:


SHAIKDB>EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN('MY_SIMPLE_PLAN');
BEGIN DBMS_RESOURCE_MANAGER.DELETE_PLAN('MY_SIMPLE_PLAN'); END;

*
ERROR at line 1:
ORA-29371: pending area is not active
ORA-06512: at "SYS.DBMS_RMIN", line 105
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 180
ORA-06512: at line 1

Create the pending Area:


SHAIKDB>BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
    END;
    /

PL/SQL procedure successfully completed.

Delete the PLAN:

SHAIKDB>select plan,GROUP_OR_SUBPLAN,CPU_P1,CPU_P2,CPU_P3,mgmt_p1,mgmt_p2,mgmt_p3 from dba_rsrc_plan_directives where plan like '%SIMPLE%' order by 2;


PLAN                  GROUP_OR_SUBPLAN          CPU_P1     CPU_P2    CPU_P3      MGMT_P1    MGMT_P2    MGMT_P3
------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ----------
MY_SIMPLE_PLAN              MY_SIMPLE_GROUP1               0     80         0       0      80          0
MY_SIMPLE_PLAN              MY_SIMPLE_GROUP2               0     20         0       0      20          0
MY_SIMPLE_PLAN              OTHER_GROUPS                  0      0       100       0       0        100
MY_SIMPLE_PLAN              SYS_GROUP                100      0         0          100       0          0

SHAIKDB>exec   DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SHAIKDB>EXEC DBMS_RESOURCE_MANAGER.DELETE_PLAN('MY_SIMPLE_PLAN');

PL/SQL procedure successfully completed.

SHAIKDB>EXEC   DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SHAIKDB>show errors                                        
No errors.
SHAIKDB>exec dbms_resource_manager.submit_pending_Area();

PL/SQL procedure successfully completed.

SHAIKDB>select plan,GROUP_OR_SUBPLAN,CPU_P1,CPU_P2,CPU_P3,mgmt_p1,mgmt_p2,mgmt_p3 from dba_rsrc_plan_directives where plan like '%SIMPLE%' order by 2;

no rows selected

SHAIKDB>



DEMO-2:

  • Create a pending area
  • Create consumer groups CRITICAL, MEDIUM, LOW
  • Create the plan CRITICAL_PLAN
  • Create resource plan directives
  • Validate the pending area
  • Submit the pending area





SHAIKDB>select plan,GROUP_OR_SUBPLAN,CPU_P1,CPU_P2,CPU_P3,mgmt_p1,mgmt_p2,mgmt_p3 from dba_rsrc_plan_directives where plan='CRITICAL_PLAN' order by 2;

no rows selected

SHAIKDB>exec dbms_resource_manager.create_pending_area();

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_consumer_group('CRITICAL','Group for Critical Apps');

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_plan('CRITICAL_PLAN','Plan for critical queries');

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_plan_directive(PLAN=>'CRITICAL_PLAN',GROUP_OR_SUBPLAN=>'CRITICAL',COMMENT=>'Allocate 60% CPU',MGMT_P1=>60);

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_consumer_group('MEDIUM','Medium critical Apps');

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_consumer_group('LOW','Low critical Apps');

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_plan_directive(PLAN=>'CRITICAL_PLAN',GROUP_OR_SUBPLAN=>'MEDIUM', COMMENT=>'ALLOCATE CPU 20%',MGMT_P1=>20,PARALLEL_DEGREE_LIMIT_P1=>2,ACTIVE_SESS_POOL_P1=>2);

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.create_plan_directive(PLAN=>'CRITICAL_PLAN',GROUP_OR_SUBPLAN=>'LOW', COMMENT=>'Allocate 10% CPU',MGMT_P1=>10,PARALLEL_DEGREE_LIMIT_P1=>1,ACTIVE_SESS_POOL_P1=>1);

PL/SQL procedure successfully completed.




SHAIKDB>exec dbms_resource_manager.create_plan_directive(PLAN=>'CRITICAL_PLAN',GROUP_OR_SUBPLAN=>'OTHER_GROUPS',COMMENT=>'ALLOCATE CPU 10% FOR OTHER GROUPS',MGMT_P1=>10);

PL/SQL procedure successfully completed.

SHAIKDB>EXEC DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SHAIKDB>SHOW ERRORS
No errors.
SHAIKDB>EXEC DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA   

PL/SQL procedure successfully completed.

SHAIKDB>select plan,GROUP_OR_SUBPLAN,CPU_P1,CPU_P2,CPU_P3,mgmt_p1,mgmt_p2,mgmt_p3 from dba_rsrc_plan_directives where plan='CRITICAL_PLAN' order by 3 desc;

PLAN                  GROUP_OR_SUBPLAN          CPU_P1     CPU_P2    CPU_P3      MGMT_P1    MGMT_P2    MGMT_P3
------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ----------
CRITICAL_PLAN              CRITICAL                  60      0         0           60       0          0
CRITICAL_PLAN              MEDIUM               20      0         0           20       0          0
CRITICAL_PLAN              LOW                     10      0         0           10       0          0
CRITICAL_PLAN              OTHER_GROUPS           10      0         0           10       0          0


Assign consumer groups to USERS:

SHAIKDB>create user test1 identified by test1 default tablespace tbs1 temporary tablespace temp;

User created.

SHAIKDB>grant create session to test1;

Grant succeeded.

SHAIKDB>grant select any table to test1;

Grant succeeded.

SHAIKDB>select username,account_status,INITIAL_RSRC_CONSUMER_GROUP from dba_users where username='TEST1';

USERNAME              ACCOUNT_STATUS           INITIAL_RSRC_CONSUMER_GROUP
------------------------------ -------------------------------- ------------------------------
TEST1                  OPEN               DEFAULT_CONSUMER_GROUP


SHAIKDB>exec dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name=>'TEST1',consumer_group=>'LOW',grant_option=>TRUE);

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_resource_manager.set_initial_consumer_group(user=>'TEST1',consumer_group=>'LOW');

PL/SQL procedure successfully completed.


SHAIKDB>SELECT * FROM DBA_RSRC_CONSUMER_GROUP_PRIVS;

GRANTEE               GRANTED_GROUP             GRA INI
------------------------------ ------------------------------ --- ---
PUBLIC                  DEFAULT_CONSUMER_GROUP          YES YES
SYSTEM                  SYS_GROUP             NO  YES
TEST1                  LOW                 YES YES
PUBLIC                  LOW_GROUP             NO  NO

SHAIKDB>select username,account_status,INITIAL_RSRC_CONSUMER_GROUP from dba_users where username='TEST1';

USERNAME              ACCOUNT_STATUS           INITIAL_RSRC_CONSUMER_GROUP
------------------------------ -------------------------------- ------------------------------
TEST1                  OPEN               LOW

SHAIKDB>SELECT SID,SERIAL#,USERNAME,RESOURCE_CONSUMER_GROUP FROM V$SESSION where username='TEST1';

no rows selected

SHAIKDB>/

      SID    SERIAL# USERNAME                RESOURCE_CONSUMER_GROUP
---------- ---------- ------------------------------ --------------------------------
   50     2546 TEST1                OTHER_GROUPS

SHAIKDB>/


SWITCHING CONSUMER_GROUPS:


SHAIKDB>exec dbms_resource_manager.switch_consumer_group_for_sess('50','2546','MEDIUM');

PL/SQL procedure successfully completed.

SHAIKDB>SELECT SID,SERIAL#,USERNAME,RESOURCE_CONSUMER_GROUP FROM V$SESSION where username='TEST1';

      SID    SERIAL# USERNAME                RESOURCE_CONSUMER_GROUP
---------- ---------- ------------------------------ --------------------------------
   50     2546 TEST1                MEDIUM


SHAIKDB>exec dbms_resource_manager.switch_consumer_group_for_user('TEST1','CRITICAL');

PL/SQL procedure successfully completed.

SHAIKDB>SELECT SID,SERIAL#,USERNAME,RESOURCE_CONSUMER_GROUP FROM V$SESSION where username='TEST1';

      SID    SERIAL# USERNAME                RESOURCE_CONSUMER_GROUP
---------- ---------- ------------------------------ --------------------------------
   50     2546 TEST1                CRITICAL

Revoking Switch Privileges

SHAIKDB>exec dbms_resource_manager_privs.revoke_switch_consumer_group(revokee_name=>'TEST1',consumer_group=>'LOW');

PL/SQL procedure successfully completed.
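Granting and switching groups only takes effect once a plan is active. The dbconsole steps in the next article activate CRITICAL_PLAN; the SQL equivalent is a one-liner (sketch, requires ALTER SYSTEM privilege):

```sql
-- Activate the plan instance-wide.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'CRITICAL_PLAN';

-- Deactivate Resource Manager by clearing the parameter.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';
```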

 Documentation:
Oracle® Database PL/SQL Packages and Types Reference --> 116 DBMS_RESOURCE_MANAGER

Oracle® Database Administrator's Guide 11g Release 2 (11.2) -->
26 Managing Resource Allocation with Oracle Database Resource Manager


Administer Resource Manager using dbconsole:


[oracle@collabn1 ~]$ emctl status dbconsole

Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
http://collabn1.shaiksameer:5501/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0.2/SHAIKPROD/collabn1_SHAIKDB/sysman/log


Go to Server → Resource Manager


Delete a PLAN:

Click Plans:






Delete a Consumer Group:

Go to Server → Resource Manager → Consumer Group:

Delete consumer Group “MEDIUM”




Create Consumer Groups:



Create Groups “LOW” & “MEDIUM”


Create CRITICAL_PLAN & Add the Consumer Groups “CRITICAL”,”MEDIUM”,”LOW” to “CRITICAL_PLAN”

Go to Server → Resource Manager → Plans → Create




Modify CPU %:



Change Parallelism Settings:



Change Session details:



Click “OK”
Summary:

Modify the PLAN to different CPU level:



Assign the consumer groups to the users TEST1 & TEST2:

Before Assignments:
SHAIKDB>SELECT SID,SERIAL#,USERNAME,RESOURCE_CONSUMER_GROUP FROM V$SESSION where username in ('TEST1','TEST2');

no rows selected

SHAIKDB>/

      SID    SERIAL# USERNAME                RESOURCE_CONSUMER_GROUP
---------- ---------- ------------------------------ --------------------------------
   38     2646 TEST1                OTHER_GROUPS
   65     4887 TEST2                OTHER_GROUPS


Map consumer groups to Users:



Regroup/Prioritize the groups for Oracle OS User:




Add consumer group CRITICAL to TEST1:


Add consumer group MEDIUM to TEST2:


Activate the PLAN → CRITICAL_PLAN

 

SHAIKDB>SELECT SID,SERIAL#,USERNAME,RESOURCE_CONSUMER_GROUP FROM V$SESSION where username in ('TEST1','TEST2');

       SID    SERIAL# USERNAME                 RESOURCE_CONSUMER_GROUP
---------- ---------- ------------------------------ --------------------------------
    38     2646 TEST1                 CRITICAL
    65     4887 TEST2                 MEDIUM 


 Documentation:
Oracle® Database PL/SQL Packages and Types Reference --> 116 DBMS_RESOURCE_MANAGER

Oracle® Database Administrator's Guide 11g Release 2 (11.2) -->
26 Managing Resource Allocation with Oracle Database Resource Manager

Result Cache:


A result cache is an area of memory, either in the SGA or client application memory, that stores the result of a database query or query block for reuse. The cached rows are shared across statements and sessions unless they become stale.

Managing the Server Result Cache

The server result cache is a memory pool within the shared pool. This pool contains a SQL query result cache, which stores results of SQL queries, and a PL/SQL function result cache, which stores values returned by PL/SQL functions.

How the Server Result Cache Works

When a query executes, the database looks in the cache memory to determine whether the result exists in the cache. If the result exists, then the database retrieves the result from memory instead of executing the query. If the result is not cached, then the database executes the query, returns the result as output, and stores the result in the result cache.

When users execute queries and functions repeatedly, the database retrieves rows from the cache, decreasing response time. Cached results become invalid when data in dependent database objects is modified.

Server Result Cache Initialization Parameters

The following database initialization parameters control the server result cache:
  • RESULT_CACHE_MAX_SIZE
    • This parameter sets the memory allocated to the server result cache. The server result cache is enabled unless you set this parameter to 0, in which case the cache is disabled.
  • RESULT_CACHE_MAX_RESULT
    • This parameter sets the maximum amount of server result cache memory that can be used for a single result. The default is 5%, but you can specify any percentage value between 1 and 100. You can set this parameter at the system or session level.
  • RESULT_CACHE_REMOTE_EXPIRATION
    • This parameter specifies the expiration time for a result in the server result cache that depends on remote database objects. The default value is 0 minutes, which implies that results using remote objects should not be cached.
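A sketch of adjusting these parameters (the sizes and values below are illustrative, not recommendations):

```sql
-- Size the server result cache to 64 MB.
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 64M SCOPE = BOTH;

-- Allow a single result to use at most 10% of the cache, for this session only.
ALTER SESSION SET RESULT_CACHE_MAX_RESULT = 10;

-- Let results that depend on remote objects stay valid for 15 minutes.
ALTER SYSTEM SET RESULT_CACHE_REMOTE_EXPIRATION = 15;
```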




Managing Server Result Cache Memory with Initialization Parameters
By default, on database startup, Oracle Database allocates memory to the server result cache in the shared pool. The memory size allocated depends on the memory size of the shared pool and the memory management system. The database uses the following algorithm:
  • When using the MEMORY_TARGET initialization parameter to specify the memory allocation, Oracle Database allocates 0.25% of MEMORY_TARGET to the result cache.
  • When you set the size of the shared pool using the SGA_TARGET initialization parameter, Oracle Database allocates 0.50% of SGA_TARGET to the result cache.
  • If you specify the size of the shared pool using the SHARED_POOL_SIZE initialization parameter, then Oracle Database allocates 1% of the shared pool size to the result cache.
The size of the server result cache grows until reaching the maximum size. Query results larger than the available space in the cache are not cached. The database employs an LRU algorithm to age out cached results, but does not otherwise automatically release memory from the server result cache.

Purge RESULT CACHE:

You can use the DBMS_RESULT_CACHE.FLUSH procedure to purge memory.
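Beyond FLUSH, the DBMS_RESULT_CACHE package exposes a few other management calls (a sketch; see the 11.2 package reference for the full list):

```sql
-- Check whether the result cache is usable (e.g. returns ENABLED).
SELECT DBMS_RESULT_CACHE.STATUS() FROM DUAL;

-- Temporarily bypass the cache, then re-enable it.
EXEC DBMS_RESULT_CACHE.BYPASS(TRUE);
EXEC DBMS_RESULT_CACHE.BYPASS(FALSE);

-- Invalidate all cached results that depend on one object.
EXEC DBMS_RESULT_CACHE.INVALIDATE('HR', 'EMPLOYEES');

-- Purge all result cache memory.
EXEC DBMS_RESULT_CACHE.FLUSH;
```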


Oracle Database will not allocate more than 75% of the shared pool to the server result cache.



RESULT CACHE Report:

SHAIKDB>SET SERVEROUTPUT ON
SHAIKDB>EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT
R e s u l t   C a c h e   M e m o r y    R e p o r t
[Parameters]
Block Size        = 1K bytes
Maximum Cache Size  = 1760K bytes (1760 blocks)
Maximum Result Size = 88K bytes (88 blocks)
[Memory]
Total Memory = 10696 bytes [0.005% of the Shared Pool]
... Fixed Memory = 10696 bytes [0.005% of the Shared Pool]
... Dynamic Memory = 0 bytes [0.000% of the Shared Pool]

PL/SQL procedure successfully completed.



Client Result Cache Initialization Parameters




Initialization Parameter
Description
CLIENT_RESULT_CACHE_SIZE
Sets the maximum size of the client result cache for each client process. To enable the client result cache, set the size to 32768 bytes or greater. A lesser value, including the default of 0, disables the client result cache.
Note: If the CLIENT_RESULT_CACHE_SIZE setting disables the client cache, then a client node cannot enable it. If the CLIENT_RESULT_CACHE_SIZE setting enables the client cache, however, then a client node can override the setting. For example, a client node can disable client result caching or increase the size of its cache.
CLIENT_RESULT_CACHE_LAG
Specifies the amount of lag time for the client result cache. If the OCI application performs no database calls for a period of time, then the client cache lag setting forces the next statement execution call to check for validations.
If the OCI application accesses the database infrequently, then setting this parameter to a low value results in more round trips from the OCI client to the database to keep the client result cache synchronized with the database. The client cache lag is specified in milliseconds, with a default value of 3000 (3 seconds).
COMPATIBLE
Specifies the release with which Oracle must maintain compatibility. For the client result cache to be enabled, this parameter must be set to 11.0.0.0 or higher. For client caching on views, this parameter must be set to 11.2.0.0.0 or higher.



Result Cache Mode

The result cache mode is a database setting that determines which queries are eligible to store result sets in the client and server result caches.


Value
Default
Description
MANUAL
Yes
Query results can only be stored in the result cache by using a query hint or table annotation. This is the recommended value.
FORCE
No
All eligible results are stored in the result cache. If a query result is not in the cache, then the database executes the query and stores the result in the cache. Subsequent executions of the same statement, whether or not they include the result cache hint, retrieve the data from the cache.
Sessions use these results if possible. To exclude query results from the cache, you must use the /*+ NO_RESULT_CACHE */ query hint.
Note: FORCE mode is not recommended because the database and clients attempt to cache all queries, which can create significant performance and latching overhead.

You can use result cache hints at the application level to control caching behavior. The SQL result cache hints take precedence over the result cache mode and result cache table annotations.
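A sketch of the three levels of control, from table-level annotation down to per-query hints (the sales table is illustrative; hr.employees is from the demo schema):

```sql
-- Table annotation: results of queries on this table are cached by default.
ALTER TABLE sales RESULT_CACHE (MODE FORCE);

-- Per-query hint: cache this result even when the mode is MANUAL.
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   hr.employees
GROUP  BY department_id;

-- Per-query opt-out: never cache this result, even in FORCE mode.
SELECT /*+ NO_RESULT_CACHE */ COUNT(*) FROM sales;
```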

Demo:

SHAIKDB>show parameter result

NAME                    TYPE     VALUE
------------------------------------ ----------- ------------------------------
client_result_cache_lag          big integer 3000
client_result_cache_size         big integer 0
result_cache_max_result          integer     5
result_cache_max_size            big integer 1760K
result_cache_mode            string     MANUAL
result_cache_remote_expiration         integer     0

SHAIKDB>select name,type,status,object_no,row_count from v$result_cache_objects;

no rows selected



SHAIKDB>select count(*) from result2;

 COUNT(*)
----------
    227328

SHAIKDB>desc result2;
Name                              Null?    Type
----------------------------------------------------- -------- ------------------------------------
EMPLOYEE_ID                           NUMBER(6)
FIRST_NAME                           VARCHAR2(20)
LAST_NAME                          NOT NULL VARCHAR2(25)
EMAIL                              NOT NULL VARCHAR2(25)
PHONE_NUMBER                           VARCHAR2(20)
HIRE_DATE                          NOT NULL DATE
JOB_ID                           NOT NULL VARCHAR2(10)
SALARY                            NUMBER(8,2)
COMMISSION_PCT                        NUMBER(2,2)
MANAGER_ID                           NUMBER(6)
DEPARTMENT_ID                           NUMBER(4)
SSN                               NUMBER(9)

SHAIKDB>set timing on time on
22:08:07 SHAIKDB>

22:08:54 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2048

Elapsed: 00:00:00.02

22:09:41 SHAIKDB>set echo on
22:10:10 SHAIKDB>set autot on exp

22:10:22 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2048

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3939577969

--------------------------------------------------------------------------------------------------
| Id  | Operation        | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                |     1 |    13 |   651   (1)| 00:00:08 |
|   1 |  RESULT CACHE        | cqt78fxmq2rgrad2rsrnxyvtsc |     |     |          |      |
|   2 |   SORT AGGREGATE    |                |     1 |    13 |          |      |
|*  3 |    TABLE ACCESS FULL| RESULT2            |  2086 | 27118 |   651   (1)| 00:00:08 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  3 - filter("EMPLOYEE_ID"=100)

Result Cache Information (identified by operation id):
------------------------------------------------------

  1 - column-count=1; dependencies=(TEST2.RESULT2); attributes=(single-row); name="select /*+ resul
t_cache result2 */ count(*) from result2 where employee_id=100"


Note
-----
  - dynamic sampling used for this statement (level=2)


SHAIKDB>set linesize 600
SHAIKDB>col name for a50

SHAIKDB>select name,type,status,object_no,row_count from v$result_cache_objects;


NAME                           TYPE       STATUS     OBJECT_NO  ROW_COUNT
-------------------------------------------------- ---------- --------- ---------- ----------
TEST2.RESULT2                       Dependency Published      78641        0
select /*+ result_cache result2 */ count(*) from r Result     Published      0        1
esult2 where employee_id=100


22:10:26 SHAIKDB>set autot off
22:14:31 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2048

Elapsed: 00:00:00.00


SHAIKDB>alter system flush buffer_cache;

System altered.


22:14:39 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2048

Elapsed: 00:00:00.01


22:15:45 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2048

Elapsed: 00:00:00.00


22:15:45 SHAIKDB>insert into result2 select * from hr.employees where employee_id=100;

1 row created.

Elapsed: 00:00:00.02

22:17:06 SHAIKDB>commit;

Commit complete.

Elapsed: 00:00:00.00


22:17:09 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2049

Elapsed: 00:00:00.03


22:17:16 SHAIKDB>set autot on exp                                                             
22:17:34 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     2049

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3939577969

--------------------------------------------------------------------------------------------------
| Id  | Operation        | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                |     1 |    13 |   651   (1)| 00:00:08 |
|   1 |  RESULT CACHE        | cqt78fxmq2rgrad2rsrnxyvtsc |     |     |          |      |
|   2 |   SORT AGGREGATE    |                |     1 |    13 |          |      |
|*  3 |    TABLE ACCESS FULL| RESULT2            |  2086 | 27118 |   651   (1)| 00:00:08 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  3 - filter("EMPLOYEE_ID"=100)

Result Cache Information (identified by operation id):
------------------------------------------------------

  1 - column-count=1; dependencies=(TEST2.RESULT2); attributes=(single-row); name="select /*+ resul
t_cache result2 */ count(*) from result2 where employee_id=100"


Note
-----
  - dynamic sampling used for this statement (level=2)


22:26:41 SHAIKDB>delete result2 where employee_id=100 and rownum <=500;

500 rows deleted.

Elapsed: 00:00:00.03



SHAIKDB>select name,type,status,object_no,row_count from v$result_cache_objects;

NAME                           TYPE       STATUS     OBJECT_NO  ROW_COUNT
-------------------------------------------------- ---------- --------- ---------- ----------
TEST2.RESULT2                       Dependency Published      78641        0
select /*+ result_cache result2 */ count(*) from r Result     Published      0        1
esult2 where employee_id=100

select /*+ result_cache result2 */ count(*) from r Result     Invalid        0        1
esult2 where employee_id=100



22:28:34 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     1549

Elapsed: 00:00:00.05


SHAIKDB>set serveroutput on
SHAIKDB>exec dbms_result_cache.memory_report(detailed=>true);
R e s u l t   C a c h e   M e m o r y    R e p o r t
[Parameters]
Block Size        = 1K bytes
Maximum Cache Size  = 1760K bytes (1760 blocks)
Maximum Result Size = 88K bytes (88 blocks)
[Memory]
Total Memory = 174752 bytes [0.082% of the Shared Pool]
... Fixed Memory = 10696 bytes [0.005% of the Shared Pool]
....... Memory Mgr = 200 bytes
....... Bloom Fltr = 2K bytes
....... Cache Mgr  = 5552 bytes
....... State Objs = 2896 bytes
... Dynamic Memory = 164056 bytes [0.077% of the Shared Pool]
....... Overhead = 131288 bytes
........... Hash Table      = 64K bytes (4K buckets)
........... Chunk Ptrs      = 24K bytes (3K slots)
........... Chunk Maps      = 12K bytes
........... Miscellaneous = 28888 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 29 blocks
........... Used Memory = 3 blocks
............... Dependencies = 1 blocks (1 count)
............... Results = 2 blocks
................... SQL     = 1 blocks (1 count)
................... Invalid = 1 blocks (1 count)

PL/SQL procedure successfully completed.


SHAIKDB> exec dbms_result_cache.flush;

PL/SQL procedure successfully completed.

SHAIKDB>select name,type,status,object_no,row_count from v$result_cache_objects;

no rows selected



22:31:51 SHAIKDB>select /*+ result_cache result2 */ count(*) from result2 where employee_id=100;

 COUNT(*)
----------
     1549

Elapsed: 00:00:00.01
22:32:29 SHAIKDB>/

 COUNT(*)
----------
     1549

Elapsed: 00:00:00.00


Dictionary Views:

View/Table
Description
V$RESULT_CACHE_STATISTICS
Lists various server result cache settings and memory usage statistics.
V$RESULT_CACHE_MEMORY
Lists all the memory blocks in the server result cache and their corresponding statistics.
V$RESULT_CACHE_OBJECTS
Lists all the objects whose results are in the server result cache along with their attributes.
V$RESULT_CACHE_DEPENDENCY
Lists the dependency details between the results in the server cache and dependencies among these results.
CLIENT_RESULT_CACHE_STATS$
Stores cache settings and memory usage statistics for the client result caches obtained from the OCI client processes. This statistics table has entries for each client process that is using result caching. After the client processes terminate, the database removes their entries from this table. The client table lists information similar to V$RESULT_CACHE_STATISTICS.
DBA_TABLES, USER_TABLES, ALL_TABLES
Includes a RESULT_CACHE column that shows the result cache mode annotation for the table. If the table has not been annotated, then this column shows DEFAULT. This column applies to both server and client result caching.


Documentation:
Oracle® Database Performance Tuning Guide 11g Release 2 (11.2)--> 7.6 Managing the Server and Client Result Caches

Use multi column statistics


MultiColumn Statistics

When multiple columns from a single table are used together in the where clause of a query (multiple single column predicates), the relationship between the columns can strongly affect the combined selectivity for the column group.

By default, Oracle can create column groups for a table based on workload analysis, similar to how it determines histograms.
You can also create column groups manually by using the DBMS_STATS package. You can use this package to create a column group, get the name of a column group, or delete a column group from a table.

**** The optimizer uses multicolumn statistics only with equality predicates ****
Creating a Column Group
Use the create_extended_statistics function to create a column group. The create_extended_statistics function returns the system-generated name of the newly created column group

Parameter
Description
owner
Schema owner. NULL indicates current schema.
tab_name
Name of the table to which the column group is being added.
extension
Columns in the column group.


Getting a Column Group
Use the show_extended_stats_name function to obtain the name of the column group for a given set of columns

Parameter
Description
owner
Schema owner. NULL indicates current schema.
tab_name
Name of the table to which the column group belongs.
extension
Name of the column group.

Dropping a Column Group
Use the drop_extended_stats function to delete a column group from a table.


Parameter
Description
owner
Schema owner. NULL indicates current schema.
tab_name
Name of the table to which the column group belongs.
extension
Name of the column group to be deleted.



Gathering Statistics on Column Groups
The METHOD_OPT argument of the DBMS_STATS package enables you to gather statistics on column groups. If you set the value of this argument to FOR ALL COLUMNS SIZE AUTO, the optimizer gathers statistics on all the existing column groups. To collect statistics on a new column group, specify the group using FOR COLUMNS; the column group is created automatically as part of statistics gathering.
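A sketch of both variants, using the schema and column pair from the demo below:

```sql
-- Gather stats on all columns and all existing column groups.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('TEST1', 'EXT_COLUMNS',
      METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

-- Create a new column group and gather its statistics in one step.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('TEST1', 'EXT_COLUMNS',
      METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS (NUM_ROWS,DROPPED)');
END;
/
```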


The DBMS_STATS.SEED_COL_USAGE procedure identifies candidate columns for extended statistics. It records column usage while a representative workload runs against the database.

A second procedure, DBMS_STATS.CREATE_EXTENDED_STATS, can then be used to define the relationship between the columns that were identified.

DBMS_STATS.SEED_COL_USAGE can be executed a few times during peak load to capture all the columns that need extended statistics.
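A sketch of that workflow (the 600-second monitoring window is an illustrative value):

```sql
-- Record column usage for the next 10 minutes of workload.
EXEC DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 600);

-- ... run the representative workload here ...

-- Review what was captured for one table.
SELECT DBMS_STATS.REPORT_COL_USAGE('TEST1', 'EXT_COLUMNS') FROM DUAL;

-- Create extended statistics for the recorded column groups.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('TEST1', 'EXT_COLUMNS') FROM DUAL;
```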


DEMO:
====

SHAIKDB>create table ext_columns as select * from dba_tables;

Table created.

SHAIKDB>select count(*) from ext_columns;

 COUNT(*)
----------
     2795

SHAIKDB>select count(*),segment_created from dba_tables group by segment_created;

  COUNT(*) SEG
---------- ---
       826 NO
      1900 YES
        70 N/A



SHAIKDB>exec dbms_stats.gather_table_stats('test1','ext_columns');

PL/SQL procedure successfully completed.

SHAIKDB>exec dbms_stats.gather_index_stats('test1','ext_columns_idx');

PL/SQL procedure successfully completed.


SHAIKDB>set autot on exp
SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='YES';

 COUNT(*)
----------
    0


Execution Plan
----------------------------------------------------------
Plan hash value: 1088373390

------------------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |              |     1 |     6 |     4     (0)| 00:00:01 |
|   1 |  SORT AGGREGATE          |              |     1 |     6 |        |           |
|*  2 |   TABLE ACCESS BY INDEX ROWID| EXT_COLUMNS     |     1 |     6 |     4     (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN         | EXT_COLUMNS_IDX |     7 |       |     1     (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("DROPPED"='YES')
  3 - access("NUM_ROWS"=0)

SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='NO';

 COUNT(*)
----------
     1704


Execution Plan
----------------------------------------------------------
Plan hash value: 1088373390

------------------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |              |     1 |     6 |     4     (0)| 00:00:01 |
|   1 |  SORT AGGREGATE          |              |     1 |     6 |        |           |
|*  2 |   TABLE ACCESS BY INDEX ROWID| EXT_COLUMNS     |     7 |    42 |     4     (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN         | EXT_COLUMNS_IDX |     7 |       |     1     (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("DROPPED"='NO')
  3 - access("NUM_ROWS"=0)


In the plan above, the optimizer estimates only 7 rows for the index range scan (Id 3) and the table access (Id 2).

In reality, the selective count (1704 rows) is far greater than that.






SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='NO';

 COUNT(*)
----------
     1704


SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='YES';

 COUNT(*)
----------
    0

SHAIKDB>select count(*) from ext_columns where num_rows=0;

 COUNT(*)
----------
     1704


Above NUM_ROWS and DROPPED columns are related.



SHAIKDB>Select dbms_stats.create_extended_stats('TEST1','EXT_COLUMNS','(NUM_ROWS,DROPPED)') FROM DUAL;

DBMS_STATS.CREATE_EXTENDED_STATS('TEST1','EXT_COLUMNS','(NUM_ROWS,DROPPED)')
----------------------------------------------------------------------------------------------------
SYS_STUEXUYKWMBP9B#CB$NYP#P8I5



SHAIKDB>col EXTENSION_NAME for a30
SHAIKDB>col EXTENSION for a30
SHAIKDB>/

SHAIKDB>SELECT extension_name, extension FROM dba_stat_extensions WHERE table_name = 'EXT_COLUMNS';


EXTENSION_NAME              EXTENSION
------------------------------ ------------------------------
SYS_STUEXUYKWMBP9B#CB$NYP#P8I5 ("NUM_ROWS","DROPPED")


Immediately after the extension is created, the plan still shows the row estimate as 7, because statistics have not yet been gathered for the new column group.


SHAIKDB>set autot on exp
SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='NO';

 COUNT(*)
----------
     1704


Execution Plan
----------------------------------------------------------
Plan hash value: 1088373390

------------------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |              |     1 |     6 |     4     (0)| 00:00:01 |
|   1 |  SORT AGGREGATE          |              |     1 |     6 |        |           |
|*  2 |   TABLE ACCESS BY INDEX ROWID| EXT_COLUMNS     |     7 |    42 |     4     (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN         | EXT_COLUMNS_IDX |     7 |       |     1     (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("DROPPED"='NO')
  3 - access("NUM_ROWS"=0)


Now gather the statistics so that the new column group is populated.


SHAIKDB>exec dbms_stats.gather_table_stats('test1','ext_columns');

PL/SQL procedure successfully completed.

SHAIKDB>select count(*) from ext_columns where num_rows=0 and dropped='NO';

 COUNT(*)
----------
     1704


Execution Plan
----------------------------------------------------------
Plan hash value: 3344132040

----------------------------------------------------------------------------------
| Id  | Operation       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |     6 |    31   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |        |     1 |     6 |          |      |
|*  2 |   TABLE ACCESS FULL| EXT_COLUMNS |  1660 |  9960 |    31   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - filter("NUM_ROWS"=0 AND "DROPPED"='NO')


The estimate (1660 rows) now closely reflects the actual count of 1704, and the optimizer switches to a full table scan.





Documentation:

Oracle® Database Performance Tuning Guide
11g Release 2 (11.2)
Part Number E16638-06
Chapter 13 Managing Optimizer Statistics
Section 13.3.1.5 Extended Statistics

Oracle® Database PL/SQL Packages and Types Reference
11g Release 2 (11.2)
Part Number E25788-04
Chapter 141 DBMS_STATS
Extended Statistics

How to install go on MAC


Download

Download it from → https://golang.org/dl/


Pick the OS.


Download the package



Open the downloaded package:


Click “Continue”

Select “all users” → Click “Continue”

Click “Install”



Enter “Password” → Click “Install Software”


Wait for the install to complete

Summary:

That’s it; you are done.




Package Details:

The package installs the Go distribution under  /usr/local/go.

shaiks@MAC$ls -lrt /usr/local/go/
total 168
drwxr-xr-x    7 root  wheel    238 Sep  8 21:31 pkg
drwxr-xr-x   15 root  wheel    510 Sep  8 21:31 misc
drwxr-xr-x    3 root  wheel    102 Sep  8 21:31 lib
-rw-r--r--    1 root  wheel   1150 Sep  8 21:31 favicon.ico
drwxr-xr-x   39 root  wheel   1326 Sep  8 21:31 doc
drwxr-xr-x    4 root  wheel    136 Sep  8 21:31 blog
drwxr-xr-x    5 root  wheel    170 Sep  8 21:31 bin
drwxr-xr-x   11 root  wheel    374 Sep  8 21:31 api
-rw-r--r--    1 root  wheel      7 Sep  8 21:31 VERSION
-rw-r--r--    1 root  wheel   1519 Sep  8 21:31 README.md
-rw-r--r--    1 root  wheel   1303 Sep  8 21:31 PATENTS
-rw-r--r--    1 root  wheel   1479 Sep  8 21:31 LICENSE
-rw-r--r--    1 root  wheel  28953 Sep  8 21:31 CONTRIBUTORS
-rw-r--r--    1 root  wheel   1107 Sep  8 21:31 CONTRIBUTING.md
-rw-r--r--    1 root  wheel  21146 Sep  8 21:31 AUTHORS
drwxr-xr-x  234 root  wheel   7956 Sep  8 21:32 test
drwxr-xr-x   63 root  wheel   2142 Sep  8 21:32 src
-rw-r--r--    1 root  wheel     26 Sep  8 21:32 robots.txt


The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.

shaiks@MAC$echo $PATH
/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/go/bin


Create Project Workspace:


shaiks@MAC$mkdir $HOME/mygo
          
shaiks@MAC$echo "export GOPATH=$HOME/mygo">> ~/.profile
shaiks@MAC$echo "export GOBIN=$GOPATH/bin">> ~/.profile

shaiks@MAC$cat .profile | grep GO
export GOPATH=/Users/shaiksameer/mygo
export GOBIN=$GOPATH/bin

Test go:


shaiks@MAC$pwd
/Users/shaiksameer/mygo

shaiks@MAC$mkdir -p src/hello

shaiks@MAC$cd src

shaiks@MAC$pwd
/Users/shaiksameer/mygo/src

Create a sample program
shaiks@MAC$vi /Users/shaiksameer/mygo/src/hello/hello.go
package main

import "fmt"

func main() {
   fmt.Printf("hello, world\n")
}


shaiks@MAC$cd ..

shaiks@MAC$go install src/hello/hello.go

This creates a bin directory under $GOPATH (the GOBIN location) containing an executable named “hello”.

Execute the little program from the bin dir:

shaiks@MAC$./bin/hello
hello, world

Or compile and run in a single command:

shaiks@MAC$go run src/hello/hello.go
hello, world


Gather statistics on a specific table without invalidating cursors:


no_invalidate
Does not invalidate the dependent cursors if set to TRUE. The procedure invalidates the dependent cursors immediately if set to FALSE. Use DBMS_STATS.AUTO_INVALIDATE to have Oracle decide when to invalidate dependent cursors; this is the default. The default can be changed using the SET_DATABASE_PREFS, SET_GLOBAL_PREFS, SET_SCHEMA_PREFS, and SET_TABLE_PREFS procedures.
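As an illustration, the invalidation behavior can also be made the default for a single table instead of being passed on every call. This is a sketch using the TEST1.EXT_COLUMNS table from this post; valid preference values are 'TRUE', 'FALSE', and DBMS_STATS.AUTO_INVALIDATE:

```sql
-- Sketch: make NO_INVALIDATE=TRUE the table-level default so that future
-- statistics gathers on this table do not invalidate dependent cursors.
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'TEST1',
    tabname => 'EXT_COLUMNS',
    pname   => 'NO_INVALIDATE',
    pvalue  => 'TRUE');
END;
/
```

After this, a plain DBMS_STATS.GATHER_TABLE_STATS('TEST1','EXT_COLUMNS') behaves as if NO_INVALIDATE=>TRUE had been supplied.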


SHAIKDB>select parsing_schema_name,sql_id,invalidations from v$sql where parsing_schema_name='TEST1';

PARSING_SCHEMA_NAME           SQL_ID         INVALIDATIONS
------------------------------ ------------- -------------
TEST1                  g4y6nw3tts7cc        0
TEST1                  cf19zu91tn7mj        0
TEST1                  dyk4dprp70d74        0
TEST1                  gpp46471wp37k        0
TEST1                  5qgz1p0cut7mx        0
TEST1                  g3f3cw3zy5aat        0
TEST1                  fkh6yuk3jpayv        0
TEST1                  c0j6cx9kzjf7g        0
TEST1                  g93x9gquu9t13        0
TEST1                  99qa3zyarxvms        0
TEST1                  g72kdvcacxvtf        0
TEST1                  8wfgaknkskb14        0
TEST1                  d6vwqbw6r2ffk        0
TEST1                  cw6vxf0kbz3v1        0
TEST1                  7hys3h7ysgf9m        0

15 rows selected.

SHAIKDB>select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSING_SCHEMA_NAME='TEST1' and upper(sql_text) like '%EXT_COLUMNS%';

PARSING_SC SQL_ID     SQL_TEXT                                     INVALIDATIONS
---------- ------------- -------------------------------------------------------------------------------- -------------
TEST1       0kmkdmr78n6rj select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and upper(sql_text) like 'EXT_COLUMNS'

TEST1       8bnh5pqv9t49q select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and sql_text like 'EXT_COLUMNS'

TEST1       fkh6yuk3jpayv select /*+ no_parallel_index(t, "EXT_COLUMNS_IDX") dbms_stats cursor_sharing_exa          0
           ct use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  no_expan
           d index(t,"EXT_COLUMNS_IDX") */ count(*) as nrw,count(distinct sys_op_lbid(78734
           ,'L',t.rowid)) as nlb,null as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as cl
           f from "TEST1"."EXT_COLUMNS" t where "NUM_ROWS" is not null

TEST1       g93x9gquu9t13 select min(minbkt),maxbkt,substrb(dump(min(val),16,0,32),1,120) minval,substrb(d          0
           ump(max(val),16,0,32),1,120) maxval,sum(rep) sumrep, sum(repsq) sumrepsq, max(re
           p) maxrep, count(*) bktndv, sum(case when rep=1 then 1 else 0 end) unqrep from (
           select val,min(bkt) minbkt, max(bkt) maxbkt, count(val) rep, count(val)*count(va
           l) repsq from (select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_
           sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pa
           d */"NUM_ROWS" val, ntile(254) over (order by "NUM_ROWS") bkt    from "TEST1"."EXT
           _COLUMNS" t  where "NUM_ROWS" is not null) group by val) group by maxbkt order b
           y maxbkt

TEST1       9tu5t6fm8f34k select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and upper(sql_text) like '%EXT_COLUMNS%'

TEST1       29188am21rjc4 select * from ext_columns where num_rows=0                             0

6 rows selected.



SHAIKDB>exec dbms_stats.gather_table_stats('TEST1','EXT_COLUMNS',NO_INVALIDATE=>TRUE);

PL/SQL procedure successfully completed.

SHAIKDB>select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSING_SCHEMA_NAME='TEST1' and upper(sql_text) like '%EXT_COLUMNS%';

PARSING_SC SQL_ID     SQL_TEXT                                     INVALIDATIONS
---------- ------------- -------------------------------------------------------------------------------- -------------
TEST1       0kmkdmr78n6rj select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and upper(sql_text) like 'EXT_COLUMNS'

TEST1       d78xb1kzg8h3u select * from ext_columns where segment_created='NO'                        0
TEST1       8bnh5pqv9t49q select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and sql_text like 'EXT_COLUMNS'

TEST1       fvxzppp6dda2d BEGIN dbms_stats.gather_table_stats('TEST1','EXT_COLUMNS',NO_INVALIDATE=>TRUE);          0
           END;

TEST1       fkh6yuk3jpayv select /*+ no_parallel_index(t, "EXT_COLUMNS_IDX") dbms_stats cursor_sharing_exa          0
           ct use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  no_expan
           d index(t,"EXT_COLUMNS_IDX") */ count(*) as nrw,count(distinct sys_op_lbid(78734
           ,'L',t.rowid)) as nlb,null as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as cl
           f from "TEST1"."EXT_COLUMNS" t where "NUM_ROWS" is not null

TEST1       fa216vary5rhd select substrb(dump(val,16,0,32),1,120) ep, cnt from (select /*+ no_parallel(t)          0
           no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_
           sampling(0) no_monitoring no_substrb_pad */max("SEGMENT_CREATED") val,count(*) c
           nt  from "TEST1"."EXT_COLUMNS" t  where "SEGMENT_CREATED" is not null    group by
           nlssort("SEGMENT_CREATED", 'NLS_SORT = binary')) order by nlssort(val,'NLS_SORT
           = binary')

TEST1       g93x9gquu9t13 select min(minbkt),maxbkt,substrb(dump(min(val),16,0,32),1,120) minval,substrb(d          0
           ump(max(val),16,0,32),1,120) maxval,sum(rep) sumrep, sum(repsq) sumrepsq, max(re
           p) maxrep, count(*) bktndv, sum(case when rep=1 then 1 else 0 end) unqrep from (
           select val,min(bkt) minbkt, max(bkt) maxbkt, count(val) rep, count(val)*count(va
           l) repsq from (select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_
           sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pa
           d */"NUM_ROWS" val, ntile(254) over (order by "NUM_ROWS") bkt    from "TEST1"."EXT
           _COLUMNS" t  where "NUM_ROWS" is not null) group by val) group by maxbkt order b
           y maxbkt

TEST1       46qvsy7cg230g select substrb(dump(val,16,0,32),1,120) ep, cnt from (select /*+ no_parallel(t)          0
           no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_
           sampling(0) no_monitoring no_substrb_pad */max("DROPPED") val,count(*) cnt  from
            "TEST1"."EXT_COLUMNS" t  where "DROPPED" is not null    group by nlssort("DROPPED
           ", 'NLS_SORT = binary')) order by nlssort(val,'NLS_SORT = binary')

TEST1       9tu5t6fm8f34k select PARSING_SCHEMA_NAME,sql_id,sql_text,invalidations from v$sql where PARSIN          0
           G_SCHEMA_NAME='TEST1' and upper(sql_text) like '%EXT_COLUMNS%'

TEST1       3n4rprac5usmj select min(minbkt),maxbkt,substrb(dump(min(val),16,0,32),1,120) minval,substrb(d          0
           ump(max(val),16,0,32),1,120) maxval,sum(rep) sumrep, sum(repsq) sumrepsq, max(re
           p) maxrep, count(*) bktndv, sum(case when rep=1 then 1 else 0 end) unqrep from (
           select val,min(bkt) minbkt, max(bkt) maxbkt, count(val) rep, count(val)*count(va
           l) repsq from (select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_
           sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pa
           d */mod("SYS_STUEXUYKWMBP9B#CB$NYP#P8I5",9999999999) val, ntile(254) over (order
            by mod("SYS_STUEXUYKWMBP9B#CB$NYP#P8I5",9999999999)) bkt  from "TEST1"."EXT_COL
           UMNS" t  where mod("SYS_STUEXUYKWMBP9B#CB$NYP#P8I5",9999999999) is not null) gro
           up by val) group by maxbkt order by maxbkt

TEST1       1hgggnzr9v7ks select * from ext_columns where num_rows=0 and DROPPED='NO'                    0
TEST1       29188am21rjc4 select * from ext_columns where num_rows=0                             0

12 rows selected.


SHAIKDB>insert into ext_columns select * from ext_columns ;

2795 rows created.

SHAIKDB>/

5590 rows created.

SHAIKDB>/

11180 rows created.

SHAIKDB>/

22360 rows created.

SHAIKDB>commit;

Commit complete.

TNS-12508: TNS:listener could not resolve the COMMAND given



Getting the below TNS error while changing the trace level for the listener:
LSNRCTL> set trc_level user
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCEBSSBX_SSL)))
TNS-12508: TNS:listener could not resolve the COMMAND given


Fix:
===
Turn off the ADMIN_RESTRICTIONS in listener.ora

cat listener.ora | grep -i restrictions
ADMIN_RESTRICTIONS_SHAIKDB =  ON

change it to:
ADMIN_RESTRICTIONS_SHAIKDB =  OFF

LSNRCTL> stop SHAIKDB
LSNRCTL> start SHAIKDB

or
[oracle@collabn1 ~]$ lsnrctl stop shaikdb

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 14-OCT-2015 10:51:24

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=collabn1.shaiksameer)(PORT=1522)))
The command completed successfully
[oracle@collabn1 ~]$ lsnrctl start shaikdb

Now try to reset the trace level:
=====================

LSNRCTL> set current_listener shaikdb
Current Listener is shaikdb

LSNRCTL> set trc_level off
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=collabn1.shaiksameer)(PORT=1522)))
shaikdb parameter "trc_level" set to off
The command completed successfully

LSNRCTL> save_config
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=collabn1.shaiksameer)(PORT=1522)))
Saved shaikdb configuration parameters.
Listener Parameter File   /u01/app/oracle/product/11.2.0.2/SHAIKPROD/network/admin/listener.ora
Old Parameter File   /u01/app/oracle/product/11.2.0.2/SHAIKPROD/network/admin/listener.bak
The command completed successfully

Administer partitioned tables and indexes using appropriate methods and keys

Basics of Partitioning
Partitioning allows a table, index, or index-organized table to be subdivided into smaller pieces, where each piece of such a database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics

Partitioning Key

Each row in a partitioned table is unambiguously assigned to a single partition. The partitioning key is comprised of one or more columns that determine the partition where each row will be stored. Oracle automatically directs insert, update, and delete operations to the appropriate partition through the use of the partitioning key

Partitioned Tables

Any table can be partitioned into up to a million separate partitions, except tables containing columns with LONG or LONG RAW data types.

Partitioned Index-Organized Tables

Partitioned index-organized tables are very useful for providing improved performance, manageability, and availability for index-organized tables.
For partitioning an index-organized table:
  • Partition columns must be a subset of the primary key columns.
  • Secondary indexes can be partitioned (both locally and globally).
  • OVERFLOW data segments are always equi-partitioned with the table partitions

Benefits of Partitioning


Partition Pruning

Partition pruning allows the optimizer to skip partitions that cannot contain rows matching a query's predicates, so only the relevant partitions are scanned. This often reduces query I/O dramatically.

Partition-Wise Joins

Partition-wise joins can be applied when two tables are being joined together and both tables are partitioned on the join key, or when a reference partitioned table is joined with its parent table. Partition-wise joins break a large join into smaller joins that occur between each of the partitions, completing the overall join in less time

Partitioning for Availability

Partitioned database objects provide partition independence: an operation or failure affecting one partition does not affect the availability of the remaining partitions.

Partitioning for Manageability

Partitioning allows tables and indexes to be partitioned into smaller, more manageable units, providing database administrators with the ability to pursue a "divide and conquer" approach to data management.


Partitioning Strategies

  • Range
  • Hash
  • List


Hash Partitioning

Hash partitioning maps data to partitions based on a hashing algorithm that Oracle applies to the partitioning key that you identify. The hashing algorithm evenly distributes rows among partitions, giving partitions approximately the same size.
Hash partitioning is the ideal method for distributing data evenly across devices. Hash partitioning is also an easy-to-use alternative to range partitioning, especially when the data to be partitioned is not historical or has no obvious partitioning key.
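A minimal sketch of hash partitioning (the table, column, and tablespace names here are hypothetical; for the most even distribution the number of partitions is usually a power of 2):

```sql
-- Sketch: spread a hypothetical orders table evenly across 4 tablespaces
-- by hashing the customer_id partitioning key.
CREATE TABLE orders_hash
  ( order_id    NUMBER
  , customer_id NUMBER
  , amount      NUMBER(10,2)
  )
PARTITION BY HASH (customer_id)
PARTITIONS 4
STORE IN (tbs1, tbs2, tbs3, tbs4);
```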

List Partitioning

List partitioning enables you to explicitly control how rows map to partitions by specifying a list of discrete values for the partitioning key in the description for each partition. The advantage of list partitioning is that you can group and organize unordered and unrelated sets of data in a natural way. For a table with a region column as the partitioning key, the North America partition might contain values Canada, USA, and Mexico.
The DEFAULT partition enables you to avoid specifying all possible values for a list-partitioned table by using a default partition, so that all rows that do not map to any other partition do not generate an error.
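A sketch of the region example above, including a DEFAULT partition (table and partition names are hypothetical):

```sql
-- Sketch: rows map to partitions by discrete region values;
-- unmapped values fall into the DEFAULT partition instead of erroring.
CREATE TABLE sales_list
  ( sale_id NUMBER
  , region  VARCHAR2(20)
  , amount  NUMBER(10,2)
  )
PARTITION BY LIST (region)
  ( PARTITION p_na     VALUES ('Canada', 'USA', 'Mexico')
  , PARTITION p_europe VALUES ('UK', 'Germany', 'France')
  , PARTITION p_other  VALUES (DEFAULT)
  );
```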

Composite Partitioning

Composite partitioning is a combination of the basic data distribution methods; a table is partitioned by one data distribution method and then each partition is further subdivided into subpartitions using a second data distribution method. All subpartitions for a given partition together represent a logical subset of the data.

Composite Range-Range Partitioning

Composite range-range partitioning enables logical range partitioning along two dimensions; for example, partition by order_date and range subpartition by shipping_date.

Composite Range-Hash Partitioning

Composite range-hash partitioning partitions data using the range method, and within each partition, subpartitions it using the hash method. Composite range-hash partitioning provides the improved manageability of range partitioning and the data placement, striping, and parallelism advantages of hash partitioning.
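A hedged sketch of range-hash composite partitioning (columns mirror the sh.sales-style table used later in this post; partition names and the subpartition count are illustrative):

```sql
-- Sketch: range partition by time_id, then hash subpartition each
-- range partition by cust_id to enable partition-wise joins.
CREATE TABLE sales_range_hash
  ( prod_id     NUMBER(6)
  , cust_id     NUMBER
  , time_id     DATE
  , amount_sold NUMBER(10,2)
  )
PARTITION BY RANGE (time_id)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4
  ( PARTITION q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','DD-MON-YYYY'))
  , PARTITION q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','DD-MON-YYYY'))
  );
```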

Composite Range-List Partitioning

Composite range-list partitioning partitions data using the range method, and within each partition, subpartitions it using the list method. Composite range-list partitioning provides the manageability of range partitioning and the explicit control of list partitioning for the subpartitions.

Composite List-Range Partitioning

Composite list-range partitioning enables logical range subpartitioning within a given list partitioning strategy; for example, list partition by country_id and range subpartition by order_date.

Composite List-Hash Partitioning

Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object; for example, to enable partition-wise joins.

Composite List-List Partitioning

Composite list-list partitioning enables logical list partitioning along two dimensions; for example, list partition by country_id and list subpartition by sales_channel

Partitioning Extensions

In addition to the basic partitioning strategies, Oracle Database provides partitioning extensions:
  • Manageability Extensions

  • Partitioning Key Extensions

Manageability Extensions

These extensions significantly enhance the manageability of partitioned tables:

  • Interval Partitioning
  • Partition Advisor

Interval Partitioning

Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. The lower boundary of every interval partition is the non-inclusive upper boundary of the previous range or interval partition.

When using interval partitioning, consider the following restrictions:
  • You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
  • Interval partitioning is not supported for index-organized tables.
  • You cannot create a domain index on an interval-partitioned table.
You can create single-level interval partitioned tables as well as the following composite partitioned tables:
  • Interval-range
  • Interval-hash
  • Interval-list

Partitioning Key Extensions

These extensions extend the flexibility in defining partitioning keys:
  • Reference Partitioning
  • Virtual Column-Based Partitioning

Reference Partitioning

Reference partitioning allows the partitioning of two tables related to one another by referential constraints. The partitioning key is resolved through an existing parent-child relationship, enforced by enabled and active primary key and foreign key constraints
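A sketch of reference partitioning (table and constraint names are hypothetical; the foreign key column must be NOT NULL and the constraint enabled):

```sql
-- Sketch: the child table inherits the parent's range partitioning
-- through the enabled foreign key named in PARTITION BY REFERENCE.
CREATE TABLE orders_p
  ( order_id   NUMBER PRIMARY KEY
  , order_date DATE
  )
PARTITION BY RANGE (order_date)
  ( PARTITION p2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY'))
  , PARTITION p2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
  );

CREATE TABLE order_items_p
  ( item_id  NUMBER
  , order_id NUMBER NOT NULL
  , CONSTRAINT fk_items_order FOREIGN KEY (order_id)
      REFERENCES orders_p (order_id)
  )
PARTITION BY REFERENCE (fk_items_order);
```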

Virtual Column-Based Partitioning

In Oracle Database 11g, virtual columns allow the partitioning key to be defined by an expression, using one or more existing columns of a table. The expression is stored as metadata only.

Virtual column-based partitioning is supported with all basic partitioning strategies, including reference Partitioning, as well as interval and interval-* composite partitioning.
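A sketch of virtual column-based partitioning (table name and the derived-year expression are hypothetical):

```sql
-- Sketch: sale_year is a virtual column (stored as metadata only)
-- derived from sale_date, and serves as the partitioning key.
CREATE TABLE sales_vc
  ( sale_id   NUMBER
  , sale_date DATE
  , sale_year NUMBER GENERATED ALWAYS AS (EXTRACT(YEAR FROM sale_date)) VIRTUAL
  )
PARTITION BY LIST (sale_year)
  ( PARTITION p2014 VALUES (2014)
  , PARTITION p2015 VALUES (2015)
  );
```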

Partition Advisor

The Partition Advisor is part of the SQL Access Advisor. The Partition Advisor can recommend a partitioning strategy for a table based on a supplied workload of SQL statements which can be supplied by the SQL Cache, a SQL Tuning set, or be defined by the user.

Partitioned Indexes

Just like partitioned tables, partitioned indexes improve manageability, availability, performance, and scalability. They can either be partitioned independently (global indexes) or automatically linked to a table's partitioning method (local indexes). In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage

  1. If the table partitioning column is a subset of the index keys, use a local index.
  2. If the index is unique and does not include the partitioning key columns, then use a global index.
  3. If your priority is manageability, use a local index.
  4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index


Local Partitioned Indexes

Each partition of a local index is associated with exactly one partition of the table. This enables Oracle to automatically keep the index partitions in sync with the table partitions, and makes each table-index pair independent. Any actions that make one partition's data invalid or unavailable only affect a single partition.

You cannot explicitly add a partition to a local index. Instead, new partitions are added to local indexes only when you add a partition to the underlying table. Likewise, you cannot explicitly drop a partition from a local index. Instead, local index partitions are dropped only when you drop a partition from the underlying table
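For example, on the range_part table created later in this post (the q4 partition name and bound here are hypothetical), adding a table partition implicitly adds the matching local index partition; there is no ALTER INDEX ... ADD PARTITION for local indexes:

```sql
-- Sketch: the new table partition automatically gets a corresponding
-- partition in every local index on range_part.
ALTER TABLE range_part ADD PARTITION q4
  VALUES LESS THAN (TO_DATE('01-MAR-2007','DD-MON-YYYY'));
```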

Global Partitioned Indexes

Oracle offers two types of global partitioned indexes: range partitioned and hash partitioned.

Global Range Partitioned Indexes

Global range partitioned indexes are flexible in that the degree of partitioning and the partitioning key are independent from the table's partitioning method.
The highest partition of a global index must have a partition bound, all of whose values are MAXVALUE. This ensures that all rows in the underlying table can be represented in the index. Global prefixed indexes can be unique or nonunique.
You cannot add a partition to a global index because the highest partition always has a partition bound of MAXVALUE. If you want to add a new highest partition, use the ALTER INDEX SPLIT PARTITION statement. If a global index partition is empty, you can explicitly drop it by issuing the ALTER INDEX DROP PARTITION statement. If a global index partition contains data, dropping the partition causes the next highest partition to be marked unusable. You cannot drop the highest partition in a global index.
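As a sketch against the range_part_idx1 global index created later in this post (split value and new partition names are hypothetical), the SPLIT and DROP operations look like this:

```sql
-- Sketch: add a new highest boundary by splitting the MAXVALUE
-- partition, then drop a partition (allowed explicitly only if empty).
ALTER INDEX range_part_idx1
  SPLIT PARTITION p3 AT (5000)
  INTO (PARTITION p3a, PARTITION p3b);

ALTER INDEX range_part_idx1 DROP PARTITION p1;
```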

Global Hash Partitioned Indexes

Global hash partitioned indexes improve performance by spreading out contention when the index is monotonically growing. In other words, most of the index insertions occur only on the right edge of an index.

Maintenance of Global Partitioned Indexes

By default, the following operations on partitions on a heap-organized table mark all global indexes as unusable:
ADD (HASH)
COALESCE (HASH)
DROP
EXCHANGE
MERGE
MOVE
SPLIT
TRUNCATE

When to Use Range or Interval Partitioning

Range partitioning is a convenient method for partitioning historical data. The boundaries of range partitions define the ordering of the partitions in the tables or indexes.
Interval partitioning is an extension to range partitioning in which, beyond a point in time, partitions are defined by an interval. Interval partitions are automatically created by the database when data is inserted into the partition

When to Use Hash Partitioning

There are times when it is not obvious in which partition data should reside, although the partitioning key can be identified. Rather than grouping similar data together, it is sometimes desirable to distribute data such that it does not correspond to a business or logical view of the data, as it does in range partitioning. With hash partitioning, a row is placed into a partition based on the result of passing the partitioning key into a hashing algorithm.

When to Use List Partitioning

You should use list partitioning when you want to specifically map rows to partitions based on discrete values

When to Use Composite Partitioning

Composite partitioning offers the benefits of partitioning on two dimensions. From a performance perspective you can take advantage of partition pruning on one or two dimensions depending on the SQL statement, and you can take advantage of the use of full or partial partition-wise joins on either dimension

When to Use Composite Range-Hash Partitioning

Composite range-hash partitioning is particularly common for tables that store history, are very large as a result, and are frequently joined with other large tables. For these types of tables (typical of data warehouse systems), composite range-hash partitioning provides the benefit of partition pruning at the range level with the opportunity to perform parallel full or partial partition-wise joins at the hash level. Specific cases can benefit from partition pruning on both dimensions for specific SQL statements

When to Use Composite Range-Range Partitioning

Composite range-range partitioning is useful for applications that store time-dependent data on more than one time dimension. Often these applications do not use one particular time dimension to access the data, but rather another time dimension, or sometimes both at the same time.

When to Use Composite List-Hash Partitioning

Composite list-hash partitioning is useful for large tables that are usually accessed on one dimension, but (due to their size) still need to take advantage of parallel full or partial partition-wise joins on another dimension in joins with other large tables

When to Use Composite List-List Partitioning

Composite list-list partitioning is useful for large tables that are often accessed on different dimensions. You can specifically map rows to partitions on those dimensions based on discrete values

Partition Administration


SHAIKDB>col partition_name for a30
SHAIKDB>col high_value  for a30
SHAIKDB>set linesize 400
SHAIKDB>set lines 100

SHAIKDB>create table range_part (prod_id number(6),cust_id number,time_id date,channel_id char(1),promo_id number(6),quantity_sold number(3),
   amount_sold number(10,2)) partition by range(time_id)
    (partition Q1 values less than (to_date('01-APR-2006','DD-MON-YYYY')) TABLESPACE TBS1,
   PARTITION Q2 VALUES LESS THAN (TO_DATE('01-JUL-2006','DD-MON-YYYY')) TABLESPACE TBS2,
   PARTITION Q3 VALUES LESS THAN (TO_DATE('01-DEC-2006','DD-MON-YYYY')) TABLESPACE TBS3);

Table created.


SHAIKDB>insert into range_part select * from sh.sales;

918843 rows created.



DBA_PART_TABLES, ALL_PART_TABLES, USER_PART_TABLES:
DBA view displays partitioning information for all partitioned tables in the database. ALL view displays partitioning information for all partitioned tables accessible to the user. USER view is restricted to partitioning information for partitioned tables owned by the user.

DBA_TAB_PARTITIONS, ALL_TAB_PARTITIONS, USER_TAB_PARTITIONS:
Display partition-level partitioning information, partition storage parameters, and partition statistics generated by the DBMS_STATS package or the ANALYZE statement.

DBA_TAB_SUBPARTITIONS, ALL_TAB_SUBPARTITIONS, USER_TAB_SUBPARTITIONS:
Display subpartition-level partitioning information, subpartition storage parameters, and subpartition statistics generated by the DBMS_STATS package or the ANALYZE statement.

DBA_PART_KEY_COLUMNS, ALL_PART_KEY_COLUMNS, USER_PART_KEY_COLUMNS:
Display the partitioning key columns for partitioned tables.

DBA_SUBPART_KEY_COLUMNS, ALL_SUBPART_KEY_COLUMNS, USER_SUBPART_KEY_COLUMNS:
Display the subpartitioning key columns for composite-partitioned tables (and local indexes on composite-partitioned tables).

DBA_PART_COL_STATISTICS, ALL_PART_COL_STATISTICS, USER_PART_COL_STATISTICS:
Display column statistics and histogram information for the partitions of tables.

DBA_SUBPART_COL_STATISTICS, ALL_SUBPART_COL_STATISTICS, USER_SUBPART_COL_STATISTICS:
Display column statistics and histogram information for subpartitions of tables.

DBA_PART_HISTOGRAMS, ALL_PART_HISTOGRAMS, USER_PART_HISTOGRAMS:
Display the histogram data (end-points for each histogram) for histograms on table partitions.

DBA_SUBPART_HISTOGRAMS, ALL_SUBPART_HISTOGRAMS, USER_SUBPART_HISTOGRAMS:
Display the histogram data (end-points for each histogram) for histograms on table subpartitions.

DBA_PART_INDEXES, ALL_PART_INDEXES, USER_PART_INDEXES:
Display partitioning information for partitioned indexes.

DBA_IND_PARTITIONS, ALL_IND_PARTITIONS, USER_IND_PARTITIONS:
Display the following for index partitions: partition-level partitioning information, storage parameters for the partition, and statistics collected by the DBMS_STATS package or the ANALYZE statement.

DBA_IND_SUBPARTITIONS, ALL_IND_SUBPARTITIONS, USER_IND_SUBPARTITIONS:
Display the following information for index subpartitions: partition-level partitioning information, storage parameters for the partition, and statistics collected by the DBMS_STATS package or the ANALYZE statement.

DBA_SUBPARTITION_TEMPLATES, ALL_SUBPARTITION_TEMPLATES, USER_SUBPARTITION_TEMPLATES:
Display information about existing subpartition templates.


CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
STORAGE (INITIAL 100K NEXT 50K) LOGGING
PARTITION BY RANGE (time_id)
( PARTITION sales_q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','dd-MON-yyyy'))
   TABLESPACE tsa STORAGE (INITIAL 20K NEXT 10K)
, PARTITION sales_q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','dd-MON-yyyy'))
   TABLESPACE tsb
, PARTITION sales_q3_2006 VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
   TABLESPACE tsc
, PARTITION sales_q4_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
   TABLESPACE tsd
)
ENABLE ROW MOVEMENT;
Create a global partitioned index:


SHAIKDB>create index range_part_idx1 on range_part (amount_sold) tablespace idx1 global partition by range(amount_sold)
 (partition p1 values less than (100),  partition p2 values less than  (1000),partition p3 values less than (maxvalue));

Index created.


TABLE_NAME INDEX_NAME             PARTITION PARTITION_COUNT LOCALI COM PARTITION_NAME             STATUS
---------- ------------------------------ --------- --------------- ------ --- ------------------------------ --------
RANGE_PART RANGE_PART_IDX1         RANGE          3 GLOBAL NO  P1                 USABLE
RANGE_PART RANGE_PART_IDX1         RANGE          3 GLOBAL NO  P2                 USABLE
RANGE_PART RANGE_PART_IDX1         RANGE          3 GLOBAL NO  P3                 USABLE



Create a local index:

SHAIKDB>col table_name for a10
SHAIKDB>set linesize 400

SHAIKDB>create index range_part_idx1 on range_part(amount_sold) tablespace idx1 local;

Index created.

SHAIKDB>select table_name,index_name,index_type from dba_indexes where table_name='RANGE_PART';

TABLE_NAME              INDEX_NAME             INDEX_TYPE
------------------------------ ------------------------------ ---------------------------
RANGE_PART              RANGE_PART_IDX1             NORMAL


SHAIKDB>select table_name,a.index_name,partitioning_type,partition_count,locality,a.composite,partition_name,status from dba_ind_partitions a,dba_part_indexes b where a.index_name=b.index_name and a.index_name='RANGE_PART_IDX1';

TABLE_NAME INDEX_NAME  PARTITION PARTITION_COUNT LOCALI COM PARTITION_NAME  STATUS
---------- ------------------------------ --------- --------------- ------ --- ------------------------------ --------
RANGE_PART RANGE_PART_IDX1         RANGE          3 LOCAL  NO  Q1                 USABLE
RANGE_PART RANGE_PART_IDX1         RANGE          3 LOCAL  NO  Q2                 USABLE
RANGE_PART RANGE_PART_IDX1         RANGE          3 LOCAL  NO  Q3                 USABLE


Creating Interval-Partitioned Tables


You must specify at least one range partition using the PARTITION clause. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database automatically creates interval partitions for data beyond that transition point. The lower boundary of every interval partition is the non-inclusive upper boundary of the previous range or interval partition.

For interval partitioning, the partitioning key can only be a single column name from the table, and it must be of NUMBER or DATE type.



SHAIKDB>create table interval_part
   (prod_id number(6),cust_id number,time_id date,channel_id char(1),promo_id number(6),quantity_sold number(3),
   amount_sold number(10,2)) partition by range(time_id) interval(numtoyminterval(1,'MONTH'))
   (partition p0 values less than (to_date('1-1-2005','dd-mm-yyyy')),
   partition p1 values less than (to_date('1-1-2006','dd-mm-yyyy')),
   partition p2 values less than (to_date('1-1-2007','dd-mm-yyyy')));

Table created.

The high bound of partition p2 represents the transition point. p2 and all partitions below it (p0 and p1 in this example) are in the range section, while all partitions above it fall into the interval section.
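The mechanics can be sketched in Python (a conceptual illustration only; Oracle computes the boundary internally): the high bound of a system-created interval partition is the smallest interval multiple above the inserted key, stepping from the transition point.

```python
from datetime import date

def add_months(d, n):
    """Advance a first-of-month date by n months."""
    total = d.year * 12 + (d.month - 1) + n
    return date(total // 12, total % 12 + 1, 1)

def interval_high_bound(transition, insert_date, months=1):
    """Smallest monthly boundary above insert_date, stepping from the
    transition point -- a sketch of how the dynamic partition bound falls out."""
    bound = transition
    while bound <= insert_date:
        bound = add_months(bound, months)
    return bound

# A row dated 2015-09-15 inserted past the 2007-01-01 transition point lands
# in a system partition whose high value is 2015-10-01 (cf. SYS_P21 below):
print(interval_high_bound(date(2007, 1, 1), date(2015, 9, 15)))  # 2015-10-01
```

The same function reproduces the 2014-02-01 bound that the later insert of 2014-01-31 produces.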


SHAIKDB>insert into interval_part values (123,1,sysdate,'A',456,1,10);

1 row created.

SHAIKDB>commit;

Commit complete.

SHAIKDB>col partition_name for a5
SHAIKDB>select partition_name,composite,high_value,partition_position,num_rows,interval from dba_tab_partitions where table_name='INTERVAL_PART';

PARTI COM HIGH_VALUE       PARTITION_POSITION    NUM_ROWS INT
----- --- -------------------------------------------------------------------------------- ------------------ ---------- ---
P0    NO  TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           1        NO
P1    NO  TO_DATE(' 2006-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           2        NO
P2    NO  TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           3        NO
SYS_P NO  TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           4        YES
21


SHAIKDB>insert into interval_part(time_id) values (to_date('2014-01-31','yyyy-mm-dd'));

1 row created.

SHAIKDB>commit;

Commit complete.


SHAIKDB>select partition_name,composite,high_value,partition_position,num_rows,interval from dba_tab_partitions where table_name='INTERVAL_PART';

PARTI COM HIGH_VALUE           PARTITION_POSITION    NUM_ROWS INT
----- --- -------------------------------------------------------------------------------- ------------------ ---------- ---
P0    NO  TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           1        NO
P1    NO  TO_DATE(' 2006-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           2        NO
P2    NO  TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           3        NO
SYS_P NO  TO_DATE(' 2014-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           4        YES
22

SYS_P NO  TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA           5        YES
21




Creating Hash-Partitioned Tables and Global Indexes

The PARTITION BY HASH clause of the CREATE TABLE statement identifies that the table is to be hash-partitioned. The PARTITIONS clause can then be used to specify the number of partitions to create, and optionally, the tablespaces to store them in. Alternatively, you can use PARTITION clauses to name the individual partitions and their tablespaces.
The only attribute you can specify for hash partitions is TABLESPACE. All of the hash partitions of a table must share the same segment attributes (except TABLESPACE), which are inherited from the table level
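Conceptually, each row's partitioning key is run through a hash function and the result selects one of the N partitions, which keeps rows evenly spread. The sketch below illustrates the idea with a simple modulo bucket (an assumption for illustration; Oracle's internal hash function is not exposed):

```python
def hash_partition(key, num_partitions):
    """Stand-in for Oracle's internal hash: map a partitioning-key value
    to one of num_partitions buckets."""
    return key % num_partitions

# The 1000 ids inserted into hash_part1 below spread almost evenly
# across its 6 hash partitions:
counts = [0] * 6
for i in range(1, 1001):
    counts[hash_partition(i, 6)] += 1
print(counts)  # [166, 167, 167, 167, 167, 166]
```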

SHAIKDB>create table hash_part1 (id number,name varchar2(10)) partition by hash (id) partitions 6
   store in (tbs1,tbs2,tbs3);

Table created.

SHAIKDB>begin
 2  for i in 1..1000 loop
 3  insert into hash_part1 values (i,'AA');
 4  commit;
 5  end loop;
 6  end;
 7  /

PL/SQL procedure successfully completed.

SHAIKDB>select count(*) from hash_part1;

 COUNT(*)
----------
     1000

SHAIKDB>col high_value for a10
select partition_name,composite,high_value,partitioning_type,status,def_tablespace_name,partition_count
   from dba_tab_partitions a , dba_part_tables b where a.table_name=b.table_name and a.table_name='HASH_PART1';



PARTITION_NAME COM HIGH_VALUE PARTITION STATUS DEF_TABLESPACE_NAME PARTITION_COUNT
-------------- --- ---------- --------- ------ ------------------- ---------------
SYS_P23        NO             HASH      VALID  SYSTEM                            6
SYS_P24        NO             HASH      VALID  SYSTEM                            6
SYS_P25        NO             HASH      VALID  SYSTEM                            6
SYS_P26        NO             HASH      VALID  SYSTEM                            6
SYS_P27        NO             HASH      VALID  SYSTEM                            6
SYS_P28        NO             HASH      VALID  SYSTEM                            6


SHAIKDB>create table dept_hash (id number,name varchar2(10)) storage (initial 10k) tablespace tbs1
   partition by hash (id)
   partitions 2;

Table created.

The following statement creates a hash-partitioned global index:
CREATE INDEX hgidx ON tab (c1,c2,c3) GLOBAL
    PARTITION BY HASH (c1,c2)
    (PARTITION p1  TABLESPACE tbs_1,
     PARTITION p2  TABLESPACE tbs_2,
     PARTITION p3  TABLESPACE tbs_3,
     PARTITION p4  TABLESPACE tbs_4);


The following example creates a list-partitioned table:
CREATE TABLE q1_sales_by_region
     (deptno number,
      deptname varchar2(20),
      quarterly_sales number(10, 2),
      state varchar2(2))
  PARTITION BY LIST (state)
     (PARTITION q1_northwest VALUES ('OR', 'WA'),
      PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM'),
      PARTITION q1_northeast VALUES  ('NY', 'VM', 'NJ'),
      PARTITION q1_southeast VALUES ('FL', 'GA'),
      PARTITION q1_northcentral VALUES ('SD', 'WI'),
      PARTITION q1_southcentral VALUES ('OK', 'TX'));

The following list-partitioned table adds per-partition storage attributes, a NULL partition, and a DEFAULT partition:
CREATE TABLE sales_by_region (item# INTEGER, qty INTEGER,
            store_name VARCHAR(30), state_code VARCHAR(2),
            sale_date DATE)
    STORAGE(INITIAL 10K NEXT 20K) TABLESPACE tbs5
    PARTITION BY LIST (state_code)
    (
    PARTITION region_east
       VALUES ('MA','NY','CT','NH','ME','MD','VA','PA','NJ')
       STORAGE (INITIAL 8M)
       TABLESPACE tbs8,
    PARTITION region_west
       VALUES ('CA','AZ','NM','OR','WA','UT','NV','CO')
       NOLOGGING,
    PARTITION region_south
       VALUES ('TX','KY','TN','LA','MS','AR','AL','GA'),
    PARTITION region_central
       VALUES ('OH','ND','SD','MO','IL','MI','IA'),
    PARTITION region_null
       VALUES (NULL),
    PARTITION region_unknown
       VALUES (DEFAULT)
    );

Creating Reference-Partitioned Tables

To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in the CREATE TABLE statement. This clause specifies the name of a referential constraint and this constraint becomes the partitioning referential constraint that is used as the basis for reference partitioning in the table. The referential constraint must be enabled and enforced.
As with other partitioned tables, you can specify object-level default attributes, and you can optionally specify partition descriptors that override the object-level defaults on a per-partition basis.
The following example creates a parent table orders which is range-partitioned on order_date. The reference-partitioned child table order_items is created with four partitions, Q1_2005, Q2_2005, Q3_2005, and Q4_2005, where each partition contains the order_items rows corresponding to orders in the respective parent partition.
CREATE TABLE orders
   ( order_id           NUMBER(12),
     order_date         TIMESTAMP WITH LOCAL TIME ZONE,
     order_mode         VARCHAR2(8),
     customer_id        NUMBER(6),
     order_status       NUMBER(2),
     order_total        NUMBER(8,2),
     sales_rep_id       NUMBER(6),
     promotion_id       NUMBER(6),
     CONSTRAINT orders_pk PRIMARY KEY(order_id)
   )
 PARTITION BY RANGE(order_date)
   ( PARTITION Q1_2005 VALUES LESS THAN (TO_DATE('01-APR-2005','DD-MON-YYYY')),
     PARTITION Q2_2005 VALUES LESS THAN (TO_DATE('01-JUL-2005','DD-MON-YYYY')),
     PARTITION Q3_2005 VALUES LESS THAN (TO_DATE('01-OCT-2005','DD-MON-YYYY')),
     PARTITION Q4_2005 VALUES LESS THAN (TO_DATE('01-JAN-2006','DD-MON-YYYY'))
   );

CREATE TABLE order_items
   ( order_id           NUMBER(12) NOT NULL,
     line_item_id       NUMBER(3)  NOT NULL,
     product_id         NUMBER(6)  NOT NULL,
     unit_price         NUMBER(8,2),
     quantity           NUMBER(8),
     CONSTRAINT order_items_fk
     FOREIGN KEY(order_id) REFERENCES orders(order_id)
   )
   PARTITION BY REFERENCE(order_items_fk);

If partition descriptors are provided, then the number of partitions described must exactly equal the number of partitions or subpartitions in the referenced table. If the parent table is a composite partitioned table, then the table will have one partition for each subpartition of its parent; otherwise the table will have one partition for each partition of its parent.
Partition bounds cannot be specified for the partitions of a reference-partitioned table.
The partitions of a reference-partitioned table can be named. If a partition is not explicitly named, then it will inherit its name from the corresponding partition in the parent table, unless this inherited name conflicts with one of the explicit names given. In this case, the partition will have a system-generated name.
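The naming rule can be sketched as follows (a conceptual illustration; `SYS_Pn` stands in for Oracle's actual system-generated names): a child partition inherits its parent's name unless that name collides with one given explicitly, in which case a system-generated name is used.

```python
def child_partition_names(parent_names, explicit=None):
    """Resolve reference-partitioned child names: inherit from the parent
    unless explicitly named; an inherited name that collides with an
    explicit one gets a system-generated stand-in (SYS_Pn)."""
    explicit = explicit or {}          # {partition index: explicit name}
    taken = set(explicit.values())
    names, counter = [], 1
    for i, pname in enumerate(parent_names):
        if i in explicit:
            names.append(explicit[i])
        elif pname in taken:
            names.append(f"SYS_P{counter}")
            counter += 1
        else:
            names.append(pname)
    return names

# order_items simply inherits Q1_2005..Q4_2005 from orders when nothing is named:
print(child_partition_names(['Q1_2005', 'Q2_2005', 'Q3_2005', 'Q4_2005']))
```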


Creating Composite Partitioned Tables

To create a composite partitioned table, you start by using the PARTITION BY [ RANGE | LIST ] clause of a CREATE TABLE statement. Next, you specify a SUBPARTITION BY [ RANGE | LIST | HASH ] clause that follows similar syntax and rules as the PARTITION BY [ RANGE | LIST | HASH ] clause. The individual PARTITION and SUBPARTITION or SUBPARTITIONS clauses, and optionally a SUBPARTITION TEMPLATE clause, follow.

Creating Composite Range-Hash Partitioned Tables

The following statement creates a range-hash partitioned table. In this example, four range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the STORE IN clause distributes them across the 4 specified tablespaces (ts1, ...,ts4).
CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
PARTITION BY RANGE (time_id) SUBPARTITION BY HASH (cust_id)
 SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
( PARTITION sales_q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','dd-MON-yyyy'))
, PARTITION sales_q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','dd-MON-yyyy'))
, PARTITION sales_q3_2006 VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
, PARTITION sales_q4_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
);


CREATE TABLE emp (deptno NUMBER, empname VARCHAR(32), grade NUMBER)   
    PARTITION BY RANGE(deptno) SUBPARTITION BY HASH(empname)
       SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7)
   (PARTITION p1 VALUES LESS THAN (1000),
    PARTITION p2 VALUES LESS THAN (2000)
       STORE IN (ts2, ts4, ts6, ts8),
    PARTITION p3 VALUES LESS THAN (MAXVALUE)
      (SUBPARTITION p3_s1 TABLESPACE ts4,
       SUBPARTITION p3_s2 TABLESPACE ts5));



CREATE INDEX emp_ix ON emp(deptno)
    LOCAL STORE IN (ts7, ts8, ts9);
This local index is equipartitioned with the base table as follows:
  • It consists of as many partitions as the base table.
  • Each index partition consists of as many subpartitions as the corresponding base table partition.
  • Index entries for rows in a given subpartition of the base table are stored in the corresponding subpartition of the index.



The following example illustrates how range-list partitioning might be used. The example tracks sales data of products by quarters and, within each quarter, groups it by specified states.
CREATE TABLE quarterly_regional_sales
     (deptno number, item_no varchar2(20),
      txn_date date, txn_amount number, state varchar2(2))
 TABLESPACE ts4
 PARTITION BY RANGE (txn_date)
   SUBPARTITION BY LIST (state)
     (PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
        (SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q1_1999_southcentral VALUES ('OK', 'TX')
        ),
      PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
        (SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
        ),
      PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
        (SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q3_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q3_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q3_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q3_1999_southcentral VALUES ('OK', 'TX')
        ),
      PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
        (SUBPARTITION q4_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q4_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q4_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q4_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q4_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q4_1999_southcentral VALUES ('OK', 'TX')
        )
     );
The following example creates a table that specifies a tablespace at the partition and subpartition levels. The number of subpartitions within each partition varies, and default subpartitions are specified.
CREATE TABLE sample_regional_sales
     (deptno number, item_no varchar2(20),
      txn_date date, txn_amount number, state varchar2(2))
 PARTITION BY RANGE (txn_date)
   SUBPARTITION BY LIST (state)
     (PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
         TABLESPACE tbs_1
        (SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q1_others VALUES (DEFAULT) TABLESPACE tbs_4
        ),
      PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
         TABLESPACE tbs_2
        (SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
        ),
      PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
         TABLESPACE tbs_3
        (SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q3_others VALUES (DEFAULT) TABLESPACE tbs_4
        ),
      PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
         TABLESPACE tbs_4
     );


Creating Composite Range-Range Partitioned Tables

The range partitions of a range-range composite partitioned table are described as for non-composite range partitioned tables. This allows that optional subclauses of a PARTITION clause can specify physical and other attributes, including tablespace, specific to a partition segment. If not overridden at the partition level, partitions inherit the attributes of their underlying table.
The range subpartition descriptions, in the SUBPARTITION clauses, are described as for non-composite range partitions, except the only physical attribute that can be specified is an optional tablespace. Subpartitions inherit all other physical attributes from the partition description.
The following example illustrates how range-range partitioning might be used. The example tracks shipments. The service level agreement with the customer states that every order will be delivered in the calendar month after the order was placed. The following types of orders are identified:
  • E (EARLY): orders that are delivered before the middle of the next month after the order was placed. These orders likely exceed customers' expectations.
  • A (AGREED): orders that are delivered in the calendar month after the order was placed (but not early orders).
  • L (LATE): orders that were only delivered starting the second calendar month after the order was placed.
CREATE TABLE shipments
( order_id      NUMBER NOT NULL
, order_date    DATE NOT NULL
, delivery_date DATE NOT NULL
, customer_id   NUMBER NOT NULL
, sales_amount  NUMBER NOT NULL
)
PARTITION BY RANGE (order_date)
SUBPARTITION BY RANGE (delivery_date)
( PARTITION p_2006_jul VALUES LESS THAN (TO_DATE('01-AUG-2006','dd-MON-yyyy'))
 ( SUBPARTITION p06_jul_e VALUES LESS THAN (TO_DATE('15-AUG-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_jul_a VALUES LESS THAN (TO_DATE('01-SEP-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_jul_l VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_2006_aug VALUES LESS THAN (TO_DATE('01-SEP-2006','dd-MON-yyyy'))
 ( SUBPARTITION p06_aug_e VALUES LESS THAN (TO_DATE('15-SEP-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_aug_a VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_aug_l VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_2006_sep VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
 ( SUBPARTITION p06_sep_e VALUES LESS THAN (TO_DATE('15-OCT-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_sep_a VALUES LESS THAN (TO_DATE('01-NOV-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_sep_l VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_2006_oct VALUES LESS THAN (TO_DATE('01-NOV-2006','dd-MON-yyyy'))
 ( SUBPARTITION p06_oct_e VALUES LESS THAN (TO_DATE('15-NOV-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_oct_a VALUES LESS THAN (TO_DATE('01-DEC-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_oct_l VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_2006_nov VALUES LESS THAN (TO_DATE('01-DEC-2006','dd-MON-yyyy'))
 ( SUBPARTITION p06_nov_e VALUES LESS THAN (TO_DATE('15-DEC-2006','dd-MON-yyyy'))
 , SUBPARTITION p06_nov_a VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
 , SUBPARTITION p06_nov_l VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_2006_dec VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
 ( SUBPARTITION p06_dec_e VALUES LESS THAN (TO_DATE('15-JAN-2007','dd-MON-yyyy'))
 , SUBPARTITION p06_dec_a VALUES LESS THAN (TO_DATE('01-FEB-2007','dd-MON-yyyy'))
 , SUBPARTITION p06_dec_l VALUES LESS THAN (MAXVALUE)
 )
);
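The E/A/L rule that drives the subpartition boundaries above can be sketched in Python (an illustration; the boundary choices mirror the table, where "middle of the next month" means before the 15th):

```python
from datetime import date

def classify_shipment(order_date, delivery_date):
    """Classify a shipment as E(arly), A(greed), or L(ate) per the
    service-level rules described above."""
    m = order_date.year * 12 + order_date.month        # order month index
    d = delivery_date.year * 12 + delivery_date.month  # delivery month index
    if d - m >= 2:
        return 'L'                     # second calendar month after order, or later
    if d - m == 1:                     # calendar month after the order
        return 'E' if delivery_date.day < 15 else 'A'
    return 'E'                         # delivered within the order month itself

# A July order delivered on Aug 10 is early, Aug 20 is agreed, Sep 5 is late:
print(classify_shipment(date(2006, 7, 10), date(2006, 8, 10)))  # E
print(classify_shipment(date(2006, 7, 10), date(2006, 8, 20)))  # A
print(classify_shipment(date(2006, 7, 10), date(2006, 9, 5)))   # L
```

Each classification maps to one of the `_e`, `_a`, `_l` subpartitions of the order's month partition.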


Creating Composite List-Hash Partitioned Tables
The following example shows an accounts table that is list partitioned by region and subpartitioned using hash by customer identifier.
CREATE TABLE accounts
( id             NUMBER
, account_number NUMBER
, customer_id    NUMBER
, balance        NUMBER
, branch_id      NUMBER
, region         VARCHAR(2)
, status         VARCHAR2(1)
)
PARTITION BY LIST (region)
SUBPARTITION BY HASH (customer_id) SUBPARTITIONS 8
( PARTITION p_northwest VALUES ('OR', 'WA')
, PARTITION p_southwest VALUES ('AZ', 'UT', 'NM')
, PARTITION p_northeast VALUES ('NY', 'VM', 'NJ')
, PARTITION p_southeast VALUES ('FL', 'GA')
, PARTITION p_northcentral VALUES ('SD', 'WI')
, PARTITION p_southcentral VALUES ('OK', 'TX')
);
To learn how using a subpartition template can simplify the specification of a composite partitioned table, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
Creating Composite List-List Partitioned Tables
The following example shows an accounts table that is list partitioned by region and subpartitioned using list by account status.
CREATE TABLE accounts
( id             NUMBER
, account_number NUMBER
, customer_id    NUMBER
, balance        NUMBER
, branch_id      NUMBER
, region         VARCHAR(2)
, status         VARCHAR2(1)
)
PARTITION BY LIST (region)
SUBPARTITION BY LIST (status)
( PARTITION p_northwest VALUES ('OR', 'WA')
 ( SUBPARTITION p_nw_bad VALUES ('B')
 , SUBPARTITION p_nw_average VALUES ('A')
 , SUBPARTITION p_nw_good VALUES ('G')
 )
, PARTITION p_southwest VALUES ('AZ', 'UT', 'NM')
 ( SUBPARTITION p_sw_bad VALUES ('B')
 , SUBPARTITION p_sw_average VALUES ('A')
 , SUBPARTITION p_sw_good VALUES ('G')
 )
, PARTITION p_northeast VALUES ('NY', 'VM', 'NJ')
 ( SUBPARTITION p_ne_bad VALUES ('B')
 , SUBPARTITION p_ne_average VALUES ('A')
 , SUBPARTITION p_ne_good VALUES ('G')
 )
, PARTITION p_southeast VALUES ('FL', 'GA')
 ( SUBPARTITION p_se_bad VALUES ('B')
 , SUBPARTITION p_se_average VALUES ('A')
 , SUBPARTITION p_se_good VALUES ('G')
 )
, PARTITION p_northcentral VALUES ('SD', 'WI')
 ( SUBPARTITION p_nc_bad VALUES ('B')
 , SUBPARTITION p_nc_average VALUES ('A')
 , SUBPARTITION p_nc_good VALUES ('G')
 )
, PARTITION p_southcentral VALUES ('OK', 'TX')
 ( SUBPARTITION p_sc_bad VALUES ('B')
 , SUBPARTITION p_sc_average VALUES ('A')
 , SUBPARTITION p_sc_good VALUES ('G')
 )
);
To learn how using a subpartition template can simplify the specification of a composite partitioned table, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
Creating Composite List-Range Partitioned Tables
The following example shows an accounts table that is list partitioned by region and subpartitioned using range by account balance. Note that row movement is enabled. Subpartitions for different list partitions could have different ranges specified.
CREATE TABLE accounts
( id             NUMBER
, account_number NUMBER
, customer_id    NUMBER
, balance        NUMBER
, branch_id      NUMBER
, region         VARCHAR(2)
, status         VARCHAR2(1)
)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (balance)
( PARTITION p_northwest VALUES ('OR', 'WA')
 ( SUBPARTITION p_nw_low VALUES LESS THAN (1000)
 , SUBPARTITION p_nw_average VALUES LESS THAN (10000)
 , SUBPARTITION p_nw_high VALUES LESS THAN (100000)
 , SUBPARTITION p_nw_extraordinary VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_southwest VALUES ('AZ', 'UT', 'NM')
 ( SUBPARTITION p_sw_low VALUES LESS THAN (1000)
 , SUBPARTITION p_sw_average VALUES LESS THAN (10000)
 , SUBPARTITION p_sw_high VALUES LESS THAN (100000)
 , SUBPARTITION p_sw_extraordinary VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_northeast VALUES ('NY', 'VM', 'NJ')
 ( SUBPARTITION p_ne_low VALUES LESS THAN (1000)
 , SUBPARTITION p_ne_average VALUES LESS THAN (10000)
 , SUBPARTITION p_ne_high VALUES LESS THAN (100000)
 , SUBPARTITION p_ne_extraordinary VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_southeast VALUES ('FL', 'GA')
 ( SUBPARTITION p_se_low VALUES LESS THAN (1000)
 , SUBPARTITION p_se_average VALUES LESS THAN (10000)
 , SUBPARTITION p_se_high VALUES LESS THAN (100000)
 , SUBPARTITION p_se_extraordinary VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_northcentral VALUES ('SD', 'WI')
 ( SUBPARTITION p_nc_low VALUES LESS THAN (1000)
 , SUBPARTITION p_nc_average VALUES LESS THAN (10000)
 , SUBPARTITION p_nc_high VALUES LESS THAN (100000)
 , SUBPARTITION p_nc_extraordinary VALUES LESS THAN (MAXVALUE)
 )
, PARTITION p_southcentral VALUES ('OK', 'TX')
 ( SUBPARTITION p_sc_low VALUES LESS THAN (1000)
 , SUBPARTITION p_sc_average VALUES LESS THAN (10000)
 , SUBPARTITION p_sc_high VALUES LESS THAN (100000)
 , SUBPARTITION p_sc_extraordinary VALUES LESS THAN (MAXVALUE)
 )
) ENABLE ROW MOVEMENT;

Creating Composite Interval-* Partitioned Tables

The concepts of interval-* composite partitioning are similar to the concepts for range-* partitioning. However, you extend the PARTITION BY RANGE clause to include the INTERVAL definition. You must specify at least one range partition using the PARTITION clause. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database automatically creates interval partitions for data beyond that transition point.
The subpartitions for intervals in an interval-* partitioned table will be created when the database creates the interval. You can specify the definition of future subpartitions only through the use of a subpartition template. To learn more about how to use a subpartition template, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
Creating Composite Interval-Hash Partitioned Tables
You can create an interval-hash partitioned table with multiple hash partitions using one of the following methods:
  • Specify a number of hash partitions in the PARTITIONS clause.
  • Use a subpartition template.
If you do not use either of these methods, then future interval partitions will only get a single hash subpartition.
The following example shows the sales table, interval partitioned using monthly intervals on time_id, with hash subpartitions by cust_id. Note that this example specifies a number of hash partitions, without any specific tablespace assignment to the individual hash partitions.
CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4
( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy')))
PARALLEL;
The following example shows the same sales table, interval partitioned using monthly intervals on time_id, again with hash subpartitions by cust_id. This time, however, individual hash partitions will be stored in separate tablespaces. Note that the subpartition template is used in order to define the tablespace assignment for future hash subpartitions. To learn more about how to use a subpartition template, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
SUBPARTITION BY hash(cust_id)
  SUBPARTITION template
  ( SUBPARTITION p1 TABLESPACE ts1
  , SUBPARTITION p2 TABLESPACE ts2
  , SUBPARTITION p3 TABLESPACE ts3
  , SUBPARTITION P4 TABLESPACE ts4
  )
( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy'))
) PARALLEL;
Creating Composite Interval-List Partitioned Tables
The only way to define list subpartitions for future interval partitions is through the use of the subpartition template. If you do not use the subpartitioning template, then the only subpartition that will be created for every interval partition is a DEFAULT subpartition. To learn more about how to use a subpartition template, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
The following example shows the sales table, interval partitioned using daily intervals on time_id, with list subpartitions by channel_id.
CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
PARTITION BY RANGE (time_id) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
SUBPARTITION BY LIST (channel_id)
  SUBPARTITION TEMPLATE
  ( SUBPARTITION p_catalog VALUES ('C')
  , SUBPARTITION p_internet VALUES ('I')
  , SUBPARTITION p_partners VALUES ('P')
  , SUBPARTITION p_direct_sales VALUES ('S')
  , SUBPARTITION p_tele_sales VALUES ('T')
  )
( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy')))
PARALLEL;
Creating Composite Interval-Range Partitioned Tables
The only way to define range subpartitions for future interval partitions is through the use of the subpartition template. If you do not use the subpartition template, then the only subpartition that will be created for every interval partition is a range subpartition with the MAXVALUE upper boundary. To learn more about how to use a subpartition template, see "Using Subpartition Templates to Describe Composite Partitioned Tables".
The following example shows the sales table, interval partitioned using daily intervals on time_id, with range subpartitions by amount_sold.
CREATE TABLE sales
 ( prod_id       NUMBER(6)
 , cust_id       NUMBER
 , time_id       DATE
 , channel_id    CHAR(1)
 , promo_id      NUMBER(6)
 , quantity_sold NUMBER(3)
 , amount_sold   NUMBER(10,2)
 )
PARTITION BY RANGE (time_id) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
SUBPARTITION BY RANGE (amount_sold)
  SUBPARTITION TEMPLATE
  ( SUBPARTITION p_low VALUES LESS THAN (1000)
  , SUBPARTITION p_medium VALUES LESS THAN (4000)
  , SUBPARTITION p_high VALUES LESS THAN (8000)
  , SUBPARTITION p_ultimate VALUES LESS THAN (MAXVALUE)
  )
( PARTITION before_2000 VALUES LESS THAN (TO_DATE('01-JAN-2000','dd-MON-yyyy')))
PARALLEL;
Interval Partitioning:

Let us consider the following example:
create table sales
(
sales_id number,
sales_dt date
)
partition by range (sales_dt)
(
partition p0901 values less than (to_date('2009-02-01','yyyy-mm-dd')),
partition p0902 values less than (to_date('2009-03-01','yyyy-mm-dd'))
);

Only two partitions are defined here: January 2009 and February 2009.
Now if a new record is inserted having sales_dt value as March 2009, it will fail with following error:

ORA-14400: inserted partition key does not map to any partition

Prior to 11g, you had to add a partition for March 2009 before the record could be inserted.
Creating partitions in advance like this is often impractical.


In 11g, Oracle introduced a new partitioning type called INTERVAL PARTITIONING. Let us have a look at its benefits:
create table sales
(
sales_id number,
sales_dt date
)
partition by range (sales_dt)
interval (numtoyminterval(1,'MONTH'))
( partition p0901 values less than (to_date('2009-02-01','yyyy-mm-dd')) );


SQL> insert into sales values (1,'01-jun-09');
You can see that this time it did not generate the ORA-14400 error. Let us see what Oracle did to accommodate the data beyond the defined partition bound.

Here we go.
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'SALES';

PARTITION_NAME HIGH_VALUE
--------------- ----------------------------------------------------------------
P0901 TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_C
ALENDAR=GREGORIA

SYS_P41 TO_DATE(' 2009-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_C
ALENDAR=GREGORIA
Note: the partition named SYS_P41, with a high value of July 1, 2009, will hold data up to the end of June. This partition was created dynamically by Oracle and has a system-generated name.


Let us insert a value lower than the highest value, for example May 1, 2009.
SQL> insert into sales values (1,'01-may-09');

1 row created.
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'SALES';

PARTITION_NAME HIGH_VALUE
--------------- ----------------------------------------------------------------
P0901           TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_C
ALENDAR=GREGORIA

SYS_P41 TO_DATE(' 2009-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_C
ALENDAR=GREGORIA

SYS_P42 TO_DATE(' 2009-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_C
ALENDAR=GREGORIA


The optional STORE IN clause lets you specify one or more tablespaces; the database stores subsequently created interval partitions in them using a round-robin algorithm.
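As a sketch of the syntax (the tablespace names tbs_a, tbs_b, and tbs_c are hypothetical), the STORE IN clause is placed right after the INTERVAL clause:

```sql
CREATE TABLE sales_store_in
( sales_id number,
  sales_dt date )
PARTITION BY RANGE (sales_dt)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
STORE IN (tbs_a, tbs_b, tbs_c)
( PARTITION p0901 VALUES LESS THAN (TO_DATE('2009-02-01','yyyy-mm-dd')) );
```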


The following example specifies that above the transition point of January 1, 2007, partitions are created with a width of one week.
CREATE TABLE interval_sales
( prod_id NUMBER(6)
, cust_id NUMBER
, time_id DATE
, channel_id CHAR(1)
, promo_id NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold NUMBER(10,2) )
PARTITION BY RANGE (time_id)
INTERVAL(numtodsinterval(7,'day'))
( PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')) );


The high bound of partition p0 represents the transition point. p0 is in the range section while all partitions above it fall into the interval section.


Interval Partitioning Essentials - Common Questions - Top Issues (Doc ID 1479115.1)

Specifying a Subpartition Template for a *-Hash Partitioned Table

In the case of [range | interval | list]-hash partitioned tables, the subpartition template can describe the subpartitions in detail, or it can specify just the number of hash subpartitions.
The following example creates a range-hash partitioned table using a subpartition template:
CREATE TABLE emp_sub_template (deptno NUMBER, empname VARCHAR(32), grade NUMBER)
    PARTITION BY RANGE(deptno) SUBPARTITION BY HASH(empname)
    SUBPARTITION TEMPLATE
        (SUBPARTITION a TABLESPACE ts1,
         SUBPARTITION b TABLESPACE ts2,
         SUBPARTITION c TABLESPACE ts3,
         SUBPARTITION d TABLESPACE ts4
        )
   (PARTITION p1 VALUES LESS THAN (1000),
    PARTITION p2 VALUES LESS THAN (2000),
    PARTITION p3 VALUES LESS THAN (MAXVALUE)
   );
This example produces the following table description:
  • Every partition has four subpartitions as described in the subpartition template.
  • Each subpartition has a tablespace specified. It is required that if a tablespace is specified for one subpartition in a subpartition template, then one must be specified for all.
  • The names of the subpartitions, unless you use interval-* subpartitioning, are generated by concatenating the partition name with the subpartition name in the form:
  • partition name_subpartition name
  • For interval-* subpartitioning, the subpartition names are system-generated in the form:
  • SYS_SUBPn
The following query displays the subpartition names and tablespaces:
SQL> SELECT TABLESPACE_NAME, PARTITION_NAME, SUBPARTITION_NAME
 2  FROM DBA_TAB_SUBPARTITIONS WHERE TABLE_NAME='EMP_SUB_TEMPLATE'
 3  ORDER BY TABLESPACE_NAME;

TABLESPACE_NAME PARTITION_NAME  SUBPARTITION_NAME
--------------- --------------- ------------------
TS1             P1              P1_A
TS1             P2              P2_A
TS1             P3              P3_A
TS2             P1              P1_B
TS2             P2              P2_B
TS2             P3              P3_B
TS3             P1              P1_C
TS3             P2              P2_C
TS3             P3              P3_C
TS4             P1              P1_D
TS4             P2              P2_D
TS4             P3              P3_D

12 rows selected.

Specifying a Subpartition Template for a *-List Partitioned Table

The following example, for a range-list partitioned table, illustrates how using a subpartition template can help you stripe data across tablespaces. In this example, a table is created where the table subpartitions are vertically striped, meaning that subpartition n from every partition is in the same tablespace.
CREATE TABLE stripe_regional_sales
           ( deptno number, item_no varchar2(20),
             txn_date date, txn_amount number, state varchar2(2))
  PARTITION BY RANGE (txn_date)
  SUBPARTITION BY LIST (state)
  SUBPARTITION TEMPLATE
     (SUBPARTITION northwest VALUES ('OR', 'WA') TABLESPACE tbs_1,
      SUBPARTITION southwest VALUES ('AZ', 'UT', 'NM') TABLESPACE tbs_2,
      SUBPARTITION northeast VALUES ('NY', 'VM', 'NJ') TABLESPACE tbs_3,
      SUBPARTITION southeast VALUES ('FL', 'GA') TABLESPACE tbs_4,
      SUBPARTITION midwest VALUES ('SD', 'WI') TABLESPACE tbs_5,
      SUBPARTITION south VALUES ('AL', 'AK') TABLESPACE tbs_6,
      SUBPARTITION others VALUES (DEFAULT ) TABLESPACE tbs_7
     )
 (PARTITION q1_1999 VALUES LESS THAN ( TO_DATE('01-APR-1999','DD-MON-YYYY')),
  PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('01-JUL-1999','DD-MON-YYYY')),
  PARTITION q3_1999 VALUES LESS THAN ( TO_DATE('01-OCT-1999','DD-MON-YYYY')),
  PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
 );
If you specified the tablespaces at the partition level (for example, tbs_1 for partition q1_1999, tbs_2 for partition q2_1999, tbs_3 for partition q3_1999, and tbs_4 for partition q4_1999) and not in the subpartition template, then the table would be horizontally striped. All subpartitions would be in the tablespace of the owning partition.

Using Multicolumn Partitioning Keys

For range-partitioned and hash-partitioned tables, you can specify up to 16 partitioning key columns. Multicolumn partitioning should be used when the partitioning key is composed of several columns and subsequent columns define a higher granularity than the preceding ones. The most common scenario is a decomposed DATE or TIMESTAMP key, consisting of separated columns, for year, month, and day.
In evaluating multicolumn partitioning keys, the database uses the second value only if the first value cannot uniquely identify a single target partition, and uses the third value only if the first and second do not determine the correct partition, and so forth. A value cannot determine the correct partition only when a partition bound exactly matches that value and the same bound is defined for the next partition. The nth column will therefore be investigated only when all previous (n-1) values of the multicolumn key exactly match the (n-1) bounds of a partition. A second column, for example, will be evaluated only if the first column exactly matches the partition boundary value. If all column values exactly match all of the bound values for a partition, then the database will determine that the row does not fit in this partition and will consider the next partition for a match.
In the case of nondeterministic boundary definitions (successive partitions with identical values for at least one column), the partition boundary value becomes an inclusive value, representing a "less than or equal to" boundary. This is in contrast to deterministic boundaries, where the values are always regarded as "less than" boundaries.
The following example illustrates the column evaluation for a multicolumn range-partitioned table, storing the actual DATE information in three separate columns: year, month, and day. The partitioning granularity is a calendar quarter. The partitioned table being evaluated is created as follows:
CREATE TABLE sales_demo (
  year          NUMBER,
  month         NUMBER,
  day           NUMBER,
  amount_sold   NUMBER)
PARTITION BY RANGE (year,month)
 (PARTITION before2001 VALUES LESS THAN (2001,1),
  PARTITION q1_2001    VALUES LESS THAN (2001,4),
  PARTITION q2_2001    VALUES LESS THAN (2001,7),
  PARTITION q3_2001    VALUES LESS THAN (2001,10),
  PARTITION q4_2001    VALUES LESS THAN (2002,1),
  PARTITION future     VALUES LESS THAN (MAXVALUE,0));

REM  12-DEC-2000
INSERT INTO sales_demo VALUES(2000,12,12, 1000);
REM  17-MAR-2001
INSERT INTO sales_demo VALUES(2001,3,17, 2000);
REM  1-NOV-2001
INSERT INTO sales_demo VALUES(2001,11,1, 5000);
REM  1-JAN-2002
INSERT INTO sales_demo VALUES(2002,1,1, 4000);
The year value for 12-DEC-2000 satisfied the first partition, before2001, so no further evaluation is needed:
SELECT * FROM sales_demo PARTITION(before2001);

     YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
     2000         12         12        1000
The information for 17-MAR-2001 is stored in partition q1_2001. The first partitioning key column, year, does not by itself determine the correct partition, so the second partitioning key column, month, must be evaluated.
SELECT * FROM sales_demo PARTITION(q1_2001);

     YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
     2001          3         17        2000
Following the same determination rule as for the previous record, the second column, month, determines partition q4_2001 as correct partition for 1-NOV-2001:
SELECT * FROM sales_demo PARTITION(q4_2001);

     YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
     2001         11          1        5000
The partition for 01-JAN-2002 is determined by evaluating only the year column, which indicates the future partition:
SELECT * FROM sales_demo PARTITION(future);

     YEAR      MONTH        DAY AMOUNT_SOLD
---------- ---------- ---------- -----------
     2002          1          1        4000
If the database encounters MAXVALUE in one of the partitioning key columns, then all other values of subsequent columns become irrelevant. That is, a definition of partition future in the preceding example, having a bound of (MAXVALUE,0) is equivalent to a bound of (MAXVALUE,100) or a bound of (MAXVALUE,MAXVALUE).
The following example illustrates the use of a multicolumn partitioned approach for table supplier_parts, storing the information about which suppliers deliver which parts. To distribute the data in equal-sized partitions, it is not sufficient to partition the table based on the supplier_id, because some suppliers might provide hundreds of thousands of parts, while others provide only a few specialty parts. Instead, you partition the table on (supplier_id, partnum) to manually enforce equal-sized partitions.
CREATE TABLE supplier_parts (
  supplier_id      NUMBER,
  partnum          NUMBER,
  price            NUMBER)
PARTITION BY RANGE (supplier_id, partnum)
 (PARTITION p1 VALUES LESS THAN  (10,100),
  PARTITION p2 VALUES LESS THAN (10,200),
  PARTITION p3 VALUES LESS THAN (MAXVALUE,MAXVALUE));
The following three records are inserted into the table:
INSERT INTO supplier_parts VALUES (5,5, 1000);
INSERT INTO supplier_parts VALUES (5,150, 1000);
INSERT INTO supplier_parts VALUES (10,100, 1000);
The first two records are inserted into partition p1, uniquely identified by supplier_id. However, the third record is inserted into partition p2; it matches all range boundary values of partition p1 exactly and the database therefore considers the following partition for a match. The value of partnum satisfies the criteria < 200, so it is inserted into partition p2.
SELECT * FROM supplier_parts PARTITION (p1);

SUPPLIER_ID    PARTNUM      PRICE
----------- ---------- ----------
         5          5       1000
         5        150       1000

SELECT * FROM supplier_parts PARTITION (p2);

SUPPLIER_ID    PARTNUM      PRICE
----------- ---------- ----------
         10       100       1000
Every row with supplier_id < 10 will be stored in partition p1, regardless of the partnum value. The column partnum will be evaluated only if supplier_id =10, and the corresponding rows will be inserted into partition p1, p2, or even into p3 when partnum >=200. To achieve equal-sized partitions for ranges of supplier_parts, you could choose a composite range-hash partitioned table, range partitioned by supplier_id, hash subpartitioned by partnum.
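A minimal sketch of that composite range-hash alternative (the table name, range boundaries, and subpartition count are illustrative):

```sql
CREATE TABLE supplier_parts_rh (
  supplier_id      NUMBER,
  partnum          NUMBER,
  price            NUMBER)
PARTITION BY RANGE (supplier_id)
SUBPARTITION BY HASH (partnum) SUBPARTITIONS 4
 (PARTITION p1 VALUES LESS THAN (10),
  PARTITION p2 VALUES LESS THAN (20),
  PARTITION p3 VALUES LESS THAN (MAXVALUE));
```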
Defining the partition boundaries for multicolumn partitioned tables must obey some rules. For example, consider a table that is range partitioned on three columns a, b, and c. The individual partitions have range values represented as follows:
P0(a0, b0, c0)
P1(a1, b1, c1)
P2(a2, b2, c2)
...
Pn(an, bn, cn)
The range values you provide for each partition must follow these rules:
  • a0 must be less than or equal to a1, and a1 must be less than or equal to a2, and so on.
  • If a0=a1, then b0 must be less than or equal to b1. If a0 < a1, then b0 and b1 can have any values. If a0=a1 and b0=b1, then c0 must be less than or equal to c1. If b0<b1, then c0 and c1 can have any values, and so on.
  • If a1=a2, then b1 must be less than or equal to b2. If a1<a2, then b1 and b2 can have any values. If a1=a2 and b1=b2, then c1 must be less than or equal to c2. If b1<b2, then c1 and c2 can have any values, and so on.

Using Virtual Column-Based Partitioning

In the context of partitioning, a virtual column can be used as any regular column. All partition methods are supported when using virtual columns, including interval partitioning and all different combinations of composite partitioning. A virtual column that you want to use as the partitioning column cannot use calls to a PL/SQL function.
See Also:
Oracle Database SQL Language Reference for the syntax on how to create a virtual column
The following example shows the sales table partitioned by range-range using a virtual column for the subpartitioning key. The virtual column calculates the total value of a sale by multiplying amount_sold and quantity_sold.
CREATE TABLE sales
 ( prod_id       NUMBER(6) NOT NULL
 , cust_id       NUMBER NOT NULL
 , time_id       DATE NOT NULL
 , channel_id    CHAR(1) NOT NULL
 , promo_id      NUMBER(6) NOT NULL
 , quantity_sold NUMBER(3) NOT NULL
 , amount_sold   NUMBER(10,2) NOT NULL
 , total_amount AS (quantity_sold * amount_sold)
 )
PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
SUBPARTITION BY RANGE(total_amount)
SUBPARTITION TEMPLATE
  ( SUBPARTITION p_small VALUES LESS THAN (1000)
  , SUBPARTITION p_medium VALUES LESS THAN (5000)
  , SUBPARTITION p_large VALUES LESS THAN (10000)
  , SUBPARTITION p_extreme VALUES LESS THAN (MAXVALUE)
  )
(PARTITION sales_before_2007 VALUES LESS THAN
       (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
)
ENABLE ROW MOVEMENT
PARALLEL NOLOGGING;


ALTER TABLE Maintenance Operations for Table Partitions

For each maintenance operation, the statement supported by each partitioning type is listed below.

Adding Partitions
    Range / Composite Range-*:        ADD PARTITION
    Interval / Composite Interval-*:  ADD PARTITION
    Hash:                             ADD PARTITION
    List / Composite List-*:          ADD PARTITION
    Reference:                        N/A (1)

Coalescing Partitions
    Range / Composite Range-*:        N/A
    Interval / Composite Interval-*:  N/A
    Hash:                             COALESCE PARTITION
    List / Composite List-*:          N/A
    Reference:                        N/A (1)

Dropping Partitions
    Range / Composite Range-*:        DROP PARTITION
    Interval / Composite Interval-*:  DROP PARTITION
    Hash:                             N/A
    List / Composite List-*:          DROP PARTITION
    Reference:                        N/A (1)

Exchanging Partitions
    Range / Composite Range-*:        EXCHANGE PARTITION
    Interval / Composite Interval-*:  EXCHANGE PARTITION
    Hash:                             EXCHANGE PARTITION
    List / Composite List-*:          EXCHANGE PARTITION
    Reference:                        EXCHANGE PARTITION

Merging Partitions
    Range / Composite Range-*:        MERGE PARTITIONS
    Interval / Composite Interval-*:  MERGE PARTITIONS
    Hash:                             N/A
    List / Composite List-*:          MERGE PARTITIONS
    Reference:                        N/A (1)

Modifying Default Attributes
    Range / Composite Range-*:        MODIFY DEFAULT ATTRIBUTES
    Interval / Composite Interval-*:  MODIFY DEFAULT ATTRIBUTES
    Hash:                             MODIFY DEFAULT ATTRIBUTES
    List / Composite List-*:          MODIFY DEFAULT ATTRIBUTES
    Reference:                        MODIFY DEFAULT ATTRIBUTES

Modifying Real Attributes of Partitions
    Range / Composite Range-*:        MODIFY PARTITION
    Interval / Composite Interval-*:  MODIFY PARTITION
    Hash:                             MODIFY PARTITION
    List / Composite List-*:          MODIFY PARTITION
    Reference:                        MODIFY PARTITION

Modifying List Partitions: Adding Values
    Range / Composite Range-*:        N/A
    Interval / Composite Interval-*:  N/A
    Hash:                             N/A
    List / Composite List-*:          MODIFY PARTITION ... ADD VALUES
    Reference:                        N/A

Modifying List Partitions: Dropping Values
    Range / Composite Range-*:        N/A
    Interval / Composite Interval-*:  N/A
    Hash:                             N/A
    List / Composite List-*:          MODIFY PARTITION ... DROP VALUES
    Reference:                        N/A

Moving Partitions
    Range / Composite Range-*:        MOVE SUBPARTITION
    Interval / Composite Interval-*:  MOVE SUBPARTITION
    Hash:                             MOVE PARTITION
    List / Composite List-*:          MOVE SUBPARTITION
    Reference:                        MOVE PARTITION

Renaming Partitions
    Range / Composite Range-*:        RENAME PARTITION
    Interval / Composite Interval-*:  RENAME PARTITION
    Hash:                             RENAME PARTITION
    List / Composite List-*:          RENAME PARTITION
    Reference:                        RENAME PARTITION

Splitting Partitions
    Range / Composite Range-*:        SPLIT PARTITION
    Interval / Composite Interval-*:  SPLIT PARTITION
    Hash:                             N/A
    List / Composite List-*:          SPLIT PARTITION
    Reference:                        N/A (1)

Truncating Partitions
    Range / Composite Range-*:        TRUNCATE PARTITION
    Interval / Composite Interval-*:  TRUNCATE PARTITION
    Hash:                             TRUNCATE PARTITION
    List / Composite List-*:          TRUNCATE PARTITION
    Reference:                        TRUNCATE PARTITION

(1) These operations cannot be performed directly on reference-partitioned tables; when performed on the parent table, they cascade to all descendant tables.



ALTER TABLE Maintenance Operations for Table Subpartitions

For each maintenance operation, the statement supported by each composite partitioning type is listed below.

Adding Partitions
    Composite *-Range:  MODIFY PARTITION ... ADD SUBPARTITION
    Composite *-Hash:   MODIFY PARTITION ... ADD SUBPARTITION
    Composite *-List:   MODIFY PARTITION ... ADD SUBPARTITION

Coalescing Partitions
    Composite *-Range:  N/A
    Composite *-Hash:   MODIFY PARTITION ... COALESCE SUBPARTITION
    Composite *-List:   N/A

Dropping Partitions
    Composite *-Range:  DROP SUBPARTITION
    Composite *-Hash:   N/A
    Composite *-List:   DROP SUBPARTITION

Exchanging Partitions
    Composite *-Range:  EXCHANGE SUBPARTITION
    Composite *-Hash:   N/A
    Composite *-List:   EXCHANGE SUBPARTITION

Merging Partitions
    Composite *-Range:  MERGE SUBPARTITIONS
    Composite *-Hash:   N/A
    Composite *-List:   MERGE SUBPARTITIONS

Modifying Default Attributes
    Composite *-Range:  MODIFY DEFAULT ATTRIBUTES FOR PARTITION
    Composite *-Hash:   MODIFY DEFAULT ATTRIBUTES FOR PARTITION
    Composite *-List:   MODIFY DEFAULT ATTRIBUTES FOR PARTITION

Modifying Real Attributes of Partitions
    Composite *-Range:  MODIFY SUBPARTITION
    Composite *-Hash:   MODIFY SUBPARTITION
    Composite *-List:   MODIFY SUBPARTITION

Modifying List Partitions: Adding Values
    Composite *-Range:  N/A
    Composite *-Hash:   N/A
    Composite *-List:   MODIFY SUBPARTITION ... ADD VALUES

Modifying List Partitions: Dropping Values
    Composite *-Range:  N/A
    Composite *-Hash:   N/A
    Composite *-List:   MODIFY SUBPARTITION ... DROP VALUES

Modifying a Subpartition Template
    Composite *-Range:  SET SUBPARTITION TEMPLATE
    Composite *-Hash:   SET SUBPARTITION TEMPLATE
    Composite *-List:   SET SUBPARTITION TEMPLATE

Moving Partitions
    Composite *-Range:  MOVE SUBPARTITION
    Composite *-Hash:   MOVE SUBPARTITION
    Composite *-List:   MOVE SUBPARTITION

Renaming Partitions
    Composite *-Range:  RENAME SUBPARTITION
    Composite *-Hash:   RENAME SUBPARTITION
    Composite *-List:   RENAME SUBPARTITION

Splitting Partitions
    Composite *-Range:  SPLIT SUBPARTITION
    Composite *-Hash:   N/A
    Composite *-List:   SPLIT SUBPARTITION

Truncating Partitions
    Composite *-Range:  TRUNCATE SUBPARTITION
    Composite *-Hash:   TRUNCATE SUBPARTITION
    Composite *-List:   TRUNCATE SUBPARTITION





ALTER INDEX Maintenance Operations for Index Partitions


For each maintenance operation, the statement supported for global and local indexes is listed by type of index partitioning (Range; Hash and List; Composite). A dash means the operation does not apply.

Adding Index Partitions
    Global index:
        Range:          -
        Hash and List:  ADD PARTITION (hash only)
        Composite:      -
    Local index:        N/A for all partitioning types

Dropping Index Partitions
    Global index:
        Range:          DROP PARTITION
        Hash and List:  -
        Composite:      -
    Local index:        N/A for all partitioning types

Modifying Default Attributes of Index Partitions
    Global index:
        Range:          MODIFY DEFAULT ATTRIBUTES
        Hash and List:  -
        Composite:      -
    Local index:
        Range:          MODIFY DEFAULT ATTRIBUTES
        Hash and List:  MODIFY DEFAULT ATTRIBUTES
        Composite:      MODIFY DEFAULT ATTRIBUTES, MODIFY DEFAULT ATTRIBUTES FOR PARTITION

Modifying Real Attributes of Index Partitions
    Global index:
        Range:          MODIFY PARTITION
        Hash and List:  -
        Composite:      -
    Local index:
        Range:          MODIFY PARTITION
        Hash and List:  MODIFY PARTITION
        Composite:      MODIFY PARTITION, MODIFY SUBPARTITION

Rebuilding Index Partitions
    Global index:
        Range:          REBUILD PARTITION
        Hash and List:  -
        Composite:      -
    Local index:
        Range:          REBUILD PARTITION
        Hash and List:  REBUILD PARTITION
        Composite:      REBUILD SUBPARTITION

Renaming Index Partitions
    Global index:
        Range:          RENAME PARTITION
        Hash and List:  -
        Composite:      -
    Local index:
        Range:          RENAME PARTITION
        Hash and List:  RENAME PARTITION
        Composite:      RENAME PARTITION, RENAME SUBPARTITION

Splitting Index Partitions
    Global index:
        Range:          SPLIT PARTITION
        Hash and List:  -
        Composite:      -
    Local index:        N/A for all partitioning types


Partition Maintenance Operations:

The following operations support the UPDATE INDEXES clause:
  • ADD PARTITION | SUBPARTITION
  • COALESCE PARTITION | SUBPARTITION
  • DROP PARTITION | SUBPARTITION
  • EXCHANGE PARTITION | SUBPARTITION
  • MERGE PARTITION | SUBPARTITION
  • MOVE PARTITION | SUBPARTITION
  • SPLIT PARTITION | SUBPARTITION
  • TRUNCATE PARTITION | SUBPARTITION
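For example, reusing the sales table from the examples in this document, a partition can be moved without leaving its global indexes UNUSABLE (a sketch; the partition and tablespace names are taken from earlier examples):

```sql
ALTER TABLE sales MOVE PARTITION dec98
  TABLESPACE tsx
  UPDATE INDEXES;
```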


SKIP_UNUSABLE_INDEXES Initialization Parameter
SKIP_UNUSABLE_INDEXES is an initialization parameter with a default value of TRUE. This setting disables error reporting for indexes and index partitions marked UNUSABLE. If you do not want the database to choose an alternative execution plan to avoid the unusable elements, then set this parameter to FALSE.
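For example, to have statements raise an error on unusable indexes instead of silently skipping them, you can change the parameter at session or system scope:

```sql
ALTER SESSION SET skip_unusable_indexes = FALSE;

ALTER SYSTEM SET skip_unusable_indexes = FALSE;
```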

Adding a Partition:


Use the ALTER TABLE ... ADD PARTITION statement to add a new partition to the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause.
For example, consider the table, sales, which contains data for the current month in addition to the previous 12 months. On January 1, 1999, you add a partition for January, which is stored in tablespace tsx.

ALTER TABLE sales
     ADD PARTITION jan99 VALUES LESS THAN ( '01-FEB-1999' )
     TABLESPACE tsx;
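For comparison, here is a sketch of adding a partition in the middle by splitting an existing one (the new partition names and the split date are hypothetical):

```sql
ALTER TABLE sales SPLIT PARTITION jan99
  AT ( TO_DATE('15-JAN-1999','DD-MON-YYYY') )
  INTO ( PARTITION jan99_a, PARTITION jan99_b )
  UPDATE INDEXES;
```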

Adding a Partition to a Hash-Partitioned Table



The following statements add a new hash partition; the first lets the database generate the partition name and choose the tablespace, while the second specifies them explicitly.

ALTER TABLE scubagear ADD PARTITION;

ALTER TABLE scubagear
     ADD PARTITION p_named TABLESPACE gear5;

Adding a Partition to a List-Partitioned Table

The following statement illustrates how to add a new partition to a list-partitioned table. In this example, physical attributes and NOLOGGING are specified for the partition being added.

ALTER TABLE q1_sales_by_region
  ADD PARTITION q1_nonmainland VALUES ('HI', 'PR')
     STORAGE (INITIAL 20K NEXT 20K) TABLESPACE tbs_3
     NOLOGGING;

Adding a Partition to an Interval-Partitioned Table

You cannot explicitly add a partition to an interval-partitioned table unless you first lock the partition, which triggers the creation of the partition. The database automatically creates a partition for an interval when data for that interval is inserted. In general, you only need to explicitly create interval partitions for a partition exchange load scenario.
To change the interval for future partitions, use the SET INTERVAL clause of the ALTER TABLE statement. This clause changes the interval for partitions beyond the current highest boundary of all materialized interval partitions.
You also use the SET INTERVAL clause to migrate an existing range partitioned or range-* composite partitioned table into an interval or interval-* partitioned table. If you want to disable the creation of future interval partitions, and effectively revert back to a range-partitioned table, then use an empty value in the SET INTERVAL clause. Created interval partitions will then be transformed into range partitions with their current high values.
To increase the interval for date ranges, you need to ensure that you are at a relevant boundary for the new interval. For example, if the highest interval partition boundary in your daily interval-partitioned table transactions is January 30, 2007, and you want to change to a monthly partition interval, then the following statement results in an error:
ALTER TABLE transactions SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));

ORA-14767: Cannot specify this interval with existing high bounds
You need to create another daily partition with a high bound of February 1, 2007 in order to successfully change to a monthly interval:
LOCK TABLE transactions PARTITION FOR(TO_DATE('31-JAN-2007','dd-MON-yyyy')) IN SHARE MODE;

ALTER TABLE transactions SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));
The lower partitions of an interval-partitioned table are range partitions. You can split range partitions in order to add more partitions in the range portion of the interval-partitioned table.
In order to disable interval partitioning on the transactions table, use:
ALTER TABLE transactions SET INTERVAL ();


Use one of the following statements to drop a table partition or subpartition:
  • ALTER TABLE ... DROP PARTITION to drop a table partition
  • ALTER TABLE ... DROP SUBPARTITION to drop a subpartition of a composite *-[range | list] partitioned table

Dropping a Partition from a Table that Contains Data and Global Indexes
If the partition contains data and one or more global indexes are defined on the table, then use one of the following methods to drop the table partition.
Method 1
Leave the global indexes in place during the ALTER TABLE ... DROP PARTITION statement. Afterward, you must rebuild any global indexes (whether partitioned or not) because the index (or index partitions) will have been marked UNUSABLE. The following statements provide an example of dropping partition dec98 from the sales table, then rebuilding its global non-partitioned index.
ALTER TABLE sales DROP PARTITION dec98;
ALTER INDEX sales_area_ix REBUILD;
If index sales_area_ix were a range-partitioned global index, then all partitions of the index would require rebuilding. Further, it is not possible to rebuild all partitions of an index in one statement. You must issue a separate REBUILD statement for each partition in the index. The following statements rebuild the index partitions jan99_ix, feb99_ix, mar99_ix, ..., dec99_ix.
ALTER INDEX sales_area_ix REBUILD PARTITION jan99_ix;
ALTER INDEX sales_area_ix REBUILD PARTITION feb99_ix;
ALTER INDEX sales_area_ix REBUILD PARTITION mar99_ix;
...
ALTER INDEX sales_area_ix REBUILD PARTITION dec99_ix;
This method is most appropriate for large tables where the partition being dropped contains a significant percentage of the total data in the table.
Method 2
Issue the DELETE statement to delete all rows from the partition before you issue the ALTER TABLE ... DROP PARTITION statement. The DELETE statement updates the global indexes.
For example, to drop the first partition, issue the following statements:
DELETE FROM sales partition (dec98);
ALTER TABLE sales DROP PARTITION dec98;
This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.
Method 3
Specify UPDATE INDEXES in the ALTER TABLE statement. Doing so causes the global index to be updated at the time the partition is dropped.
ALTER TABLE sales DROP PARTITION dec98
    UPDATE INDEXES;
Dropping a Partition Containing Data and Referential Integrity Constraints
If a partition contains data and the table has referential integrity constraints, choose either of the following methods to drop the table partition. This table has a local index only, so it is not necessary to rebuild any indexes.
Method 1
If there is no data referencing the data in the partition you want to drop, then you can disable the integrity constraints on the referencing tables, issue the ALTER TABLE ... DROP PARTITION statement, then re-enable the integrity constraints.
This method is most appropriate for large tables where the partition being dropped contains a significant percentage of the total data in the table. If there is still data referencing the data in the partition to be dropped, then make sure to remove all the referencing data in order to be able to re-enable the referential integrity constraints.
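A sketch of Method 1, assuming a hypothetical child table child_sales whose foreign key constraint fk_sales references the partitioned sales table:

```sql
ALTER TABLE child_sales DISABLE CONSTRAINT fk_sales;

ALTER TABLE sales DROP PARTITION dec98;

ALTER TABLE child_sales ENABLE CONSTRAINT fk_sales;
```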
Method 2
If there is data in the referencing tables, then you can issue the DELETE statement to delete all rows from the partition before you issue the ALTER TABLE ... DROP PARTITION statement. The DELETE statement enforces referential integrity constraints, and also fires triggers and generates redo and undo logs. The delete can succeed if you created the constraints with the ON DELETE CASCADE option, deleting all rows from referencing tables as well.
DELETE FROM sales partition (dec94);
ALTER TABLE sales DROP PARTITION dec94;
This method is most appropriate for small tables or for large tables when the partition being dropped contains a small percentage of the total data in the table.

Dropping Interval Partitions

You can drop interval partitions in an interval-partitioned table. This operation drops the data for that interval only and leaves the interval definition intact. If data is later inserted into the interval just dropped, then the database again creates an interval partition.
You can also drop range partitions in an interval-partitioned table. The rules for dropping a range partition in an interval-partitioned table follow the rules for dropping a range partition in a range-partitioned table. If you drop a range partition in the middle of a set of range partitions, then the lower boundary for the next range partition shifts to the lower boundary of the range partition you just dropped. You cannot drop the highest range partition in the range-partitioned section of an interval-partitioned table.
The following example drops the September 2007 interval partition from the sales table. There are only local indexes so no indexes will be invalidated.
ALTER TABLE sales DROP PARTITION FOR(TO_DATE('01-SEP-2007','dd-MON-yyyy'));

Dropping Index Partitions

You cannot explicitly drop a partition of a local index. Instead, local index partitions are dropped only when you drop a partition from the underlying table.
If a global index partition is empty, then you can explicitly drop it by issuing the ALTER INDEX ... DROP PARTITION statement. However, if a global index partition contains data, then dropping the partition causes the next highest partition to be marked UNUSABLE. For example, suppose you want to drop index partition P1, and P2 is the next highest partition. You must issue the following statements:
ALTER INDEX npr DROP PARTITION P1;
ALTER INDEX npr REBUILD PARTITION P2;


Documentation:

Oracle® Database VLDB and Partitioning Guide 11g Release 2 (11.2)
Part Number E10837-02

Use partitioned indexes



SHAIKDB>create table part1 (i int,j int ,k int,l number,
      constraint iunique unique(i),
      constraint junique unique(j))
     partition by range(i)
    ( partition p1 values less than (10),
    partition p2 values less than (100),
    partition p3 values less than (10000));  

Table created.


SHAIKDB>begin
 2  for i in 1..1000 loop
 3  insert into part1 values (i,i,1,0);
 4  commit;
 5  end loop;
 6  end;
 7  /

PL/SQL procedure successfully completed.


SHAIKDB>select count(*) from part1 partition (p1);

 COUNT(*)
----------
    9

SHAIKDB>select count(*) from part1 partition (p2);

 COUNT(*)
----------
   90

SHAIKDB>select count(*) from part1 partition (p3);

 COUNT(*)
----------
      901


create table part2 (a int not null,b int,c number, constraint afkey foreign key(a) references part1(i))
    partition by reference(afkey);

Table created.

create table part3 (x int,y number,z number not null,constraint zfkey foreign key(z) references part1(j))
       partition by reference (zfkey);

Table created
SHAIKDB>select table_name,partition_name,high_value from dba_tab_partitions where table_name like 'PART_';

TABLE_NAME PARTITION_NAME         HIGH_VALUE
---------- ------------------------------ ----------
PART1       P1                 10
PART1       P2                 100
PART1       P3                 10000
PART2       P1
PART2       P2
PART2       P3
PART3       P1
PART3       P2
PART3       P3


SHAIKDB>begin
    for i in 1..1000 loop
    insert into part2 values(i,i,i);
    commit;
    end loop;
    end;
    /  2    3    4    5    6    7  

PL/SQL procedure successfully completed.

SHAIKDB>begin
    for i in 1..1000 loop
    insert into part3 values(i,i,i);
    commit;
    end loop;
    end;
    /  2    3    4    5    6    7  

PL/SQL procedure successfully completed.

select table_name,partition_name,high_value from dba_tab_partitions where table_name like 'PART_';

TABLE_NAME PARTITION_NAME         HIGH_VALUE
---------- ------------------------------ ----------
PART1       P1                 10
PART1       P2                 100
PART1       P3                 10000
PART2       P1
PART2       P2
PART2       P3
PART3       P1
PART3       P2
PART3       P3

9 rows selected



SHAIKDB>create index part1idx on part1(k);

Index created.
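The index above is a global, nonpartitioned index. For comparison, an index created with the LOCAL keyword is equi-partitioned with the table, one index partition per table partition (the index name here is illustrative):

```sql
create index part1_local_idx on part1(l) local;

select index_name, partition_name, status
  from user_ind_partitions
 where index_name = 'PART1_LOCAL_IDX';
```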



Copying contents from one file into another using python:


shaiks@MAC$vi 8files.py

from sys import argv
from os.path import exists


script, old, new = argv

while True:
    if exists(new):
        print "file already exists\t", new
        if raw_input("Continue with wipe out: y|n:\t") in ('y', 'Y'):
            new1 = open(new, 'w')    # wipe out: truncate the target file
            break
        else:
            print "You entered No.....appending data to the %s file\t" % new
            new1 = open(new, 'a')    # append to the existing file
        break
    else:
        print "Creating new file....%s\t:" % new
        new1 = open(new, 'w')
        break

old1 = open(old).read()
print "Old file contents are%s\n" % old1

print "Appending data to the file\t", new
new1.write(old1)

print "Closing files that were opened..."
new1.close()


#Contents of ps.out
shaiks@MAC$cat ps.out

Write these new lins into the ls file

line3 left blank
Write these new lins into the ls file
Write these new lins into the ls file
line below me left blank


#Contents of ls.out
shaiks@MAC$cat ls.out

I already have this line in this file

##Executing the  script

shaiks@MAC$python 8files.py ps.out ls.out
file already exists    ls.out
Continue with wipe out: y|n:    n
You entered No.....appending data to the ls.out file   
Old file contents are
Write these new lins into the ls file

line3 left blank
Write these new lins into the ls file
Write these new lins into the ls file
line below me left blank



Appending data to the file    ls.out
Closing files that were opened...

shaiks@MAC$cat ls.out

I already have this line in this file

Write these new lins into the ls file

line3 left blank
Write these new lins into the ls file
Write these new lins into the ls file
line below me left blank
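The same create-or-append copy logic can be sketched more compactly in modern Python 3 using context managers, so the files are closed automatically even on error (the function and file names here are illustrative, not the ones from the demo above):

```python
import os

def copy_into(src, dst, overwrite=False):
    """Copy src's contents into dst.

    If dst does not exist it is created; if it exists, src's contents
    are appended unless overwrite is True, in which case dst is
    truncated first.
    """
    # Choose the open mode: write (create/truncate) or append.
    mode = "w" if (overwrite or not os.path.exists(dst)) else "a"
    with open(src) as fin, open(dst, mode) as fout:
        fout.write(fin.read())
```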

 

Python using functions & argv

The code below takes one or more filenames as arguments (minimum one file, no upper limit). For each input file it checks whether the file exists in the current directory ($PWD); if it exists, the script reads the file and displays its contents, and if it does not exist, the script simply skips it.


shaiks@MAC$vi funcs.py
from sys import argv
from os.path import exists

#print argv
#print len(argv)
#print range(len(argv))

def check_input():
 if len(argv) < 2:
        print "Need at least one input file\t"
        exit(1)
 else:
#       print "Your first file is\t",argv[1]
        return(argv[1])

##Opens file and prints the contents
def files(a):
  if a == argv[0]:
        print "Not reading my own file"
  else:
        read = open(a,'r+')
        print "File contents of file:\t%s\tare%s\n" % (a,read.read())

def check_file():
        for i in range(len(argv)):
#         print "%s" % argv[i]
          if exists(argv[i]):
               print "Input file verified\t",argv[i]
               files(argv[i])
          else:
               print "Input file doesn't exist:\t",argv[i]
#              exit(1)


check_input()
check_file()


shaiks@MAC$ls -lrt *.out
-rw-r--r--  1 shaiksameer  5000  162 Oct 19 16:35 ps.out
-rw-r--r--  1 shaiksameer  5000  162 Oct 19 16:42 ls.out


shaiks@MAC$py funcs.py a.out ps.out ls.out
Input file verified    funcs.py
Not reading my own file
Input file doesn't exist:    a.out
Input file verified    ps.out
File contents of file:    ps.out    are
Write these new lines into the ls file

line3 left blank



Input file verified    ls.out
File contents of file:    ls.out    are
I am ls.out and I have only one line
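The check-then-read loop above can be condensed into one Python 3 helper (the name `show_existing` is mine, not from the script); it prints each file that exists, reports the ones that do not, and returns the list of files actually read:

```python
import os


def show_existing(paths):
    """Print each existing file's contents; report and skip missing ones.

    Returns the list of paths that were actually read, so callers
    (and tests) can verify the behaviour without parsing stdout.
    """
    read_ok = []
    for p in paths:
        if os.path.exists(p):
            with open(p) as f:
                print("File contents of file:\t%s\tare\n%s" % (p, f.read()))
            read_ok.append(p)
        else:
            print("Input file doesn't exist:\t%s" % p)
    return read_ok
```

A caller would typically pass `sys.argv[1:]`, which also sidesteps the "Not reading my own file" special case, since `argv[0]` is never in the list.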


Calculate age in Python


Calculate age:
Takes input a) Future Date(optional) b)Birth Date
Validates for leap year and some basic input validations

 vi age.py
#########START OF SCRIPT#########
from sys import argv
from datetime import datetime,timedelta

oddmonths = [ 1, 3, 5, 7, 8, 10, 12 ]   # months with 31 days; ints, so "(clockmonth-1) in oddmonths" works

def getdata():
     global clockyear
     global clockmonth
     global clockday
     global iyear
     global imonth
     global iday
     global ageyear
     global agemonth
     global ageday
     ask = raw_input("\nDo you want to calc age at a certain date?...Press Y|y|yes:\t")
     if ask == "":
        print "Invalid Response...Exiting"
        exit(1)
     elif ask in ['Y', 'y', 'yes'] :
       get = raw_input("\nPlease enter the from date as yyyy/mm/dd:\t")
       if len(get) < 10:
         print "Invalid Date...Exiting"
         exit(1)
       else:
         clock = get.split("/")
         clockyear = int(clock[0])
         clockmonth = int(clock[1])
         clockday = int(clock[2])
     else:
         clock = datetime.now()
         clockyear = clock.year
         clockmonth = clock.month
         clockday = clock.day
    #Assign input variables
     getdetail = raw_input("\nEnter your date of birth as yyyy/mm/dd:\t")
     input = getdetail.split("/")
     iyear = int(input[0])
     imonth = int(input[1])
     iday = int(input[2])
     if iyear > clockyear:
        print "\nBirth year cannot be > the calculation year..Please enter a valid year\n"
        exit(1)
     #Assign age variables
     ageyear = clockyear - iyear
     agemonth = clockmonth - imonth
     ageday = clockday - iday

getdata()


#Assign age variables
#ageyear = clockyear - iyear
#agemonth = clockmonth - imonth
#ageday = clockday - iday

#print "Your ageyear\t:",ageyear
#print "Your age month\t:",agemonth
#print "Your ageday\t:",ageday

##### Function checks for Leap Year
def leapyear(a):
    if a % 400 == 0:
        return True
    elif a % 100 == 0:
        return False
    elif a % 4 == 0:
        return True
    else:
        return False

#Calc month & year
def month(a,b):
  if imonth < 1 or imonth > 12:
     print "You Entered invalid month...Exiting."
     exit(1)
  elif a < imonth:
      return(12+(a-imonth),b-1)
  elif a > imonth:
        return((a-imonth),b)
  else:
        return(0,b)


def prints(ageyear,agemonth,ageday):
        print "\nYour age is:\t%s Years:\t%s Months:\t%s Days\n" % (ageyear,agemonth,ageday)

def age():
        global imonth
        global iyear
        global iday
        global ageyear
        global agemonth
        global ageday
        global clockmonth
        global clockday
        if iday < 1 or iday > 31:
           print "You Entered invalid day...Exiting."
           exit(1)
        elif clockmonth !=03 and ageday >= 0 and agemonth >  0:
             prints(ageyear,agemonth,ageday)
        elif clockmonth !=03 and ageday >= 0 and agemonth < 0:
             agemonth,ageyear = month(clockmonth,ageyear)
             prints(ageyear,agemonth,ageday)
        elif clockmonth !=03 and ageday < 0 and (clockmonth-1) in oddmonths:
             ageday = 31 + ageday
             agemonth,ageyear = month((clockmonth-1),ageyear)
             prints(ageyear,agemonth,ageday)
        elif clockmonth !=03 and ageday < 0 and (clockmonth-1) not in oddmonths:
             ageday = 30 + ageday
             agemonth,ageyear = month((clockmonth-1),ageyear)
             prints(ageyear,agemonth,ageday)
####Condition for Leap Year check
        elif clockmonth == 03 and leapyear(clockyear):
           if ageday >=0:
             agemonth,ageyear = month((clockmonth),ageyear)
             prints(ageyear,agemonth,ageday)
           elif ageday <0:
             ageday = 29 + ageday
             agemonth,ageyear = month((clockmonth-1),ageyear)
             prints(ageyear,agemonth,ageday)
        elif clockmonth == 03 and not leapyear(clockyear):
           if ageday >=0:
                    agemonth,ageyear = month((clockmonth),ageyear)
                    prints(ageyear,agemonth,ageday)
           elif ageday <0:
                     ageday = 28 + ageday
                     agemonth,ageyear = month((clockmonth-1),ageyear)
                     prints(ageyear,agemonth,ageday)


def main():
        global imonth
        global iyear
        global iday
        if imonth == 02 and leapyear(iyear) and iday >29:
                print "You cannot have >29 days in Feb of a leapyear:\n"
                print "Enter valid day:\n"
        elif imonth == 02 and not leapyear(iyear) and iday >28:
                print "You cannot have >28 days in Feb,if the year is not a leapyear:\n"
                print "Enter valid day:\n"
        else:
                age()

main()

#########END OF SCRIPT#########


Output-1:
############
shaiks@MAC$py age.py

Do you want to calc age at a certain date?...Press Y|y|yes:    y

Please enter the from date as yyyy/mm/dd:    2017/08/31

Enter your date of birth as yyyy/mm/dd:    2000/02/29

Your age is:    17 Years:    6 Months:    2 Days


Output-2:
############
shaiks@MAC$py age.py

Do you want to calc age at a certain date?...Press Y|y|yes:    n

Enter your date of birth as yyyy/mm/dd:    2000/02/29

Your age is:    15 Years:    7 Months:    27 Days


Error Handling:
###############
shaiks@MAC$py age.py


Do you want to calc age at a certain date?...Press Y|y|yes:    y

Please enter the from date as yyyy/mm/dd:    2017/08/31

Enter your date of birth as yyyy/mm/dd:    2000/02/31
You cannot have >29 days in Feb of a leapyear:

Enter valid day:
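The borrow-from-the-previous-month logic above can be written much more compactly in Python 3 with `calendar.monthrange`, which returns the number of days in any month and so covers leap years and 31- vs 30-day months in one place. The function name `age_on` is illustrative; `datetime.date` itself rejects impossible inputs such as Feb 31, covering the script's input validation:

```python
import calendar
from datetime import date


def age_on(birth, on):
    """Age as (years, months, days) between two datetime.date values."""
    if birth > on:
        raise ValueError("birth date is after the calculation date")
    years = on.year - birth.year
    months = on.month - birth.month
    days = on.day - birth.day
    if days < 0:
        # Borrow the length of the month just before `on`
        prev_year, prev_month = (on.year, on.month - 1) if on.month > 1 else (on.year - 1, 12)
        days += calendar.monthrange(prev_year, prev_month)[1]
        months -= 1
    if months < 0:
        months += 12
        years -= 1
    return years, months, days
```

For example, `age_on(date(2000, 2, 29), date(2017, 8, 31))` gives `(17, 6, 2)`, matching Output-1 above.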

Reverse string,words and check for palindrome in Python



cat words.py
####START OF SCRIPT#########
words1 = "test"
ask = "test"

def splitinput():
        global ask
        ask = raw_input("\nEnter few words here:\t")
        global words1
        words1 = ask.split(' ')
        return words1

def reverse():
        global words1
        global ask
        j = ''
        for i in range(len(ask)-1,-1,-1):
                j = j + ask[i]
        print "\nPrinting the Reversed string:\t",j

def same():
        global ask
        j = ''
        for i in range(len(ask)):
                j = j + ask[i]
        print "\nPrinting in the same order\t%s" % j

words2 = ''
def reverseword():
        global words1
        global words2
        words2 = ''
        for i in range(len(words1)-1,-1,-1):
            words2 = words2 + words1[i] + ' '
        print "\nPrinting in the reverse order\t%s" % words2
        return words2

def palindrome():
        global words1
        global words2
        global ask
        for i in range(len(words1)):
                j = words1[i]
                l = ''
                for k in range(len(j)-1,-1,-1):
                        l = l + j[k]
                #print "print j[k]\t%s" % l
                if j == l:
                        print "\nWord %s is a palindrome" % l


while True:
        splitinput()
        same()
        reverseword()
        reverse()
        palindrome()
        ask1 = raw_input("\nDo you wanna play again? Type Y|y|YES|yes:\t")
        if ask1 in ['Y', 'y', 'yes', 'YES']:
                print "Okay..Let's do it again"
        else:
                print "\nThanks for playing....Exiting!\n"
                exit(0)
####END OF SCRIPT#########


Output:
=====

shaiks@MAC$py words.py

Enter few words here:    Sameer is good boy and GOOG

Printing in the same order    Sameer is good boy and GOOG

Printing in the reverse order    GOOG and boy good is Sameer

Printing the Reversed string:    GOOG dna yob doog si reemaS

Word GOOG is a palindrome

Do you wanna play again? Type Y|y|YES|yes:    n

Thanks for playing....Exiting!
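In Python 3 the three operations reduce to slicing and `join`; the helper names below are mine, not from the script:

```python
def reverse_string(s):
    """Whole string reversed character by character."""
    return s[::-1]


def reverse_words(s):
    """Word order reversed, each word left intact."""
    return " ".join(reversed(s.split()))


def palindromes(s):
    """Words that read the same forwards and backwards."""
    return [w for w in s.split() if w == w[::-1]]
```

`s[::-1]` is the idiomatic reversed-copy slice, so no explicit index loop is needed for either the character-level or the word-level reversal.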
 

HOW TO ENCRYPT RMAN BACKUPS USING PASSWORD

RMAN Backup Encryption Modes
RMAN offers three encryption modes: transparent mode, password mode, and dual mode.


Password Encryption of Backups
Password encryption requires that the DBA provide a password when creating and restoring encrypted backups. Restoring a password-encrypted backup requires the same password that was used to create the backup.

Password encryption is useful for backups that are restored at remote locations, but which must remain secure in transit. Password encryption cannot be persistently configured. You do not need to configure an Oracle wallet if password encryption is used exclusively.

Caution:
If you forget or lose the password that you used to encrypt a password-encrypted backup, then you cannot restore the backup.

To use password encryption, use the SET ENCRYPTION ON IDENTIFIED BY password ONLY command in your RMAN scripts.

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

new RMAN configuration parameters:
CONFIGURE ENCRYPTION FOR DATABASE ON;
new RMAN configuration parameters are successfully stored

RMAN> set encryption on identified by shaiksameer2 only;

executing command: SET encryption

RMAN> show all;

RMAN configuration parameters for database with db_unique_name SHAIKDB are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256';
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0.2/SHAIKPROD/dbs/snapcf_SHAIKDB.f'; # default


RMAN> run{
2> allocate channel ch01 type disk format '/home/oracle/backup/SHAIKDB_INCO_21JAN2016_%U.bkp';
3> backup incremental level=0 TAG='test password' database plus archivelog;
4> backup archivelog all;
5> }

released channel: ORA_DISK_1
allocated channel: ch01
channel ch01: SID=43 device type=DISK


Starting backup at 21-JAN-16
current log archived
channel ch01: starting archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=59 RECID=1 STAMP=901721811
input archived log thread=1 sequence=60 RECID=2 STAMP=901721937
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_09qrugg0_1_1.bkp tag=TEST PASSWORD comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
channel ch01: starting archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=1 RECID=5 STAMP=901726097
input archived log thread=1 sequence=2 RECID=6 STAMP=901726243
input archived log thread=1 sequence=3 RECID=7 STAMP=901726245
input archived log thread=1 sequence=4 RECID=8 STAMP=901726311
input archived log thread=1 sequence=5 RECID=9 STAMP=901726720
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0aqrugg1_1_1.bkp tag=TEST PASSWORD comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JAN-16

Starting backup at 21-JAN-16
channel ch01: starting incremental level 0 datafile backup set
channel ch01: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/shaikdb/lob01.dbf
input datafile file number=00001 name=/u01/app/oracle/shaikdb/SHAIKDB/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/shaikdb/SHAIKDB/sysaux01.dbf
input datafile file number=00003 name=/u01/app/oracle/shaikdb/SHAIKDB/undotbs01.dbf
input datafile file number=00005 name=/u01/app/oracle/shaikdb/SHAIKDB/example01.dbf
input datafile file number=00006 name=/u01/app/oracle/shaikdb/tbs1
input datafile file number=00007 name=/u01/app/oracle/shaikdb/tbs1.dbf
input datafile file number=00008 name=/u01/app/oracle/shaikdb/tbs3.dbf
input datafile file number=00009 name=/u01/app/oracle/shaikdb/idx1.dbf
input datafile file number=00004 name=/u01/app/oracle/shaikdb/SHAIKDB/users01.dbf
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0bqrugg3_1_1.bkp tag=TEST PASSWORD comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:25
channel ch01: starting incremental level 0 datafile backup set
channel ch01: specifying datafile(s) in backup set
including current control file in backup set
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp tag=TEST PASSWORD comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JAN-16

Starting backup at 21-JAN-16
current log archived
channel ch01: starting archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=10 STAMP=901726750
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0dqruggu_1_1.bkp tag=TEST PASSWORD comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JAN-16

Starting backup at 21-JAN-16
current log archived
channel ch01: starting archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=1 RECID=5 STAMP=901726097
input archived log thread=1 sequence=2 RECID=6 STAMP=901726243
input archived log thread=1 sequence=3 RECID=7 STAMP=901726245
input archived log thread=1 sequence=4 RECID=8 STAMP=901726311
input archived log thread=1 sequence=5 RECID=9 STAMP=901726720
input archived log thread=1 sequence=6 RECID=10 STAMP=901726750
input archived log thread=1 sequence=7 RECID=11 STAMP=901726751
channel ch01: starting piece 1 at 21-JAN-16
channel ch01: finished piece 1 at 21-JAN-16
piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0eqruggv_1_1.bkp tag=TAG20160121T153911 comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JAN-16
released channel: ch01

RMAN> exit



RMAN> restore controlfile from '/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp';

Starting restore at 21-JAN-16
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/21/2016 15:45:29
ORA-19870: error while restoring backup piece /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open

[oracle@collabn1 ~]$ rman

Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jan 21 16:17:59 2016

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: SHAIKDB (not mounted)

RMAN> set decryption identified by shaiksameer2;

executing command: SET decryption
using target database control file instead of recovery catalog

RMAN> restore controlfile from '/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp';

Starting restore at 21-JAN-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/shaikdb/SHAIKDB/control01.ctl
output file name=/u01/app/oracle/shaikdb/SHAIKDB/control02.ctl
Finished restore at 21-JAN-16

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> catalog start with '/home/oracle/backup';

searching for all files that match the pattern /home/oracle/backup

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0dqruggu_1_1.bkp
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0eqruggv_1_1.bkp

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0cqruggs_1_1.bkp
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0dqruggu_1_1.bkp
File Name: /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0eqruggv_1_1.bkp

RMAN> restore database;

Starting restore at 21-JAN-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/shaikdb/SHAIKDB/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/shaikdb/SHAIKDB/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/shaikdb/SHAIKDB/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/shaikdb/SHAIKDB/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/shaikdb/SHAIKDB/example01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/shaikdb/tbs1
channel ORA_DISK_1: restoring datafile 00007 to /u01/app/oracle/shaikdb/tbs1.dbf
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/shaikdb/tbs3.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/shaikdb/idx1.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/shaikdb/lob01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0bqrugg3_1_1.bkp
channel ORA_DISK_1: piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0bqrugg3_1_1.bkp tag=TEST PASSWORD
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 21-JAN-16

RMAN> recover database;

Starting recover at 21-JAN-16
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=6
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=7
channel ORA_DISK_1: reading from backup piece /home/oracle/backup/SHAIKDB_INCO_21JAN2016_0eqruggv_1_1.bkp
channel ORA_DISK_1: piece handle=/home/oracle/backup/SHAIKDB_INCO_21JAN2016_0eqruggv_1_1.bkp tag=TAG20160121T153911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/product/11.2.0.2/SHAIKPROD/dbs/arch1_6_901723553.dbf thread=1 sequence=6
archived log file name=/u01/app/oracle/product/11.2.0.2/SHAIKPROD/dbs/arch1_7_901723553.dbf thread=1 sequence=7
unable to find archived log
archived log thread=1 sequence=8
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 01/21/2016 16:20:00
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 8 and starting SCN of 1863900

RMAN> alter database open resetlogs;

database opened

RMAN> exit


Recovery Manager complete.
[oracle@collabn1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 21 16:20:34 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SHAIKDB>select open_mode,database_role from v$database;

OPEN_MODE         DATABASE_ROLE
-------------------- ----------------
READ WRITE         PRIMARY

How to change APPLSYSPUB password in EBS.


Change APPLSYSPUB password



Shut down the appTier:
 
Backup and change PWD in the below files:
$CONTEXT_FILE
$FND_SECURE  → .dbc file
$IAS_CONFIG_HOME/Apache/Jserv/etc/formservlet.ini
appTier .env files ($APPL_TOP)

sqlplus / as sysdba

SQL> create table applsys.fnd_oracle_userid_02102016 as select * from applsys.fnd_oracle_userid;

Table created.

SQL> create table applsys.fnd_user_02102016 as select * from applsys.fnd_user;

Table created.


Change password:
FNDCPASS apps/xxxxxxxxxxxx 0 Y system/xxxxxxxxxxxx ORACLE APPLSYSPUB xxxxxxxxxxxx


Start the appTier:

How to Open Ebiz 11i FORMS on Linux or Ubuntu or Mac



mkdir -p $HOME/.mozilla/plugins

shaiksameer@shaikslinux1:~/.mozilla/plugins$ ls -lrt
total 4
lrwxrwxrwx 1 shaiksameer XXXX 73 Nov 12 17:25 libnpjp2.so -> /usr/local/home/shaiksameer/stage/jre1.7.0_45/lib/i386/libnpjp2.so

if 64 bit then:
~/stage/jdk1.7.0_45/jre/lib/amd64/libnpjp2.so