Channel: Sameer Shaik. B.E,M.S,M.B.A,P.M.P,C.S.M

cursor: mutex X cursor: mutex S

Symptoms: bad performance, queries running long, not using the optimal plan, and so on; or a new upgrade to 11g.

In the AWR report you see:


It is evident that most of the DB time is spent on cursor: mutex X and cursor: mutex S.

What is a Mutex:



Mutexes are a lighter-weight and more granular concurrency mechanism than latches. Mutexes take advantage of CPU architectures that offer compare-and-swap (or similar) instructions. The reason for obtaining a mutex in the first place is to ensure that certain operations are properly managed for concurrency. For example, if one session is changing a data structure in memory, then another session must wait to acquire the mutex before it can make a similar change; this prevents unintended concurrent changes that would lead to corruptions or crashes if not serialized.



·  The library cache mutex is acquired for similar purposes that the library cache latches were acquired in prior versions of Oracle. In 10g, mutexes were introduced for certain operations in the library cache.  Starting with 11g, the library cache latches were replaced by mutexes, hence this new wait event. 



·  This wait event is present whenever a library cache mutex - X is held in exclusive mode by a session and other sessions need to wait for it to be released.  There are many different operations in the library cache that will require a mutex.
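To see which mutex types and code locations are sleeping the most, the V$MUTEX_SLEEP view can be queried; a sketch (columns per the 11g reference):

```sql
-- Which mutex type / code location accumulates the most sleeps
SELECT mutex_type, location, sleeps, wait_time
  FROM v$mutex_sleep
 ORDER BY sleeps DESC;
```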

In my case: 

I was seeing huge waits on cursor: mutex X (exclusive), so I dug further.


In the library cache analysis we can clearly see 53% misses, ahaa...



In the version count report I can clearly see there are some 4000 different plans or versions for one single query, which is very bad: it causes library cache contention, eventual shared pool exhaustion, and excessive CPU use.
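High version counts like this can be confirmed straight from the shared pool; a sketch (the 100 cutoff is arbitrary), with V$SQL_SHARED_CURSOR showing why children were not shared:

```sql
-- Cursors with excessive child versions
SELECT sql_id, version_count, executions
  FROM v$sqlarea
 WHERE version_count > 100
 ORDER BY version_count DESC;

-- For a given SQL_ID, the 'Y' columns in this view indicate
-- the reason each child cursor could not be shared
SELECT * FROM v$sql_shared_cursor WHERE sql_id = '&sql_id';
```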
 
In the above screenshot we can see that 45% of the DB time is spent on parsing, which is bad.

From the above it is evident that something is not right and it is time to work with Oracle Support.
Since the patch for Bug 10187168 was already applied, we were told to make some parameter changes.


alter system set "_cursor_features_enabled"=1026 scope=spfile SID='*';
alter system set event="106001 trace name context forever,level 50" scope=spfile SID='*';
shared_pool_reserved_size=268435456        # set to 10% of SHARED_POOL_SIZE, or 512M explicitly
"_memory_broker_stat_interval"=999

These parameter changes fixed the Mutex and hard parsing issues for us.
 
Known Bugs:

      Bug 10270888 - ORA-600[kgxEndExamine-Bad-State] / mutex waits after a self deadlock
      Bug 9591812 - Wrong wait events in 11.2 ("cursor: mutex S" instead of "cursor: mutex X")
      Bug 9499302 - Improve concurrent mutex request handling
      Bug 7441165 - Prevent preemption while holding a mutex (fix only works on Solaris)
      Bug 8575528 - Missing entries in V$MUTEX_SLEEP.location



How to change SCAN VIPs in 11gR2


The below process was tested and worked when the SCAN VIPs are configured via DNS.

Backup first:
 - the hosts file
 - the OCR: ocrconfig -manualbackup
 - the voting disks (using "dd")

To check the current SCAN IPs registered in DNS, the following command can be used:
$ nslookup dbhost-sv1

Name:   dbhost-sv1.mydomain.com
Address: 10.ns.ip.133
Name:   dbhost-sv1.mydomain.com
Address: 10.ns.ip.135
Name:   dbhost-sv1.mydomain.com
Address: 10.ns.ip.134


Stop the scan resources:

# $GRID_HOME/bin/srvctl stop scan_listener
# $GRID_HOME/bin/srvctl stop scan
# $GRID_HOME/bin/srvctl status scan
>srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node dbhostdb02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node dbhostdb01
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node dbhostdb01



Confirm whether they are really down:


Make the change in the DNS:
# nslookup dbhost-sv1  << should be new IPs
>/usr/sbin/nslookup dbhostdb01-sv1


Name:   dbhostdb01-sv1.mydomain.com
Address: 10.ip.ip.156
Name:   dbhostdb01-sv1.mydomain.com
Address: 10.ip.ip.158
Name:   dbhostdb01-sv1.mydomain.com
Address: 10.ip.ip.157

Now tell CRS to update the SCAN VIP resources:
# $GRID_HOME/bin/srvctl modify scan -n dbhost-sv1  ( This will refresh the SCAN with the new IPs )
# $GRID_HOME/bin/srvctl config scan  ( Verify the new IPs are reflected. )
>srvctl config scan
SCAN name: dbhostdb01-sv1, Network: 1/10.ip.ip.0/255.255.255.0/bge0
SCAN VIP name: scan1, IP: /dbhostdb01-sv1.mydomain.com/10.ip.ip.156
SCAN VIP name: scan2, IP: /dbhostdb01-sv1.mydomain.com/10.ip.ip.158
SCAN VIP name: scan3, IP: /dbhostdb01-sv1.mydomain.com/10.ip.ip.157



# $GRID_HOME/bin/srvctl status scan
>srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node dbhostdb02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node dbhostdb01
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node dbhostdb01


# $GRID_HOME/bin/srvctl start scan
# $GRID_HOME/bin/srvctl start scan_listener

Reference: Oracle Support Note 952903.1

Active data guard in 11g - How to activate "Active Data guard" feature in 11g.


Here I am converting an existing physical standby database to a read-only standby database with real-time redo apply.

This assumes you have an up-and-running physical standby; if not, please follow the note below:
Step by Step Guide on Creating Physical Standby Using RMAN DUPLICATE...FROM ACTIVE DATABASE [ID 1075908.1]

Standby Redo Logs - what are they and when to use them ??
-------------------------------------------------------------

Starting with Oracle 9i you have the opportunity to add Standby Redo Log Groups
alongside your Online Redo Log Groups. These Standby Redo Logs store the
information received from the Primary Database. In a failover situation, you
will have less data loss than without Standby Redo Logs.

Standby Redo Logs are supported for Physical Standby Databases in Oracle 9i,
and for Logical Standby Databases as well starting with 10g. Standby Redo Logs
are only used if you have LGWR activated for archival to the remote standby
database.

The great advantage of Standby Redo Logs is that every entry written into
the Online Redo Logs of the Primary Database is transferred to the standby
site and written into the Standby Redo Logs at the same time; therefore, you
reduce the probability of data loss on the Standby Database.

Starting with 10g it is possible to use Real-Time Apply with Physical and
Logical Standby Databases. With Real-Time Apply, the redo is applied to the
Standby Database from the Standby Redo Log instead of waiting until an archive
log is created. Standby Redo Logs are therefore required for Real-Time Apply.
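The standby redo logs already present, and whether they are in use, can be checked from V$STANDBY_LOG; a sketch:

```sql
-- Existing standby redo logs, their size and state
SELECT group#, thread#, sequence#, bytes/1024/1024 AS size_mb, status
  FROM v$standby_log;
```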


On Primary:

SQL> show parameter unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
db_unique_name                       string      PRODX

It is important that the standby logfile size is at least the size of the primary redo logfiles.

SQL> SELECT distinct(to_char((bytes*0.000001),'9990.999')) size_mb
FROM v$log; 

SIZE_MB
---------
  262.144



SQL> select group#,thread#,members from v$log;

    GROUP#    THREAD#    MEMBERS
---------- ---------- ----------
         1          1          2
         3          1          2
         2          1          2


On Standby:


SQL> show parameter unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
db_unique_name                       string      STANDX

Cancel the recovery:

SQL> alter database recover managed standby database cancel;

Database altered.

Add the standby redo logfile groups: ( 4-6 here )

SQL> alter database add standby logfile group 4 ('/u01/u0090/oradata/STANDX/stndby_redo_1a.log') size 265m;

Database altered.

SQL> alter database add standby logfile group 5 ('/u01/u0091/oradata/STANDX/stndby_redo_2a.log') size 265m;

Database altered.

SQL> alter database add standby logfile group 6 ('/u01/u0092/oradata/STANDX/stndby_redo_3a.log') size 265m;

Database altered.

Now add a second member to the above groups if you would like (or add the members while creating the groups):


SQL> alter database add standby logfile member '/u01/u0092/oradata/STANDX/stndby_redo_1b.log' reuse to group 4;

Database altered.

SQL>  alter database add standby logfile member '/u01/u0091/oradata/STANDX/stndby_redo_3b.log' reuse to group 6;

Database altered.

SQL> alter database add standby logfile member '/u01/u0090/oradata/STANDX/stndby_redo_2b.log' reuse to group 5;

Database altered.

Now open the standby database in Read only mode:

SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2990735360 bytes
Fixed Size                  2106176 bytes
Variable Size             872422592 bytes
Database Buffers         2097152000 bytes
Redo Buffers               19054592 bytes
Database mounted.
Database opened.
SQL>


Now start the media recovery on the standby database:

SQL>  recover managed standby database using current logfile disconnect;
Media recovery complete.
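With the database open read-only and the MRP applying from the standby redo logs, Active Data Guard is in effect. Two quick sanity checks (a sketch; the 'apply lag'/'transport lag' rows are per the 11g V$DATAGUARD_STATS reference):

```sql
-- 11g reports READ ONLY WITH APPLY once real-time apply is active
SELECT open_mode, database_role FROM v$database;

-- How far behind the standby is
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('apply lag', 'transport lag');
```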


Example to make sure we can use the standby database:
Here the standby is waiting for log sequence 5865:

SQL> select process,status,thread#,sequence# from v$managed_standby;

PROCESS                     STATUS                                  THREAD#  SEQUENCE#
--------------------------- ------------------------------------ ---------- ----------
ARCH                        CONNECTED                                     0          0
ARCH                        CONNECTED                                     0          0
ARCH                        CONNECTED                                     0          0
ARCH                        CONNECTED                                     0          0
MRP0                        WAIT_FOR_GAP                                  1       5865
RFS                         IDLE                                          0          0

6 rows selected.
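When MRP0 shows WAIT_FOR_GAP, the missing sequence range can be identified on the standby; a sketch:

```sql
-- Archive log gap the standby is waiting for
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;
```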




On the primary, the current log sequence is 5865:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /crm01/u0085/oradata/HRCRMPRD
Oldest online log sequence     5863
Next log sequence to archive   5865
Current log sequence           5865

Create a sample user on primary, create some test tables and then switch the logfiles to reflect the changes at the standby.

SQL> create user myuser identified by standbytest ;

User created.

SQL> grant dba to myuser;

Grant succeeded.

SQL> GRANT CONNECT TO myuser;

Grant succeeded.

SQL> exit


Connect as the new user on Primary:

>sqlplus myuser/standbytest

SQL*Plus: Release 11.1.0.7.0 - Production on Sun Sep 16 02:19:40 2012


SQL> create table readonly_standby as select  * from dba_tables;

Table created.

SQL> select count(*) from readonly_standby;

  COUNT(*)
----------
      8511

SQL> create table index_dba as select * from dba_indexes;

Table created.

SQL> select count(*) from index_dba;

  COUNT(*)
----------
     12423

SQL> commit;

Commit complete.

Now switch the log sequence:

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

On Standby the 5865 sequence is being applied now:

SQL>
SQL> select process,status,thread#,sequence# from v$managed_standby;

PROCESS                     STATUS                                  THREAD#  SEQUENCE#
--------------------------- ------------------------------------ ---------- ----------
ARCH                        CLOSING                                       1       5868
ARCH                        CONNECTED                                     0          0
ARCH                        CLOSING                                       1       5866
ARCH                        CLOSING                                       1       5867
MRP0                        APPLYING_LOG                                  1       5865
RFS                         IDLE                                          0          0
RFS                         IDLE                                          1       5865
RFS                         IDLE                                          0          0

8 rows selected.




PROCESS                     STATUS                                  THREAD#  SEQUENCE#
--------------------------- ------------------------------------ ---------- ----------
ARCH                        CLOSING                                       1       5868
ARCH                        CONNECTED                                     0          0
ARCH                        CLOSING                                       1       5865
ARCH                        CLOSING                                       1       5867
MRP0                        WAIT_FOR_GAP                                  1       5869
RFS                         IDLE                                          0          0
RFS                         IDLE                                          0          0
RFS                         IDLE                                          0          0

8 rows selected.

On the standby, verify whether the user and tables got created:


SQL> select username,account_status,created from dba_users where username='MYUSER';

USERNAME                       ACCOUNT_STATUS                   CREATED
------------------------------ -------------------------------- ---------
MYUSER                         OPEN                             16-SEP-12

SQL> select owner,table_name from dba_tables where owner='MYUSER';

OWNER                          TABLE_NAME
------------------------------ ------------------------------
MYUSER                         READONLY_STANDBY
MYUSER                         INDEX_DBA


ORA-38305: object not in RECYCLE BIN

Perils of having recyclebin turned off in your database.

For me it was a requirement to have the recyclebin turned off, since my application creates temporary staging tables and drops them daily, and I cannot afford to keep them in my database.
If you have the recyclebin turned off at the database level, then a dropped table no longer resides in the database; i.e., you cannot flash back the table, as the table segments are dropped (deallocated and released to the database for reuse).
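A middle-ground alternative (a sketch, with a hypothetical staging table name): keep the recyclebin ON database-wide so accidental drops stay recoverable, and bypass it only for the staging tables:

```sql
-- Either bypass the recyclebin just for this drop:
DROP TABLE staging_tmp PURGE;

-- Or turn it off only in the session doing the daily cleanup:
ALTER SESSION SET recyclebin = OFF;
DROP TABLE staging_tmp;
ALTER SESSION SET recyclebin = ON;
```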

SQL>show parameter recyclebin

recyclebin                           string      OFF



SQL> create table test123 tablespace users  as select * from dba_tables ;

Table created.

SQL> commit;

Commit complete.



SQL> select count(*) from test123 ;

  COUNT(*)
----------
      3068

SQL> drop table test123;

Table dropped.

SQL> select OWNER,OBJECT_NAME from dba_recyclebin;

no rows selected


SQL> flashback table test123 to  before drop;
flashback table test123 to  before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN

ORA-08104: this index object is being online built or rebuilt



alter index sameer.myindex rebuild online
*
ERROR at line 1:
ORA-08104: this index object 59081 is being online built or rebuilt


As long as the index is not cleaned up or rebuilt, all access to the index will result in ORA-8104 or ORA-8106.

If you are not performing the DBMS_REPAIR.ONLINE_INDEX_CLEAN operation, SMON will eventually clean up the locked index, so no action is strictly needed. However, letting SMON do the cleanup can be a bit hit-and-miss: SMON tries the cleanup every 60 minutes, and if it cannot get a lock on the object with NOWAIT it just tries again later. In a highly active database with many transactions this can make the cleanup take a long time, as SMON won't get the lock with NOWAIT. Other cases, like uncommitted transactions against the table, will also prevent SMON from cleaning up the index.

Thus, you have two options:

1. Let SMON automatically perform the cleanup,
or

SQL> select obj#,flags from ind$ where obj#=59081;

      OBJ#      FLAGS
---------- ----------
     59081       3742


2. Run DBMS_REPAIR.ONLINE_INDEX_CLEAN. This operation is faster, but will put load on the system.

declare
  isclean boolean := false;
begin
  -- Keep retrying until the cleanup succeeds; LOCK_WAIT makes each
  -- attempt wait for the object lock instead of failing with NOWAIT.
  while not isclean loop
    isclean := dbms_repair.online_index_clean(
                 dbms_repair.all_index_id, dbms_repair.lock_wait);
    dbms_lock.sleep(10);  -- pause 10 seconds between attempts
  end loop;
end;
/


SQL> select obj#,flags from ind$ where obj#=59081;

      OBJ#      FLAGS
---------- ----------
     59081       3251


If the current load on the system is not affected by this broken index, I would suggest waiting for a quiet period on the database and then running DBMS_REPAIR.ONLINE_INDEX_CLEAN(dbms_repair.all_index_id, dbms_repair.lock_wait).

ORA-01502: index or partition of such index is in unusable state

Exploring the ORA-01502 error and why we usually get this message. I am not explaining why/how the index status changed to unusable (mostly due to a table move or an explicit ALTER INDEX xxxx UNUSABLE).

You will get

ORA-01502: index  or partition of such index is in unusable state

If you have the parameter skip_unusable_indexes=false, then it makes sense that Oracle reported this error during DML activity.

If you don't care about the unusable indexes during DML, and you want the optimizer to choose different (more expensive) execution plans during SELECT, then you can set the parameter skip_unusable_indexes=true at the instance level or at the session level and move on.

But what if you get this error even when you have the parameter skip_unusable_indexes set to true at the instance level?

i.e
SQL> show parameter skip

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

SKIP_UNUSABLE_INDEXES enables or disables the use and reporting of tables with unusable indexes or index partitions. If a SQL statement uses a hint that forces the usage of an unusable index, then this hint takes precedence over initialization parameter settings, including SKIP_UNUSABLE_INDEXES. If the optimizer chooses an unusable index, then an ORA-01502 error will result. (See Oracle Database Administrator's Guide for more information about using hints.)

Values:   true

    Disables error reporting of indexes and index partitions marked UNUSABLE. This setting allows all operations (inserts, deletes, updates, and selects) on tables with unusable indexes or index partitions.
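Everything currently marked UNUSABLE can be listed from the dictionary; a sketch covering both whole indexes and index partitions:

```sql
-- Find all unusable indexes and index partitions
SELECT owner, index_name, NULL AS partition_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE'
UNION ALL
SELECT index_owner, index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';
```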




And you still got the error during DML activity on this table. Why?

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into test_table(salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'TEST_INDEX_UNIQUE' or partition of such index is in unusable state

Example:
SQL> create table TEST_TABLE as select * from my_table;

Table created.

SQL> CREATE UNIQUE INDEX TEST_INDEX_UNIQUE ON TEST_TABLE (SALESREP_DIM_PK);

Index created.

SQL> commit;

Commit complete.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> select * from TEST_TABLE where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 684699485

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |  1470 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TABLE        |     1 |  1470 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | TEST_INDEX_UNIQUE |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SALESREP_DIM_PK"=95056)

Above explain plan shows the optimizer is using the index.

Now run some DML against the table:

SQL> insert into test_table (salesrep_dim_pk) values (55555);

1 row created.

SQL> delete from test_table where salesrep_dim_pk=55555;

1 row deleted.

SQL> commit;

Commit complete.


****** Now mark the index UNUSABLE *****
SQL> alter index TEST_INDEX_UNIQUE unusable;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608
***** This is bad: even though the index is unusable, the segments for the index were not dropped *****
***** The default behaviour is that Oracle drops the segments of an unusable index *****



SQL>  select * from test_table where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 74755328

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |   151 |   216K|    10   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |   151 |   216K|    10   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("SALESREP_DIM_PK"=95056)
       filter("SALESREP_DIM_PK"=95056)

****As expected the access path changed from INDEX to FULL TABLE SCAN as the index is unusable******

***** try doing some DML ********

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into TEST_TABLE (salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'TEST_INDEX_UNIQUE' or partition of such index is in unusable state

*** Since the index is a UNIQUE index, Oracle will not allow us to do any DML activity on the underlying table
while the index is unusable, and will not drop the index segments either ***

  Note:
    If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint. Therefore, this setting does not disable error reporting for unusable indexes that are unique.



SQL> alter index TEST_INDEX_UNIQUE rebuild;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);

1 row created.

SQL> commit;

Commit complete.


Whereas a non-unique unusable index doesn't throw this error:

Create table TEST_TABLE as select * from my_table;

****Created non-unique index here ********
create index TEXT_IDX on TEST_TABLE(customer_num,customer_name);

commit;

SQL> select customer_num from TEST_TABLE where customer_num='8.1502701000322E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3224766131

------------------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    12 |    96 |     4   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN | TEXT_IDX |    12 |    96 |     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CUSTOMER_NUM"=8150270100032200)

SQL> select wo_num from TEST_TABLE where  wo_num='1.00037856314112E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3979868219

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |     7 |    49 |   317K  (2)| 00:16:09 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |     7 |    49 |   317K  (2)| 00:16:09 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("WO_NUM"=1000378563141120)
       filter("WO_NUM"=1000378563141120)

SQL> ALTER INDEX TEXT_IDX unusable;

Index altered.



SQL> select index_name,status from dba_indexes where index_name='TEXT_IDX';

INDEX_NAME                     STATUS
------------------------------ --------
TEXT_IDX                       UNUSABLE


SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------



SQL> exec dbms_stats.gather_table_stats('MYSCHEMA','TEST_TABLE',estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR ALL COLUMNS SIZE 1',cascade=>TRUE);

PL/SQL procedure successfully completed.


SQL> update TEST_TABLE set customer_num='1234567890' where customer_num='8.1502701000322E15';

6 rows updated.

SQL> alter index TExt_idx rebuild;


Index altered.

SQL> commit;

Commit complete.

SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------
               2.00976563


ALTER TABLE MOVE

You can use DBMS_REDEFINITION for the reorg:

Redefining Tables Online

In any database system, it is occasionally necessary to modify the logical or physical structure of a table to:
  • Improve the performance of queries or DML
  • Accommodate application changes
  • Manage storage
Oracle Database provides a mechanism to make table structure modifications without significantly affecting the availability of the table. The mechanism is called online table redefinition. Redefining tables online provides a substantial increase in availability compared to traditional methods of redefining tables.
When a table is redefined online, it is accessible to both queries and DML during much of the redefinition process. The table is locked in the exclusive mode only during a very small window that is independent of the size of the table and complexity of the redefinition, and that is completely transparent to users.
Online table redefinition requires an amount of free space that is approximately equivalent to the space used by the table being redefined. More space may be required if new columns are added.
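A minimal online-redefinition sketch, assuming a hypothetical SCOTT.EMP being moved via an interim table EMP_INTERIM that you have already created in the target tablespace:

```sql
-- 1. Check the table is a candidate (uses the primary key by default)
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');

-- 2. Start the redefinition into the interim table
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');

-- 3. Copy indexes, triggers, constraints and grants to the interim table
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      'SCOTT', 'EMP', 'EMP_INTERIM',
      DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      TRUE, TRUE, TRUE, FALSE, num_errors);
END;
/

-- 4. Swap the tables; only this step takes a brief exclusive lock
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
```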

But for people looking for a quick fix or a shortcut, here it is.

Re-org (move objects around) the quick-and-dirty way:

To move TABLES:

SELECT 'ALTER TABLE '||OWNER||'.'||TABLE_NAME ||' MOVE TABLESPACE APPS_TS_TX_DATA;'
 FROM DBA_TABLES WHERE TABLESPACE_NAME='APPS_TS_TX_DATA_OLD'
 AND OWNER='JTF';

To move indexes:
SELECT 'ALTER INDEX '||OWNER||'.'||INDEX_NAME ||' REBUILD TABLESPACE APPS_TS_TX_IDX;'
 FROM DBA_INDEXES WHERE TABLESPACE_NAME='APPS_TS_TX_DATA_OLD';


To move LOB SEGMENT:
SELECT 'ALTER TABLE '||OWNER ||'.'|| TABLE_NAME || ' MOVE LOB('|| COLUMN_NAME ||') STORE AS (TABLESPACE APPS_TS_TX_DATA);'
 FROM DBA_LOBS WHERE TABLESPACE_NAME='APPS_TS_TX_DATA_OLD';


To move TABLE SUBPARTITION:
SELECT 'ALTER TABLE '||TABLE_OWNER||'.'||TABLE_NAME||' MOVE SUBPARTITION '||SUBPARTITION_NAME ||' TABLESPACE APPS_TS_TX_DATA;'
 FROM DBA_TAB_SUBPARTITIONS WHERE TABLESPACE_NAME='APPS_TS_TX_DATA_OLD';



To move LOB SUB PARTITION:

SCRIPT:-
SELECT 'ALTER TABLE '||TABLE_OWNER ||'."'|| TABLE_NAME || '" MOVE SUBPARTITION '|| SUBPARTITION_NAME ||' TABLESPACE APPS_TS_TX_DATA LOB ('||
COLUMN_NAME||') STORE AS (TABLESPACE APPS_TS_TX_DATA);'
 FROM DBA_LOB_SUBPARTITIONS WHERE TABLESPACE_NAME='APPS_TS_TX_DATA_OLD';

EX:-
   ALTER TABLE FPA."AW$FPAPJP" MOVE SUBPARTITION SYS_IL_SUBP127 TABLESPACE APPS_TS_TX_DATA  LOB(AWLOB) STORE AS (TABLESPACE APPS_TS_TX_DATA);
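Keep in mind that a plain ALTER TABLE ... MOVE marks the table's indexes UNUSABLE, so after the moves it helps to generate rebuilds in the same style; a sketch:

```sql
-- Indexes left UNUSABLE by the moves, as ready-to-run rebuild statements
SELECT 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD;'
  FROM DBA_INDEXES
 WHERE STATUS = 'UNUSABLE';
```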


If there are many objects to move:

Datapump:

expdp \"/ as sysdba\" directory=tablespace_exp dumpfile=mydb_tablespace.dmp logfile=mydb_exp.log tablespaces='APPS_TS_TX_DATA_OLD'

Starting "SYS"."SYS_EXPORT_TABLESPACE_01":  "/******** AS SYSDBA" directory=tablespace_exp dumpfile=mydb_tablespace.dmp logfile=mydb_exp.log tablespaces=APPS_TS_TX_DATA_OLD
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 179.5 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS



Import it with REMAP_TABLESPACE or as desired...

RMAN switch database to copy

RMAN image copy: switching the database to an image copy backup.


>r sql
sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Aug 23 15:09:10 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1828716544 bytes
Fixed Size                  2041368 bytes
Variable Size            1258297832 bytes
Database Buffers          553648128 bytes
Redo Buffers               14729216 bytes
Database mounted.


labrman01(MYDB)  /ora_backup/backups/MYDB
>rman

Recovery Manager: Release 10.2.0.4.0 - Production on Mon Aug 23 15:10:13 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

RMAN> connect target /

connected to target database: MYDB (DBID=2563884143, not open)

RMAN> connect catalog rcat10g/rcat10g@rcat;

connected to recovery catalog database

RMAN> switch database to copy;

datafile 1 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-SYSTEM_FNO-1_unlm2jhp"
datafile 2 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-UNDOTBS1_FNO-2_ullm2jgm"
datafile 3 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-SYSAUX_FNO-3_uolm2jhr"
datafile 4 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-USERS_FNO-4_uqlm2jib"
datafile 5 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-TOOLS_FNO-5_urlm2jij"
datafile 6 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_DATA_FNO-6_uklm2jgm"
datafile 8 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_BI_FNO-8_umlm2jgm"
datafile 9 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_APPS_FNO-9_uplm2jib"
datafile 10 switched to datafile copy "/ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-HYP_HAL_DATA_FNO-10_uslm2jiq"
starting full resync of recovery catalog
full resync complete


RMAN>  run{
2> set until time "to_date('08/23/2010 15:00:00','mm/dd/yyyy hh24:mi:ss')";
3> recover database;
4> }

executing command: SET until clause

Starting recover at 23-AUG-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1092 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00002: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-UNDOTBS1_FNO-2_ullm2jgm
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_v7lm2n02_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_v7lm2n02_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-SYSTEM_FNO-1_unlm2jhp
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_v9lm2n0r_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_v9lm2n0r_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00008: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_BI_FNO-8_umlm2jgm
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_v8lm2n02_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_v8lm2n02_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00003: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-SYSAUX_FNO-3_uolm2jhr
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_valm2n1b_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_valm2n1b_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00009: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_APPS_FNO-9_uplm2jib
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_vblm2n1c_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_vblm2n1c_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-TOOLS_FNO-5_urlm2jij
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_vdlm2n1s_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_vdlm2n1s_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00006: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-MYDB_DATA_FNO-6_uklm2jgm
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_v6lm2n01_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_v6lm2n01_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-USERS_FNO-4_uqlm2jib
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_vclm2n1r_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_vclm2n1r_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /ora_backup/backups/MYDB/MYDB_image_data_D-MYDB_I-2563884143_TS-HYP_HAL_DATA_FNO-10_uslm2jiq
channel ORA_DISK_1: reading from backup piece /ora_backup/backups/MYDB/MYDB_image_velm2n23_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/ora_backup/backups/MYDB/MYDB_image_velm2n23_1_1 tag=TAG20100823T145351
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

starting media recovery

archive log thread 1 sequence 6 is already on disk as file /adm02/u8001/MYDB/arch/arch_HYPQ_0001_0000000006_0727545786.arc
archive log thread 1 sequence 7 is already on disk as file /adm02/u8001/MYDB/arch/arch_HYPQ_0001_0000000007_0727545786.arc
archive log thread 1 sequence 8 is already on disk as file /adm02/u8001/MYDB/arch/arch_HYPQ_0001_0000000008_0727545786.arc
archive log filename=/adm02/u8001/MYDB/arch/arch_HYPQ_0001_0000000006_0727545786.arc thread=1 sequence=6
media recovery complete, elapsed time: 00:00:03
Finished recover at 23-AUG-10


RMAN> run{
2> sql 'alter database open resetlogs';
3> }

sql statement: alter database open resetlogs
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> list incarnation;


List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
761     774     MYDB     2563884143       PARENT  1          29-OCT-07
761     762     MYDB     2563884143       PARENT  10513166137703 21-JUL-10
761     4636    MYDB     2563884143       PARENT  10513166276407 20-AUG-10
761     5187    MYDB     2563884143       PARENT  10513166280927 20-AUG-10
761     6336    MYDB     2563884143       PARENT  10513166285247 20-AUG-10
761     6786    MYDB     2563884143       CURRENT 10513166407407 23-AUG-10
761     1681    MYDB     2563884143       ORPHAN  10513167403561 20-AUG-10

RAC GV$ dynamic performance views


GV$ views that come up in day-to-day troubleshooting:


GV$ACCESS
GV$ACTIVE_INSTANCES
GV$ACTIVE_SERVICES
GV$ACTIVE_SESSION_HISTORY
GV$ALERT_TYPES
GV$AQ
GV$ARCHIVE
GV$ARCHIVED_LOG
GV$ARCHIVE_DEST
GV$ARCHIVE_DEST_STATUS
GV$ARCHIVE_GAP
GV$ARCHIVE_PROCESSES
GV$ASH_INFO
GV$ASM_ALIAS
GV$ASM_CLIENT
GV$ASM_DISK
GV$ASM_DISKGROUP
GV$ASM_DISKGROUP_STAT
GV$ASM_DISK_IOSTAT
GV$ASM_DISK_STAT
GV$ASM_FILE
GV$ASM_OPERATION
GV$BACKUP
GV$BGPROCESS
GV$BH
GV$BSP
GV$BUFFERED_PUBLISHERS
GV$BUFFERED_QUEUES
GV$BUFFERED_SUBSCRIBERS
GV$BUFFER_POOL
GV$BUFFER_POOL_STATISTICS
GV$CALLTAG
GV$CELL
GV$CELL_CONFIG
GV$CELL_REQUEST_TOTALS
GV$CELL_STATE
GV$CLUSTER_INTERCONNECTS
GV$CONFIGURED_INTERCONNECTS
GV$CONTEXT
GV$CONTROLFILE
GV$CONTROLFILE_RECORD_SECTION
GV$CR_BLOCK_SERVER
GV$CURRENT_BLOCK_SERVER
GV$DATABASE
GV$DATABASE_BLOCK_CORRUPTION
GV$DATABASE_INCARNATION
GV$DATAFILE
GV$DATAFILE_COPY
GV$DATAFILE_HEADER
GV$DATAGUARD_STATUS
GV$DATAPUMP_JOB
GV$DATAPUMP_SESSION
GV$DBLINK
GV$DB_CACHE_ADVICE
GV$DB_OBJECT_CACHE
GV$DB_PIPES
GV$DELETED_OBJECT
GV$DETACHED_SESSION
GV$DIAG_INFO
GV$DISPATCHER
GV$DISPATCHER_RATE
GV$DYNAMIC_REMASTER_STATS
GV$ENCRYPTED_TABLESPACES
GV$ENQUEUE_LOCK
GV$ENQUEUE_STAT
GV$ENQUEUE_STATISTICS
GV$EVENTMETRIC
GV$EVENT_NAME
GV$EXECUTION
GV$FILEMETRIC
GV$FILEMETRIC_HISTORY
GV$FILESPACE_USAGE
GV$FILESTAT
GV$FILE_CACHE_TRANSFER
GV$FILE_PING
GV$FIXED_TABLE
GV$FIXED_VIEW_DEFINITION
GV$FLASHBACK_DATABASE_LOG
GV$FLASHBACK_DATABASE_LOGFILE
GV$FLASHBACK_DATABASE_STAT
GV$GC_ELEMENT
GV$GC_ELEMENTS_WITH_COLLISIONS
GV$GES_BLOCKING_ENQUEUE
GV$GES_ENQUEUE
GV$GLOBALCONTEXT
GV$GLOBAL_BLOCKED_LOCKS
GV$GLOBAL_TRANSACTION
GV$HM_CHECK
GV$HM_FINDING
GV$HM_INFO
GV$HM_RECOMMENDATION
GV$INSTANCE
GV$INSTANCE_CACHE_TRANSFER
GV$INSTANCE_LOG_GROUP
GV$INSTANCE_RECOVERY
GV$IOFUNCMETRIC
GV$IOFUNCMETRIC_HISTORY
GV$IOSTAT_FILE
GV$IOSTAT_FUNCTION
GV$IOSTAT_FUNCTION_DETAIL
GV$IOSTAT_NETWORK
GV$IO_CALIBRATION_STATUS
GV$LATCH
GV$LATCHHOLDER
GV$LATCHNAME
GV$LATCH_CHILDREN
GV$LATCH_MISSES
GV$LATCH_PARENT
GV$LIBCACHE_LOCKS
GV$LIBRARYCACHE
GV$LIBRARY_CACHE_MEMORY
GV$LOCK
GV$LOCKED_OBJECT
GV$LOCKS_WITH_COLLISIONS
GV$LOCK_ACTIVITY
GV$LOCK_ELEMENT
GV$LOCK_TYPE
GV$LOG
GV$LOGFILE
GV$LOGHIST
GV$MANAGED_STANDBY
GV$MEMORY_CURRENT_RESIZE_OPS
GV$MEMORY_DYNAMIC_COMPONENTS
GV$MEMORY_RESIZE_OPS
GV$MEMORY_TARGET_ADVICE
GV$METRIC
GV$MUTEX_SLEEP
GV$MUTEX_SLEEP_HISTORY
GV$MVREFRESH
GV$MYSTAT
GV$OBSOLETE_PARAMETER
GV$OPEN_CURSOR
GV$OSSTAT
GV$PARALLEL_DEGREE_LIMIT_MTH
GV$PARAMETER
GV$PARAMETER2
GV$PARAMETER_VALID_VALUES
GV$PGASTAT
GV$PGA_TARGET_ADVICE_HISTOGRAM
GV$PGA_TARGET_ADVICE
GV$POLICY_HISTORY
GV$PQ_SESSTAT
GV$PQ_SLAVE
GV$PQ_SYSSTAT
GV$PQ_TQSTAT
GV$PROCESS
GV$PROCESS_MEMORY
GV$PROCESS_MEMORY_DETAIL
GV$PROCESS_MEMORY_DETAIL_PROG
GV$PWFILE_USERS
GV$PX_BUFFER_ADVICE
GV$PX_INSTANCE_GROUP
GV$PX_PROCESS
GV$PX_PROCESS_SYSSTAT
GV$PX_SESSION
GV$PX_SESSTAT
GV$QUEUE
GV$QUEUEING_MTH
GV$RECOVERY_FILE_STATUS
GV$RECOVERY_LOG
GV$RECOVERY_PROGRESS
GV$RECOVERY_STATUS
GV$RECOVER_FILE
GV$RESOURCE
GV$RESOURCE_LIMIT
GV$RESTORE_POINT
GV$RESULT_CACHE_DEPENDENCY
GV$RESULT_CACHE_MEMORY
GV$RESULT_CACHE_OBJECTS
GV$RESULT_CACHE_STATISTICS
GV$ROWCACHE
GV$SCHEDULER_RUNNING_JOBS
GV$SEGMENT_STATISTICS
GV$SEGSTAT
GV$SEGSTAT_NAME
GV$SESSION
GV$SESSION_BLOCKERS
GV$SESSION_CURSOR_CACHE
GV$SESSION_EVENT
GV$SESSION_LONGOPS
GV$SESSION_WAIT
GV$SESSION_WAIT_CLASS
GV$SESSION_WAIT_HISTORY
GV$SESSTAT
GV$SESS_IO
GV$SGA
GV$SGAINFO
GV$SGASTAT
GV$SGA_DYNAMIC_COMPONENTS
GV$SGA_DYNAMIC_FREE_MEMORY
GV$SGA_TARGET_ADVICE
GV$SHARED_POOL_ADVICE
GV$SHARED_POOL_RESERVED
GV$SHARED_SERVER
GV$SORT_SEGMENT
GV$TEMPSEG_USAGE
GV$SORT_USAGE
GV$SPPARAMETER
GV$SQL
GV$SQLAREA
GV$SQLAREA_PLAN_HASH
GV$SQLCOMMAND
GV$SQLSTATS
GV$SQLSTATS_PLAN_HASH
GV$SQLTEXT
GV$SQL_BIND_CAPTURE
GV$SQL_BIND_DATA
GV$SQL_BIND_METADATA
GV$SQL_CURSOR
GV$SQL_FEATURE
GV$SQL_FEATURE_HIERARCHY
GV$SQL_HINT
GV$SQL_JOIN_FILTER
GV$SQL_MONITOR
GV$SQL_OPTIMIZER_ENV
GV$SQL_PLAN
GV$SQL_PLAN_MONITOR
GV$SQL_PLAN_STATISTICS
GV$SQL_PLAN_STATISTICS_ALL
GV$SQL_SHARED_CURSOR
GV$SQL_SHARED_MEMORY
GV$SQL_WORKAREA
GV$SQL_WORKAREA_ACTIVE
GV$SQL_WORKAREA_HISTOGRAM
GV$STATISTICS_LEVEL
GV$SYSAUX_OCCUPANTS
GV$SYSMETRIC
GV$SYSMETRIC_HISTORY
GV$SYSMETRIC_SUMMARY
GV$SYSSTAT
GV$SYSTEM_CURSOR_CACHE
GV$SYSTEM_EVENT
GV$SYSTEM_PARAMETER
GV$SYSTEM_PARAMETER2
GV$TABLESPACE
GV$TEMPFILE
GV$TEMPORARY_LOBS
GV$TEMPSTAT
GV$TRANSACTION
GV$TRANSACTION_ENQUEUE
GV$UNDOSTAT
GV$VERSION
GV$WAITSTAT
GV$WORKLOAD_REPLAY_THREAD
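Every one of these carries an INST_ID column identifying which instance each row came from, which is what makes them useful in RAC. A quick illustrative sketch (the query shape is standard; the ACTIVE filter is just an example):

```sql
-- Count active sessions per RAC instance (illustrative)
SELECT inst_id, COUNT(*) AS active_sessions
FROM   gv$session
WHERE  status = 'ACTIVE'
GROUP  BY inst_id
ORDER  BY inst_id;
```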

ORA-01586 / ORA-39701: database must be mounted EXCLUSIVE and not open for this operation


Dropping the database :
 
Mount the database in restrict mode:
 SQL> startup mount restrict;
ORACLE instance started.


Database mounted.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
dbm1

SQL> drop database;

Database dropped.

What if you get the below error:


SQL> startup mount  restrict;
ORACLE instance started.


Database mounted.
SQL> drop database;
drop database
*
ERROR at line 1:
ORA-01586: database must be mounted EXCLUSIVE and not open for this operation

Even the startup upgrade will fail:

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2232640 bytes
Variable Size             704646848 bytes
Database Buffers         2348810240 bytes
Redo Buffers              151146496 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE
Process ID: 46501
Session ID: 652 Serial number: 3


Then most likely you are dropping a RAC database, and you need to set cluster_database=FALSE.

SQL> alter system set cluster_database=FALSE scope=spfile;


System altered.

SQL> startup mount restrict;
ORACLE instance started.


Redo Buffers              151146496 bytes
Database mounted.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
dbm1

SQL> drop database;

Database dropped.
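Putting the whole RAC drop sequence together, a minimal sketch (assuming an spfile, and that all other instances are already stopped):

```sql
-- Run from a single instance; stop the others first with srvctl.
ALTER SYSTEM SET cluster_database = FALSE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT RESTRICT
DROP DATABASE;
```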

ORA-15204: database version is incompatible with diskgroup while creating the database.

While creating the database via DBCA you may see the below error:

"ORA-15204: database version 11.2.0.0.0 is incompatible with diskgroup DBFS_DG"

Fix:

Change the compatible parameter value in the dbca templates:

go to $ORACLE_HOME/assistants/dbca/templates

>vi New_Database.dbt  or  General_Purpose.dbc <= depends on which template you are using.

Change this line from:
         initParam name="compatible" value="11.2.0.0.0"
to:
         initParam name="compatible" value="11.2.0.3.0"
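The same edit can be scripted with sed instead of vi. A sketch follows, run here against a scratch copy of the template; in practice point TEMPLATE at the real file under $ORACLE_HOME/assistants/dbca/templates, and note that the target version 11.2.0.3.0 is just this environment's value:

```shell
# Work on a scratch copy for the demo; use your real dbca template path in practice.
TEMPLATE=/tmp/General_Purpose.dbc
printf '%s\n' '<initParam name="compatible" value="11.2.0.0.0"/>' > "$TEMPLATE"

# Bump the compatible parameter to match what the diskgroup requires.
sed -i 's/name="compatible" value="11.2.0.0.0"/name="compatible" value="11.2.0.3.0"/' "$TEMPLATE"
grep 'initParam name="compatible"' "$TEMPLATE"
```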


EXADATA X2-2: Creating a DBFS filesystem/share on EXADATA X2-2

 Environment:
EXADATA X2-2
OS: Linux
Image version: 11.2.3.2.0.120713
DB - 11.2.0.3

For Linux database servers, there are several steps to perform as root. Solaris database servers do not require this step and can skip it. First, add the oracle user to the fuse group on Linux. Run these commands as the root user.

(root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle

Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.

(root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
(root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf


For all database servers, create an empty directory that will be used as the mount point for the DBFS filesystem.

(root)# dcli -g ~/dbs_group -l root mkdir /dbfs_direct


[root@exadb01 onecommand]# pwd
/opt/oracle.SupportTools/onecommand

[root@exadb01 onecommand]# dcli -g dbs_group -l root mkdir /dbfs_share
[root@exadb01 onecommand]# dcli -g dbs_group -l root chown oracle:dba /dbfs_share
[root@exadb01 onecommand]#


exadb01.(MYDB1)  /home/oracle
>dba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 12 23:41:06 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.


SQL>
SQL>  create user dbfs_user identified by default tablespace dbfsts quota unlimited on dbfsts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;


SQL> conn dbfs_user/
Connected.
SQL> start dbfs_create_filesystem dbfsts FS1
No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS1', tbl_name =>
'T_FS1', tbl_tbs => 'dbfsts', lob_tbs => 'dbfsts', do_partition => false,
partition_key => 1, do_compress => false, compression => '', do_dedup => false,
do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS1', provider_name =>
'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS1', store_mount=>'FS1');
end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/FS1', 16895); end;
No errors.


copy the script:

-rw-r--r-- 1 oracle   oinstall 11592 Oct 13 00:09 mount-dbfs.sh

[root@exadb01 tmp]# dos2unix mount-dbfs.sh mount-dbfs.sh
dos2unix: converting file mount-dbfs.sh to UNIX format ...
dos2unix: converting file mount-dbfs.sh to UNIX format ...
[root@exadb01 tmp]#


Edit the variable settings at the top of the script for your environment. Edit or confirm the settings for the following variables in the script. Comments in the script will help you confirm the values for these variables.

    DBNAME
    MOUNT_POINT
    DBFS_USER
    ORACLE_HOME (should be the RDBMS ORACLE_HOME directory)
    LOGGER_FACILITY (used by syslog to log the messages/output from this script)
    MOUNT_OPTIONS
    DBFS_PASSWD (used only if WALLET=false)
    DBFS_PWDFILE_BASE (used only if WALLET=false)
    WALLET (must be true or false)
    TNS_ADMIN (used only if WALLET=true)
    DBFS_LOCAL_TNSALIAS
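As a sketch, the edited settings for the environment in this walkthrough would look roughly like the following. Every value is specific to this setup, so treat them all as assumptions and substitute your own:

```shell
# Illustrative mount-dbfs.sh settings -- all values are environment-specific assumptions.
DBNAME=MYDB                # DBFS repository database name
MOUNT_POINT=/dbfs_share    # empty directory created earlier as the mount point
DBFS_USER=dbfs_user        # repository schema owner
WALLET=false               # use DBFS_PASSWD/DBFS_PWDFILE_BASE instead of a wallet
echo "$DBNAME $MOUNT_POINT $DBFS_USER $WALLET"
```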

After editing, copy the script (rename it if desired or needed) to the proper directory (GI_HOME/crs/script) on the database nodes and set proper permissions on it, as the root user:


[root@exadb01 onecommand]# dcli -g dbs_group -l root -d /u01/app/11.2.0.3/grid/crs/script -f /tmp/mount-dbfs.sh
[root@exadb01 onecommand]# dcli -g dbs_group -l root chown oracle:dba /u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
[root@exadb01 onecommand]#  dcli -g dbs_group -l root chmod 750 /u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh



With the appropriate preparation steps for one of the two mount methods complete, the Clusterware resource for DBFS mounting can now be registered. Register the Clusterware resource by executing the following as the RDBMS owner of the DBFS repository database (typically the "oracle" user). The ORACLE_HOME and DBNAME should reference your Grid Infrastructure ORACLE_HOME directory and your DBFS repository database name, respectively. If mounting multiple filesystems, you may also need to modify the ACTION_SCRIPT and RESNAME. For more information, see the section below regarding Creating and Mounting Multiple DBFS Filesystems.

Create this short script and run it as the RDBMS owner (typically "oracle") on only one database server in your cluster.

##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=fsdb
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh

Change it to:
##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_share
DBNAME=MYDB
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.3/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh






>vi add-dbfs-resource.sh
exadb01.(MYDB1)  /home/oracle/sshaik
>sh ./add-dbfs-resource.sh



exadb01.(MYDB1)  /home/oracle/sshaik
>srvctl stop database -d MYDB -f
exadb01.(MYDB1)  /home/oracle/sshaik
>srvctl start database -d MYDB


>crsctl stat res dbfs_share -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_share
               OFFLINE OFFLINE      exadb01
               OFFLINE OFFLINE      exadb02
               OFFLINE OFFLINE      exadb03
               OFFLINE OFFLINE      exadb04

exadb01.(MYDB1)  /home/oracle/sshaik
> /u01/app/11.2.0.3/grid/bin/crsctl start resource dbfs_share
CRS-2672: Attempting to start 'dbfs_share' on 'exadb01'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb02'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb04'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb03'
CRS-2676: Start of 'dbfs_share' on 'exadb01' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb04' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb03' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb02' succeeded


exadb01.(MYDB1)  /home/oracle/sshaik
> /u01/app/11.2.0.3/grid/bin/crsctl stat res dbfs_share -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_share
               ONLINE  ONLINE       exadb01
               ONLINE  ONLINE       exadb02
               ONLINE  ONLINE       exadb03
               ONLINE  ONLINE       exadb04




    To unmount DBFS on all nodes, run this as the oracle user:
    (oracle)$ /bin/crsctl stop res dbfs_mount

    Note the following regarding restarting the database now that the dependencies have been added between the dbfs_mount resource and the DBFS repository database resource.

    Note: After creating the dbfs_mount resource, in order to stop the DBFS repository database when the dbfs_mount resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:

    (oracle)$ srvctl stop database -d fsdb
    PRCD-1124 : Failed to stop database fsdb and its services
    PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
    CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified

    Using the -f flag allows a successful shutdown and results in no output.

    Also note that once the dbfs_mount resource is started and then the database it depends on is shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the dbfs_mount resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that dbfs_mount is taken offline (crsctl stop resource dbfs_mount) at the same time the DBFS database is shut down.
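Putting that together, a clean shutdown in this walkthrough's setup might look like the following. These are cluster commands, not something you can run standalone; the resource name dbfs_share and database name MYDB are from this environment, so adjust for yours:

```shell
# Take the DBFS mount offline first, then stop the repository database.
$GRID_HOME/bin/crsctl stop resource dbfs_share
srvctl stop database -d MYDB -f
```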








Off note: the "crsctl stop cluster -all" syntax may not be used, as it leaves ohasd running, and Solaris database hosts require it to be restarted for the workaround to take effect.

ORA-01502: index or partition of such index is in unusable state

$
0
0
Exploring the ORA-01502 error and why we usually get this message. I am not explaining why or how the index status changed to UNUSABLE (mostly it is due to a table move, or an explicit alter index xxxx unusable).

You will get

ORA-01502: index  or partition of such index is in unusable state

If you have the parameter skip_unusable_indexes=false, then it makes sense that Oracle reported this error during DML activity.

If you don't care about the unusable indexes during DML, and during SELECTs you want the optimizer to choose a different (more expensive) execution plan, then you can set skip_unusable_indexes=true at the instance level or at the session level and move on.
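Both forms are one-liners (standard syntax):

```sql
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- session level
ALTER SYSTEM  SET skip_unusable_indexes = TRUE;   -- instance level
```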

What if you get this error even when you have the parameter skip_unusable_indexes set to true at the instance level?

i.e
SQL> show parameter skip

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

SKIP_UNUSABLE_INDEXES enables or disables the use and reporting of tables with unusable indexes or index partitions. If a SQL statement uses a hint that forces the usage of an unusable index, then this hint takes precedence over initialization parameter settings, including SKIP_UNUSABLE_INDEXES. If the optimizer chooses an unusable index, then an ORA-01502 error will result. (See Oracle Database Administrator's Guide for more information about using hints.)

Values:   true

    Disables error reporting of indexes and index partitions marked UNUSABLE. This setting allows all operations (inserts, deletes, updates, and selects) on tables with unusable indexes or index partitions.




and you still got the error during DML activity on this table, why?

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into TEST_TABLE (salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'SYS.TEST_INDEX_UNIQUE' or partition of such index is in unusable state

Example:
SQL> create table TEST_TABLE as select * from  cn.cn_d_salesreps;

Table created.

SQL> CREATE UNIQUE INDEX TEST_INDEX_UNIQUE ON TEST_TABLE (SALESREP_DIM_PK);

Index created.

SQL> commit;

Commit complete.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> select * from TEST_TABLE where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 684699485

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |  1470 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TABLE        |     1 |  1470 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | TEST_INDEX_UNIQUE |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SALESREP_DIM_PK"=95056)

Above explain plan shows the optimizer is using the index.

Now insert some data into the table... some DML:

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);

1 row created.

SQL> delete from TEST_TABLE where salesrep_dim_pk=55555;

1 row deleted.

SQL> commit;

Commit complete.


****** Now mark the index UNUSABLE *****
SQL> alter index TEST_INDEX_UNIQUE unusable;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608
***** This is bad: even though the index is unusable, the segments were not dropped for the index *****
**** The default behaviour is that Oracle drops the segments for an unusable index *****



SQL> select * from TEST_TABLE where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 74755328

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |   151 |   216K|    10   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |   151 |   216K|    10   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("SALESREP_DIM_PK"=95056)
       filter("SALESREP_DIM_PK"=95056)

****As expected the access path changed from INDEX to FULL TABLE SCAN as the index is unusable******

***** try doing some DML ********

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into TEST_TABLE (salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'TEST_INDEX_UNIQUE' or partition of such index is in unusable state

*** Since the index is a unique index, Oracle will not allow us to do any DML activity on the underlying table
on which the index is unusable, and will not drop the index segments either ****

  Note:
    If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint. Therefore, this setting does not disable error reporting for unusable indexes that are unique.
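Before rebuilding, the dictionary tells you everything that is in this state (sketch; locally partitioned indexes report their status through dba_ind_partitions instead):

```sql
SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';
```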



SQL> alter index TEST_INDEX_UNIQUE rebuild;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);

1 row created.

SQL> commit;

Commit complete.


Whereas an unusable non-unique index doesn't throw this error.

Create table TEST_TABLE as select * from my_table;

****Created non-unique index here ********
create index TEXT_IDX on TEST_TABLE(customer_num,customer_name);

commit;

SQL> select customer_num from TEST_TABLE where customer_num='8.1502701000322E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3224766131

-----------------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |    12 |    96 |     4   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| TEXT_IDX |    12 |    96 |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CUSTOMER_NUM"=8150270100032200)

SQL> select wo_num from TEST_TABLE where  wo_num='1.00037856314112E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3979868219

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |     7 |    49 |   317K  (2)| 00:16:09 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |     7 |    49 |   317K  (2)| 00:16:09 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("WO_NUM"=1000378563141120)
       filter("WO_NUM"=1000378563141120)

SQL> ALTER INDEX TEXT_IDX unusable;

Index altered.



SQL> select index_name,status from dba_indexes where index_name='TEXT_IDX';

INDEX_NAME                     STATUS
------------------------------ --------
TEXT_IDX                       UNUSABLE


SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------



SQL> exec dbms_stats.gather_table_stats('MYSCHEMA','TEST_TABLE',estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR ALL COLUMNS SIZE 1',cascade=>TRUE);

PL/SQL procedure successfully completed.


SQL> update TEST_TABLE set customer_num='1234567890' where customer_num='8.1502701000322E15';

6 rows updated.

SQL> alter index TExt_idx rebuild;


Index altered.

SQL> commit;

Commit complete.

SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------
               2.00976563


oracle Move datafile into ASM


How to move a datafile that was created on the OS to ASM.

Issue:
Datafile created on the OS instead of in ASM

Given:
RAC 3 nodes
tablespace has multiple datafiles 

Check the status of the file:

select df.file#, to_char(df.creation_time,'mm-dd-yyyy hh24:mi:ss') created,
       df.name, ts.name ts_name, df.status
from v$datafile df, v$tablespace ts
where df.ts# = ts.ts# and df.file# = 127;

FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    dbhome_1/dbs/DBNAME_DATA_01    APPS_TS_TX_IDX    ONLINE


RMAN> connect target /

connected to target database: DBNAME (DBID=2919937482)

RMAN> copy datafile 127 to '+DBNAME_DATA_01';

Starting backup at 10-DEC-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=854 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_2

channel ORA_DISK_1: starting datafile copy
input datafile file number=00127 name=/icm01/u0001/app/oracle/product/11.2.0/dbhome_1/dbs/DBNAME_DATA_01
output file name=+DBNAME_DATA_01/DBNAME/datafile/apps_ts_tx_idx.480.801661731 tag=TAG20121210T114844 RECID=54 STAMP=801661781
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:56
Finished backup at 10-DEC-12

Starting Control File and SPFILE Autobackup at 10-DEC-12
piece handle=+DBNAME_FRA_01/DBNAME/autobackup/2012_12_10/s_801661785.5682.801661787 comment=NONE
Finished Control File and SPFILE Autobackup at 10-DEC-12

RMAN> switch datafile 127 to copy;

datafile 127 switched to datafile copy "+DBNAME_DATA_01/DBNAME/datafile/apps_ts_tx_idx.480.801661731"

RMAN> exit


Recovery Manager complete.


SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 10 11:50:32 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    RECOVER


>rman

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Dec 10 11:51:22 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: DBNAME (DBID=2919937482)

RMAN> recover datafile 127;

Starting recover at 10-DEC-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3679 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3955 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=4524 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=4799 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=5082 instance=DBNAME1 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:07

Finished recover at 10-DEC-12

RMAN> exit


Recovery Manager complete.


SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 10 11:52:06 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    OFFLINE


SQL> alter database datafile 127 online;

Database altered.

SQL>

FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    ONLINE


DBNAMEb01cdp(DBNAME1)  /icm01/u0001/app/oracle/product/11.2.0/dbhome_1/dbs
>rm DBNAME_DATA_01
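
Recapping the steps above as one sequence (datafile number and disk group as used in this example; the file showed RECOVER status after the switch, hence the recover step before bringing it online):

```sql
RMAN> copy datafile 127 to '+DBNAME_DATA_01';   -- image copy into ASM
RMAN> switch datafile 127 to copy;              -- repoint the control file to the copy
RMAN> recover datafile 127;                     -- apply redo generated since the copy
SQL>  alter database datafile 127 online;       -- bring the file back online
```

Only after verifying the file is ONLINE in v$datafile should the old OS-level file be removed.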

Find Exadata IO saved by smart scan and offload to cells

Find the amount of I/O saved by smart scan vs. total I/O, or
the amount of I/O transported from the cells to the DB, or
the amount of I/O saved by the storage index, or
the amount of I/O saved by predicate offload.

Image version: 11.2.3.1.0.120304


Verify which functions and operators qualify for smart scan by querying V$SQLFN_METADATA.
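
A sketch of that check (column names as in the 11.2 V$SQLFN_METADATA view; OFFLOADABLE is 'YES' for functions that can be evaluated in the storage cells):

```sql
SQL> select name, offloadable
     from v$sqlfn_metadata
     where offloadable = 'YES';
```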



Example-1:

select inst_id, sid, value/1024/1024 IO_IN_MB, name stat_name
from gv$sesstat st, gv$statname sn
where st.statistic# = sn.statistic#
and st.inst_id = sn.inst_id
and st.sid = 676
and (sn.name like 'cell physical%' or sn.name like 'cell io%' or sn.name like 'physical%total bytes');


INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    0    physical read total bytes
3    676    0    physical write total bytes
3    676    0    cell physical IO interconnect bytes
3    676    0    cell physical IO bytes saved during optimized file creation
3    676    0    cell physical IO bytes saved during optimized RMAN file restore
3    676    0    cell physical IO bytes eligible for predicate offload
3    676    0    cell physical IO bytes saved by storage index
3    676    0    cell physical IO bytes sent directly to DB node to balance CPU
3    676    0    cell physical IO interconnect bytes returned by smart scan



SQL> select count(*) from cn.LINES_API_ALL where processed_period_id< 2011001;

  COUNT(*)
----------
   4300815
Elapsed: 00:00:44.54

Execution Plan
----------------------------------------------------------
Plan hash value: 616062978


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("PROCESSED_PERIOD_ID"<2011001)
       filter("PROCESSED_PERIOD_ID"<2011001)




INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    6873.57    physical read total bytes
3    676    0    physical write total bytes
3    676    6873.57    cell physical IO interconnect bytes
3    676    0    cell physical IO bytes saved during optimized file creation
3    676    0    cell physical IO bytes saved during optimized RMAN file restore
3    676    0    cell physical IO bytes eligible for predicate offload
3    676    0    cell physical IO bytes saved by storage index
3    676    0    cell physical IO bytes sent directly to DB node to balance CPU
3    676    0    cell physical IO interconnect bytes returned by smart scan

Here the entire ~6.8 GB crossed the interconnect and nothing was eligible for predicate offload, i.e. no smart scan occurred. Now force direct path reads by using a parallel hint.

SQL>  set timing on echo on linesize 1000 pages 300
SQL> set autot trace exp stat
SQL>  select /*+ PARALLEL(T,64) */count(*) from cn.LINES_API_ALL T where processed_period_id< 2011001;

Elapsed: 00:00:01.53

Execution Plan
----------------------------------------------------------
Plan hash value: 859133999



Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage("PROCESSED_PERIOD_ID"<2011001)
       filter("PROCESSED_PERIOD_ID"<2011001)

Statistics
----------------------------------------------------------
        192  recursive calls
       9920  db block gets
   37668038  consistent gets
   37347799  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    291779.69    physical read total bytes
3    676    0                  physical write total bytes
3    676    92.42            cell physical IO interconnect bytes
3    676    0                  cell physical IO bytes saved during optimized file creation
3    676    0                     cell physical IO bytes saved during optimized RMAN file restore
3    676    291779.68    cell physical IO bytes eligible for predicate offload
3    676    279727.9    cell physical IO bytes saved by storage index
3    676    0                     cell physical IO bytes sent directly to DB node to balance CPU
3    676    92.41            cell physical IO interconnect bytes returned by smart scan


From the above it is clear that smart scan offloaded the I/O to the cells and transported only the necessary data to the DB server, around 92 MB, and the overall elapsed time dropped from 44 seconds to approximately 2 seconds.
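
From those counters an offload percentage can be derived; a hedged sketch joining the same session stats shown above (statistic names as reported by gv$statname):

```sql
SQL> select round((1 - intr.value / nullif(elig.value, 0)) * 100, 2) "OFFLOAD_%"
     from  gv$sesstat intr, gv$sesstat elig, gv$statname i, gv$statname e
     where i.name = 'cell physical IO interconnect bytes returned by smart scan'
       and e.name = 'cell physical IO bytes eligible for predicate offload'
       and intr.statistic# = i.statistic# and intr.inst_id = i.inst_id
       and elig.statistic# = e.statistic# and elig.inst_id = e.inst_id
       and intr.inst_id = elig.inst_id
       and intr.sid = 676 and elig.sid = 676;
```

With ~92 MB returned out of ~291,780 MB eligible, that works out to roughly 99.97% offloaded.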

Example-2

CREATE TABLE AS SELECT forces direct path reads; let's check an example:

SQL> drop table sshaik_api_all;

Table dropped.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush shared_pool;

System altered.




SQL> alter session set cell_offload_processing=false;

Session altered.

Elapsed: 00:00:00.00
SQL> create table sshaik_api_all as select * from cn.LINES_API_ALL where processed_period_id<=2011002;

Table created.

Elapsed: 00:05:27.32

With cell_offload_processing disabled there is no smart scan or cell offloading, and the CTAS took around 5 min 27 sec.

INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    112    298494.52    physical read total bytes
3    112    6697.1             physical write total bytes
3    112    311888.73    cell physical IO interconnect bytes
3    112    0                    cell physical IO bytes saved during optimized file creation
3    112    0                    cell physical IO bytes saved during optimized RMAN file restore
3    112    0                   cell physical IO bytes eligible for predicate offload
3    112    0                    cell physical IO bytes saved by storage index
3    112    0                   cell physical IO bytes sent directly to DB node to balance CPU
3    112    0                    cell physical IO interconnect bytes returned by smart scan




SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.20
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.07
SQL> drop table sshaik_api_all;

Table dropped.

Elapsed: 00:00:00.81
SQL> purge dba_recyclebin;

DBA Recyclebin purged.



Elapsed: 00:00:00.00
SQL> alter session set cell_offload_processing=true;

Session altered.

Elapsed: 00:00:00.00
SQL> show parameter cell_offload_processing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offload_processing              boolean     TRUE


SQL> create table sshaik_api_all as select * from cn.LINES_API_ALL where processed_period_id<=2011002;

Table created.



Elapsed: 00:01:21.11

With smart scan and cell offloading enabled, the same CTAS took around 1 min 21 sec.

Plan hash value: 3947898682



Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("PROCESSED_PERIOD_ID"<=2011002)
       filter("PROCESSED_PERIOD_ID"<=2011002)



INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    112    298615.73    physical read total bytes
3    112    20694.93    physical write total bytes
3    112    19844.59    cell physical IO interconnect bytes
3    112    13966            cell physical IO bytes saved during optimized file creation
3    112    0                    cell physical IO bytes saved during optimized RMAN file restore
3    112    312442.61    cell physical IO bytes eligible for predicate offload
3    112    94806.78       cell physical IO bytes saved by storage index
3    112    0                    cell physical IO bytes sent directly to DB node to balance CPU
3    112    6219.6            cell physical IO interconnect bytes returned by smart scan


SQL> select count(*) from sshaik_api_all;

  COUNT(*)
----------
  11968704

Change Concurrent manager for a Program

How to change the concurrent manager for a program.

Ex:-
1) exclude the program from the standard manager:
SYSADMIN --> Concurrent --> Manager --> Define


 

--> Specialization rules:




Exclude the Program:



Now go to the custom manager and include the program:

11gR2 -Oracle Index usage monitoring


To start monitoring the index usage:

ALTER INDEX <owner>.<index_name> MONITORING USAGE;


To stop:

ALTER INDEX <owner>.<index_name> NOMONITORING USAGE;


To view the usage status, log in as the index owner and use:

select * from v$object_usage;

If viewing the stats as another user or a superuser, use the query below:

select d.username, io.name INDEX_NAME, t.name TABLE_NAME,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') MONITORING,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') INDEX_USED,
       ou.start_monitoring,
       ou.end_monitoring
from sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou,
     dba_users d
where io.owner# = d.user_id
  AND d.username = 'CN'
  and i.obj# = ou.obj#
  and io.obj# = ou.obj#
  and t.obj# = i.bo#;
 
Ex:-



EXADATA I/O CALIBRATE ON Database server

We can usually use the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure, but for Exadata it is recommended to manually set the calibration values on the database server.

Traditional Method:
DBMS_RESOURCE_MANAGER.CALIBRATE_IO
Ex:-
Unless you give a MAX_LATENCY of 10 or higher for the second parameter, it will fail.

SQL> SET SERVEROUTPUT ON
 DECLARE
 lat INTEGER;
iops INTEGER;
mbps INTEGER;
BEGIN
DBMS_RESOURCE_MANAGER.CALIBRATE_IO (84,10, iops, mbps, lat);
DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
dbms_output.put_line('max_mbps = ' || mbps);
end;
  /
max_iops = 34349
latency = 11
max_mbps = 10871

Error if you give a MAX_LATENCY below 10:
DBMS_RESOURCE_MANAGER.CALIBRATE_IO (84,1, iops, mbps, lat);
DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
dbms_output.put_line('max_mbps = ' || mbps);
end;

DECLARE
*
ERROR at line 1:
ORA-29355: NULL or invalid MAX_LATENCY argument specified
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
ORA-06512: at line 6


Exadata recommended:

SQL> select * from resource_io_calibrate$;

no rows selected

SQL> insert into resource_io_calibrate$ values(current_timestamp,current_timestamp, 0, 0, 200, 0, 0);

1 row created.

SQL> select * from resource_io_calibrate$;

START_TIME
---------------------------------------------------------------------------
END_TIME
---------------------------------------------------------------------------
  MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY  NUM_DISKS
---------- ---------- ---------- ---------- ----------
03-JAN-13 10.13.20.640962 PM
03-JAN-13 10.13.20.640962 PM
         0          0        200          0          0
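
The transcript does not show a commit; presumably one is needed, and the calibration values may only be picked up on instance startup, so a restart may be required as well (hedged; verify against the MOS note referenced below):

```sql
SQL> commit;
-- then restart the instance(s) so the new calibration values take effect
```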


Reference:- (Doc ID 1297112.1) Best Practices for Data Warehousing on the Oracle Database Machine X2-2


Oracle Ebiz R12 -- How To Clear Caches

 

In R12 you can directly clear the cache from UI as SYSADMIN.

  1. Functional level (individual product/all product cache)
    Functional Administrator -> Core Services -> Caching Framework -> Global Configuration -> "Go to specific cache or Clear All Cache

 As sysadmin go to -->Functional Administrator


 


 go to --> Core Services


 






Then go to --> Caching Framework











Pick the component from the list or select all to purge all caches:














Oracle 11g Remove trace files automatically using adrci


Oracle Automatic trace files cleanup:

In 11g we can clean/remove the trace files automatically using adrci.


adrci> help purge


Diagnostic data Purging in ADR is controlled by two attributes:
- SHORTP_POLICY controls automatic purging of short-lived files (e.g. trace and dump files); it is expressed in hours and defaults to 720 (30 days).
- LONGP_POLICY controls automatic purging of long-lived files (e.g. incidents); it is expressed in hours and defaults to 8760 (1 year).

     
adrci> show home
ADR Homes:
diag/rdbms/DB1/DB11
diag/rdbms/DB3/DB31
diag/rdbms/qa/QA1
diag/rdbms/prd/DB11
diag/rdbms/DB2/DB21


Pick the home

adrci> set home diag/rdbms/DB1/DB11

adrci> show control

ADR Home = /icm01/u0001/app/diag/rdbms/DB1/DB11:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1182197510           720                  8760                 2011-12-12 21:30:04.349541 -05:00      


To set the purging policy to 7 days:

adrci> set control (SHORTP_POLICY=168)
adrci> set control (LONGP_POLICY=168)

adrci> show control

ADR Home = /icm01/u0001/app/diag/rdbms/DB1/DB11:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1182197510           168                  168                  2013-01-14 11:21:51.600312 -05:00      

To manually purge all trace files older than 1 day (1440 minutes), including core dump directories (cdmp*):

adrci> purge -age 1440



To purge core files older than 6 days (8640 minutes):
 
adrci> purge -age 8640 -type CDUMP


It might be needed to also run the following additional command:

 
adrci> purge -age 8640 -type UTSCDMP



This removes sub-directories older than 6 days with names like "CDMP_*" from the trace directory.

Purge XML based Alert log file:

adrci> purge -age 60 -type ALERT

Note that this does not clean up the text-formatted alert.log file; the ADRCI interface only manages the XML-formatted alert log, not the text-formatted one.
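
Since ADRCI leaves the text alert.log alone, it has to be trimmed manually. A hedged sketch (the path below is an illustrative example, not from the transcript); truncating with `: >` instead of removing the file keeps the inode, so the instance's open file handle stays valid and logging continues into the same file:

```shell
# Stand-in for a grown text alert.log (example path, not a real ADR home)
ALERT=/tmp/demo_diag/trace/alert_DB11.log
mkdir -p "$(dirname "$ALERT")"
printf 'old alert entries\n' > "$ALERT"

# Truncate in place rather than rm: the background processes keep
# writing to the same (now empty) file.
: > "$ALERT"
wc -c < "$ALERT"
```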