
Features Deprecated or Desupported After Release 11.2


12c will be Oracle's next heavyweight database release, and as with every new version, some features are retired while new ones arrive: sqlplusw from 9i and iSQL*Plus from 10g, for example, were both dropped in later releases. Features to be deprecated or desupported in 12c include Database Control (DBConsole), OCFS on Windows, CSSCAN, CSALTER, cursor_sharing = 'SIMILAR', and the Oracle Net Connection Pooling feature. To preserve compatibility for older applications, it appears the RBO will not be removed outright in 12c for now.
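
Before planning an upgrade it is worth checking whether any of these deprecated settings are still in use. A minimal sketch in SQL*Plus (EXACT below is simply the default value; choose whichever supported value suits your workload):

-- Is the deprecated value still in use? (run as a DBA)
SQL> show parameter cursor_sharing

-- If it reports SIMILAR, move to a supported value before upgrading
SQL> alter system set cursor_sharing = 'EXACT' scope=both;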

More information is available in the following MOS notes:

Deprecated and Desupported Features after Oracle Database 11.2:

Document 1484775.1 Database Control To Be Desupported in DB Releases after 11.2
Document 1392280.1 Desupport of Oracle Cluster File System (OCFS) on Windows with Oracle DB 12
Document 1175293.1 Obsolescence Notice: Oracle COM Automation
Document 1175303.1 Obsolescence Notice: Oracle Objects for OLE
Document 1175297.1 Obsolescence Notice: Oracle Counters for Windows Performance Monitor
Document 1418321.1 CSSCAN and CSALTER To Be Desupported After DB 11.2
Document 1169017.1 Deprecating the cursor_sharing = ‘SIMILAR’ setting
Document 1469466.1: Deprecation of Oracle Net Connection Pooling feature in Oracle Database 11g Release 2


[12c New Feature] EM Database Express


EM Database Express is a new feature introduced in Oracle Database 12c. It replaces the DBConsole of earlier releases and makes deploying web-based EM management of the database faster and more convenient.

12c EM Database Express Architecture

Starting 12c EM Database Express is now simpler:

 

1. Verify the dispatchers parameter

SQL> show parameter dispatcher

NAME              TYPE        VALUE
----------------- ----------- ------------------------------
dispatchers       string      (PROTOCOL=TCP) (SERVICE=cdb1XDB)
max_dispatchers   integer

 

2. Run the DBMS_XDB.setHttpPort procedure

SQL> exec dbms_XDB.setHttpPort(5500);

PL/SQL procedure successfully completed.
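
To confirm which port EM Express will actually listen on before opening a browser, a quick check is sketched below (in 12c the same calls are also exposed through DBMS_XDB_CONFIG, for example for the HTTPS port):

-- Which HTTP port is XDB (and therefore EM Express) listening on?
SQL> select dbms_xdb.gethttpport() from dual;

-- Optional: serve EM Express over HTTPS instead
SQL> exec dbms_xdb_config.sethttpsport(5500)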

3. Open http://<ip address>:5500/em/login in a browser and log in

 

12c EM Database Express Architecture2

 

 

The page after logging in:

 

12c EM Database Express Architecture3

[12c New Feature] Installing 12c Standalone Grid Infrastructure



 

(Installation screenshots: install 12c grid 1 through install 12c grid asm 14, covering the OUI installer steps and the ASM disk group configuration)

[grid@localhost stage]$ unzip grid_12.1BETA2.zip
[root@localhost ~]# /g01/app/grid/product/12.1.0/grid/rootupgrade.sh
[root@localhost ~]# /g01/app/grid/product/12.1.0/grid/rootupgrade.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/app/grid/product/12.1.0/grid
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

Error while detecting Oracle Grid Infrastructure. ASMCA needs Oracle Grid Infrastructure to configure ASM.

2013/03/31 02:52:34 CLSRSC-164: ASM upgrade failed

2013/03/31 02:52:34 CLSRSC-304: Failed to upgrade ASM for Oracle Restart configuration

Died at /g01/app/grid/product/12.1.0/grid/crs/install/crsupgrade.pm line 2423.
The command ‘/g01/app/grid/product/12.1.0/grid/perl/bin/perl -I/g01/app/grid/product/12.1.0/grid/perl/lib -I/g01/app/grid/product/12.1.0/grid/crs/install /g01/app/grid/product/12.1.0/grid/crs/install/roothas.pl -upgrade’ execution failed
[root@localhost ~]# /g01/app/grid/product/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/app/grid/product/12.1.0/grid
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/app/grid/product/12.1.0/grid/crs/install/crsconfig_params
2013/03/31 02:52:49 CLSRSC-350: Cannot configure two CRS instances on the same cluster

2013/03/31 02:52:49 CLSRSC-352: CRS is already configured on this node for crshome=/g01/app/grid/product/11.2.0/grid

The command ‘/g01/app/grid/product/12.1.0/grid/perl/bin/perl -I/g01/app/grid/product/12.1.0/grid/perl/lib -I/g01/app/grid/product/12.1.0/grid/crs/install /g01/app/grid/product/12.1.0/grid/crs/install/roothas.pl ‘ execution failed
[root@localhost ~]# /g01/app/grid/product/12.1.0/grid/rootupgrade.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/app/grid/product/12.1.0/grid
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

Error while detecting Oracle Grid Infrastructure. ASMCA needs Oracle Grid Infrastructure to configure ASM.

2013/03/31 02:53:06 CLSRSC-164: ASM upgrade failed

2013/03/31 02:53:06 CLSRSC-304: Failed to upgrade ASM for Oracle Restart configuration

Died at /g01/app/grid/product/12.1.0/grid/crs/install/crsupgrade.pm line 2423.
The command ‘/g01/app/grid/product/12.1.0/grid/perl/bin/perl -I/g01/app/grid/product/12.1.0/grid/perl/lib -I/g01/app/grid/product/12.1.0/grid/crs/install /g01/app/grid/product/12.1.0/grid/crs/install/roothas.pl -upgrade’ execution failed
[root@localhost ~]# /g01/app/grid/product/12.1.0/grid/rootupgrade.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/app/grid/product/12.1.0/grid
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/app/grid/product/12.1.0/grid/crs/install/crsconfig_params

ASM Configuration upgraded successfully.

Creating OCR keys for user ‘grid’, privgrp ‘oinstall’..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4664: Node localhost successfully pinned.
2013/03/31 02:56:41 CLSRSC-329: Replacing Clusterware entries in file ‘/etc/inittab’
2013/03/31 03:00:26 CLSRSC-329: Replacing Clusterware entries in file ‘/etc/inittab’
localhost 2013/03/31 03:04:28 /g01/app/grid/product/12.1.0/grid/cdata/localhost/backup_20130331_030428.olr

localhost 2013/01/29 14:56:43 /g01/app/grid/product/11.2.0/grid/cdata/localhost/backup_20130129_145643.olr
2013/03/31 03:04:28 CLSRSC-327: Successfully configured Oracle Grid Infrastructure for a Standalone Server

 

 

 
[grid@localhost ~]$ pstree -a
init
├─VBoxService
│ ├─{VBoxService}
│ ├─{VBoxService}
│ ├─{VBoxService}
│ ├─{VBoxService}
│ ├─{VBoxService}
│ ├─{VBoxService}
│ └─{VBoxService}
├─acpid
├─anacron -s
├─atd
├─auditd
│ ├─audispd
│ │ └─{audispd}
│ └─{auditd}
├─automount
│ ├─{automount}
│ ├─{automount}
│ ├─{automount}
│ └─{automount}
├─avahi-daemon
│ └─avahi-daemon
├─crond
├─cssdagent
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ ├─{cssdagent}
│ └─{cssdagent}
├─cupsd
├─dbus-daemon –system
├─dhclient -1 -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
├─evmd.bin
│ ├─evmlogger.bin -o /g01/app/grid/product/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l/g01/app/grid/product/12.1.0/grid/log/[H
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ ├─{evmd.bin}
│ └─{evmd.bin}
├─gam_server
├─gdm-binary -nodaemon
│ └─gdm-binary -nodaemon
│ ├─Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7
│ └─gdmgreeter
├─gdm-rh-security
│ └─{gdm-rh-security}
├─gpm -m /dev/input/mice -t exps2
├─hald
│ └─hald-runner
│ ├─hald-addon-keyb
│ ├─hald-addon-keyb
│ ├─hald-addon-keyb
│ └─hald-addon-stor
├─hcid
├─hidd --server
├─hpiod
├─hpssd.py ./hpssd.py
├─init.ohasd /etc/init.d/init.ohasd run
├─iscsid
├─iscsid
├─iscsiuio
│ ├─{iscsiuio}
│ ├─{iscsiuio}
│ └─{iscsiuio}
├─klogd -x
├─mingetty tty1
├─mingetty tty2
├─mingetty tty3
├─mingetty tty4
├─mingetty tty5
├─mingetty tty6
├─ocssd.bin
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ ├─{ocssd.bin}
│ └─{ocssd.bin}
├─ohasd.bin reboot
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ ├─{ohasd.bin}
│ └─{ohasd.bin}
├─ora_aqpc_cdb1
├─ora_arc0_cdb1
├─ora_arc1_cdb1
├─ora_arc2_cdb1
├─ora_arc3_cdb1
├─ora_cjq0_cdb1
├─ora_ckpt_cdb1
├─ora_d000_cdb1
├─ora_dbrm_cdb1
├─ora_dbw0_cdb1
├─ora_dia0_cdb1
├─ora_diag_cdb1
├─ora_fbda_cdb1
├─ora_gen0_cdb1
├─ora_lgwr_cdb1
├─ora_lreg_cdb1
├─ora_mman_cdb1
├─ora_mmnl_cdb1
├─ora_mmon_cdb1
├─ora_ofsd_cdb1
├─ora_p000_cdb1
├─ora_p001_cdb1
├─ora_p002_cdb1
├─ora_p003_cdb1
├─ora_pmon_cdb1
├─ora_psp0_cdb1
├─ora_q001_cdb1
├─ora_q002_cdb1
├─ora_qm01_cdb1
├─ora_reco_cdb1
├─ora_s000_cdb1
├─ora_smco_cdb1
├─ora_smon_cdb1
├─ora_tmon_cdb1
├─ora_tt00_cdb1
├─ora_vktm_cdb1
├─ora_w000_cdb1
├─oraagent.bin
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ ├─{oraagent.bin}
│ └─{oraagent.bin}
├─pcscd
│ ├─{pcscd}
│ └─{pcscd}
├─portmap
├─rpc.idmapd
├─rpc.statd
├─sdpd
├─sendmail
├─sendmail
├─smartd -q never
├─sshd
│ ├─sshd
│ │ ├─bash
│ │ │ └─su - grid
│ │ │ └─bash
│ │ ├─bash
│ │ │ └─su - grid
│ │ │ └─bash
│ │ │ └─pstree -a
│ │ ├─bash
│ │ │ └─su - grid
│ │ │ └─bash
│ │ │ └─tail -f ohasd.log
│ │ └─sftp-server
│ └─sshd
│ └─sshd
│ └─xterm -ls -display localhost:10.0
│ └─bash
├─syslogd -m 0
├─tnslsnr LISTENER -inherit
├─tnslsnr LISTENER -inherit
├─udevd -d
├─xfs -droppriv -daemon
├─xinetd -stayalive -pidfile /var/run/xinetd.pid
└─yum-updatesd -tt /usr/sbin/yum-updatesd

[grid@localhost ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

# export GRID_HOME=/g01/app/grid/product/11.2.0/grid
export GRID_HOME=/g01/app/grid/product/12.1.0/grid
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:/usr/bin:/usr/sbin:/bin:/sbin
export ORACLE_SID=+ASM
# export ORACLE_HOME=/g01/app/grid/product/11.2.0/grid
export ORACLE_HOME=/g01/app/grid/product/12.1.0/grid
export ORACLE_BASE=/g01/app/grid

asmca

[grid@localhost tmp]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.EXTDG.dg
               ONLINE  OFFLINE      localhost                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       localhost                STABLE
ora.NORDG.dg
               OFFLINE OFFLINE      localhost                STABLE
ora.SYSTEMDG.dg
               ONLINE  OFFLINE      localhost                STABLE
ora.asm
               ONLINE  OFFLINE      localhost                STABLE
ora.ons
               OFFLINE OFFLINE      localhost                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       localhost                STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       localhost                STABLE
--------------------------------------------------------------------------------

 

 

[Oracle Database 12c New Feature] 12c DataPump expdp/impdp New Features


Oracle Database 12c adds a number of new DataPump expdp/impdp features. These naturally include support for CDBs, plus several others.

 

For example, the DISABLE_ARCHIVE_LOGGING / RECOVERY_LOGGING transform reduces the redo that TABLE/INDEX objects generate during an impdp import; note that it only reduces redo, it does not eliminate it.

 

The basic syntax is as follows:

$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE
$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TRANSFORM=DISABLE_ARCHIVE_LOGGING:N:INDEX

 

  • Note that even with DISABLE_ARCHIVE_LOGGING:Y the import is not completely redo-free (a way to measure the difference is sketched below)
  • For a database in FORCE LOGGING mode, DISABLE_ARCHIVE_LOGGING:Y has no effect
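
A simple way to verify the first point is to compare the redo generated with and without the transform. A minimal sketch (it assumes a reasonably quiet instance, so that the delta mostly reflects the import itself):

-- Snapshot the instance-wide redo counter before the import
SQL> select value from v$sysstat where name = 'redo size';

-- ... run the impdp, with or without TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y ...

-- Snapshot again afterwards; the difference is the redo generated by the import
SQL> select value from v$sysstat where name = 'redo size';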

 

Usage in practice:

Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 – 64bit Beta
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 17
Current log sequence 19

 
oracle@localhost:~$ expdp system/oracle dumpfile=temp:ogg_maclean.dmp schemas=ogg_maclean

Export: Release 12.1.0.0.2 – Beta on Sun Apr 28 05:14:00 2013

Copyright (c) 1982, 2012, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 – 64bit Beta
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_02″: system/******** dumpfile=temp:ogg_maclean.dmp schemas=ogg_maclean
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 30 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS1″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS10″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS2″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS3″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS4″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS5″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS6″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS7″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS8″ 2.298 MB 84000 rows
. . exported “OGG_MACLEAN”.”MACLEAN_PRESS9″ 2.298 MB 84000 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
/tmp/ogg_maclean.dmp
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully completed at Sun Apr 28 05:15:01 2013 elapsed 0 00:00:57

oracle@localhost:~$ ls -lh /tmp/ogg_maclean.dmp
-rw-r—– 1 oracle oinstall 24M Apr 28 05:15 /tmp/ogg_maclean.dmp

 

 
oracle@localhost:~$ impdp system/oracle dumpfile=temp:ogg_maclean.dmp remap_schema=ogg_maclean:ogg_maclean1

Import: Release 12.1.0.0.2 – Beta on Sun Apr 28 05:18:18 2013

Copyright (c) 1982, 2012, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 – 64bit Beta
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** dumpfile=temp:ogg_maclean.dmp remap_schema=ogg_maclean:ogg_maclean1
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS1″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS10″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS2″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS3″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS4″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS5″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS6″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS7″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS8″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN1″.”MACLEAN_PRESS9″ 2.298 MB 84000 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Apr 28 05:18:31 2013 elapsed 0 00:00:10

 
Now the same import with DISABLE_ARCHIVE_LOGGING:
oracle@localhost:~$ impdp system/oracle dumpfile=temp:ogg_maclean.dmp remap_schema=ogg_maclean:ogg_maclean2 TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Import: Release 12.1.0.0.2 – Beta on Sun Apr 28 05:21:45 2013

Copyright (c) 1982, 2012, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 – 64bit Beta
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** dumpfile=temp:ogg_maclean.dmp remap_schema=ogg_maclean:ogg_maclean2 TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS1″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS10″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS2″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS3″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS4″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS5″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS6″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS7″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS8″ 2.298 MB 84000 rows
. . imported “OGG_MACLEAN2″.”MACLEAN_PRESS9″ 2.298 MB 84000 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Apr 28 05:21:56 2013 elapsed 0 00:00:09

 

Exporting view data the same way tables are exported

Exporting Views as Tables writes out a table definition plus the data returned by the view, not just the view definition, along with dependent objects such as constraints and grants.

 

SQL> create view cnt as select count(*) c1 from MACLEAN_PRESS1;

View created.

oracle@localhost:~$ expdp system/oracle dumpfile=temp:view.dmp views_as_tables=ogg_maclean.cnt

Export: Release 12.1.0.0.2 – Beta on Sun Apr 28 05:52:49 2013

Copyright (c) 1982, 2012, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.0.2 – 64bit Beta
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_TABLE_01″: system/******** dumpfile=temp:view.dmp views_as_tables=ogg_maclean.cnt
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported “OGG_MACLEAN”.”CNT” 5.046 KB 1 rows
Master table “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/tmp/view.dmp
Job “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully completed at Sun Apr 28 05:53:01 2013 elapsed 0 00:00:10

 

[Database 12c] Manually Creating a CDB (Container Database)


Manually creating a database is a skill nearly every DBA needs to master, so what does manually creating a Container Database look like in Database 12c?

There is currently only a small difference between creating a Container Database and an ordinary database in 12c: you must specify ENABLE PLUGGABLE DATABASE, and a database that has already been created cannot currently be converted into a container database.
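
Once the database is built, whether it really is a CDB can be confirmed straight from the data dictionary; the same quick check reappears in the PDB notes later in this roundup:

-- YES here means the database was created with ENABLE PLUGGABLE DATABASE
SQL> select name, cdb from v$database;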

Create the required directories

mkdir -p /stage/oradata
mkdir -p /stage/fr
mkdir -p /u01/app/oracle/admin/MACLEANCDB/adump

Then we create the instance initialization file and create the DB:

 

 

1. INIT.ORA

##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
##############################################################################

###########################################
# 
###########################################
_enable_pluggable_database=true

###########################################
# Cache and I/O
###########################################
db_block_size=8192

###########################################
# Cursors and Library Cache
###########################################
open_cursors=300

###########################################
# Database Identification
###########################################
db_domain=""
db_name="MACLEANC"

###########################################
# File Configuration
###########################################
db_create_file_dest="/stage/oradata"
db_recovery_file_dest="/stage/fr"
db_recovery_file_dest_size=5061476352

###########################################
# Miscellaneous
###########################################
compatible=12.0.0.0.0
db_unique_name="MACLEANCDB"
diagnostic_dest=/u01/app/oracle

###########################################
# Network Registration
###########################################
#local_listener=LISTENER_MACLEANCDB

###########################################
# Processes and Sessions
###########################################
processes=300

###########################################
# SGA Memory
###########################################
sga_target=1022361600

###########################################
# Security and Auditing
###########################################
audit_file_dest="/u01/app/oracle/admin/MACLEANCDB/adump"
audit_trail=db
remote_login_passwordfile=EXCLUSIVE

###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=MACLEANCDBXDB)"

###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=340787200

###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_tablespace=UNDOTBS1

 

 

 

2. Create the password file

 

 

oracle@localhost:~$ /u01/app/oracle/product/12.1.0/dbhome_1/bin/orapwd file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwMACLEANCDB force=y extended=y

Enter password for SYS:

 

 

3. Create the database

 

 

oracle@localhost:~$ export ORACLE_SID=MACLEANCDB
oracle@localhost:~$ sqlplus / as sysdba

SQL> startup nomount pfile='init.ora';
ORACLE instance started.

Total System Global Area 1018830848 bytes
Fixed Size                  2268040 bytes
Variable Size             268436600 bytes
Database Buffers          742391808 bytes
Redo Buffers                5734400 bytes

CREATE DATABASE "MACLEANC"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 700M AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 550M AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE  SIZE 200M AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET UTF8
LOGFILE GROUP 1  SIZE 51200K,
GROUP 2  SIZE 51200K,
GROUP 3  SIZE 51200K
USER SYS IDENTIFIED BY "oracle" USER SYSTEM IDENTIFIED BY "oracle"
enable pluggable database;

Database created.

set linesize 2048;
column ctl_files NEW_VALUE ctl_files;
select concat('control_files=''', concat(replace(value, ', ', ''','''), '''')) ctl_files from v$parameter where name ='control_files';
host echo &ctl_files >> /u01/app/oracle/admin/MACLEANCDB/scripts/init.ora;
spool off

The echo above appends the control_files information to init.ora ==> echo &ctl_files >> /u01/app/oracle/admin/MACLEANCDB/scripts/init.ora
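
If you prefer not to spool it, the same information can be read back from the running instance; a sketch (the /tmp path below is just an example target):

-- The generated control file paths, which must end up in init.ora
SQL> show parameter control_files

-- Or dump every in-memory parameter, control_files included, into a pfile
SQL> create pfile='/tmp/init_from_memory.ora' from memory;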

 

 

4. Create the default USERS tablespace

 

SQL> CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  SIZE 5M AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT  AUTO;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TABLESPACE "USERS";

Database altered.

 

 

 

5. Run the required data dictionary creation scripts

 

 

 

alter session set "_oracle_script"=true;
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b catalog /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catalog.sql;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b catblock /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catblock.sql;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b catproc /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catproc.sql;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b catoctk /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catoctk.sql;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b owminst /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/owminst.plb;
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b pupbld -u SYSTEM/&&systemPassword /u01/app/oracle/product/12.1.0/dbhome_1/sqlplus/admin/pupbld.sql;
connect "SYSTEM"/"&&systemPassword"
set echo on
spool /u01/app/oracle/admin/MACLEANCDB/scripts/sqlPlusHelp.log append
host perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/catcon.pl -l /u01/app/oracle/admin/MACLEANCDB/scripts -b hlpbld -u SYSTEM/&&systemPassword -a 1 /u01/app/oracle/product/12.1.0/dbhome_1/sqlplus/admin/help/hlpbld.sql 1helpus.sql;
@/u01/app/oracle/product/12.1.0/dbhome_1/sqlplus/admin/help/hlpbld.sql helpus.sql;

 

 

 

6. Create a PDB

 

 

cp init.ora  /u01/app/oracle/product/12.1.0/dbhome_1/dbs/initMACLEANCDB.ora 

startup force;

CREATE PLUGGABLE DATABASE MACLEANCDB ADMIN USER MACadmin IDENTIFIED BY oracle
 FILE_NAME_CONVERT=(
  '/stage/oradata/MACLEANCDB/DC36ED41771D435CE0430100007FA00B/datafile/o1_mf_system_8rns0lxf_.dbf', '/stage/oradata/PDB1/datafile/system01.clone',
  '/stage/oradata/MACLEANCDB/DC36ED41771D435CE0430100007FA00B/datafile/o1_mf_sysaux_8rns13dk_.dbf', '/stage/oradata/PDB1/datafile/sysaux1.dbf.clone',
  '/stage/oradata/MACLEANCDB/DC36ED41771D435CE0430100007FA00B/datafile/o1_mf_temp_8rns1d89_.tmp', '/stage/oradata/PDB1/datafile/temp1.tmp.clone'
  )
 STORAGE UNLIMITED;
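
A newly created PDB is left in MOUNTED state, so a short follow-up (a sketch, reusing the name from the statement above) opens it and confirms it is usable:

-- Open the new PDB and check its state
alter pluggable database MACLEANCDB open;
select con_id, name, open_mode from v$pdbs;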

Oracle Database 12c New Release Information Roundup


12c database2

Oracle's flagship product, Oracle Database 12c, is in the final countdown to release; it may ship before OOW Shanghai this July.

Oracle Database 12c is a heavyweight RDBMS that thousands of Oracle software engineers spent some seven to eight years building. It is arguably the most technically advanced database product in the world today; some industry watchers put Oracle's technical lead over its rivals at roughly five years (one opinion among many).

 

Here we round up some of what is new in 12c!

 

[12c New Feature] Installing 12c Standalone Grid Infrastructure

[12c New Feature] EM Database Express

Features Deprecated or Desupported After Release 11.2

A look at the 12 Oracle Database 12c new features presented by Tom

A look at the Oracle Database 12.1 new feature: Pluggable Databases

The 12c pagination feature: the FETCH FIRST ROWS / OFFSET ... FETCH NEXT row-limiting clause

[12c New Feature] The dbms_stats report_gather_auto_stats statistics reporting feature

12c New Feature: Recover Table

The Oracle Database 12c (12.1) Beta has entered internal testing

 

 

12c database1

12c 12.1.0.0.2 Beta

 

[12c New Feature] New Enqueue Locks Added in 12c


The enqueue locks newly added in 12c are listed below:

 

Worth noting: quite a few enqueues were added for the CDB architecture, for example:

BC ==> Container lock held while creating/dropping a container

PB ==> Enqueue used to synchronize PDB DDL operations

select A.* from ksqst_12cR1 A where  A.KSQSTTYP not in (select B.KSQSTTYP from ksqst_11gR2@MACDBN  B);

 

AC Synchronizes partition id
AQ kwsptGetOrMapDqPtn
AQ kwsptGetOrMapQPtn
BA subscriber access to bitmap
BC Container lock held while creating a container
BC Container lock held while dropping a container
BC Group lock held while creating a contained file
BC Group lock held while creating a container
BC Group lock held while dropping a container group
BI Enqueue held while a contained file is cleaned up or deleted
BI Enqueue held while a contained file is created
BI Enqueue held while a contained file is identified
BV Enqueue held while a container group is rebuilding
BZ Enqueue held while a contained file is resized
CB Synchronizes accesses to the CBAC roles cached in KGL
CC decrypting and caching column key
CP Synchronization
FH Serializes flush of ILM stats to disk
FO Synchronizes various Oracle File system operations
IC Gets a unique client ID
IF File Close
IF File Open
IP Enqueue used to synchronize instance state changes for PDBs
KI Synchronizes Cross-Instance Calls
MC Serializes log creation/destruction with log flushes
MF Serializes flushes for a SGA log in bkgnd
MF Serializes flushes for a single SGA log – client
MF Serializes flushes for a single SGA log – destroy
MF Serializes flushes for a single SGA log – error earlier
MF Serializes flushes for a single SGA log – space lack
MF Serializes multiple processes in creating the swap space
OP Synchronizing access to ols$profile when deleting unused profiles
OP Synchronizing access to ols$user when inserting user entries
PA lock held for during modify a privilege capture
PA lock held for during reading privilege captur status
PB Enqueue used to synchronize PDB DDL operations
PQ kwslbFreShadowShrd:LB syncronization with Truncate
PQ kwsptChkTrncLst:Truncate
PQ kwsptLoadDqCache: Add DQ Partitions.
PQ kwsptLoadDqCache:Drop DQ Partitions.
PQ kwsptLoadQCache: Add Q Partitions.
PQ kwsptLoadQCache:Drop Q Partitions.
PQ kwsptMapDqPtn:Drop DQ Partitions in foreground
PQ kwsptMapQPtn: Add Q Partitions in foreground
PY Database RTA info access on AVM
PY Instance RTA info access on AVM
RA Flood control in RAC. Acquired in no-wait.
RQ AQ indexed cached commit
RQ AQ uncached commit WM update
RQ AQ uncached dequeue
RQ Cross process updating disk
RQ Cross(export) – truncate subshard
RQ Cross(import) – free shadow shard
RQ Dequeue updating scn
RQ Enqueue commit rac cached
RQ Enqueue commit uncached
RQ Free shadow – Cross(import) shard
RQ Parallel cross(update scn) – truncate subshard
RQ Truncate – Cross(export) subshard
RZ Synchronizes access to the foreign log cache while a structure is being inserted
RZ Synchronizes access to the foreign log cache while a structure is being removed
SG Synchronize access to ols$groups when creating a group
SG Synchronize access to ols$groups when zlllabGroupTreeAddGroup does a read
SG Synchronizing access to ols$groups when alter group parent
SG Synchronizing access to ols$groups when dropping a group
ZS lock held while writing to/renaming/deleting spillover audit file

[12c New Feature] New Background Processes in 12c


The new background processes in 12c include, but are not limited to, the following:

 

OFSD Oracle File Server BG
RMON rolling migration monitor
IPC0 IPC Service 0
BW36 db writer process 36
BW99 db writer process 99
TMON Transport Monitor
RTTD Redo Transport Test Driver
TPZ1 Test Process Z1
TPZ2 Test Process Z2
TPZ3 Test Process Z3
LREG Listener Registration
AQPC AQ Process Coord
FENC IOServer fence monitor
VUBG Volume Driver Umbilical Background
SCRB ASM Scrubbing Master

 

 

Note that the LREG process now handles registration with the listener:

Service registration enables the listener to determine whether a database service and its service handlers are available. A service handler is a dedicated server process or dispatcher that acts as a connection point to a database. During registration, the LREG process provides the listener with the instance name, database service names, and the type and addresses of service handlers. This information enables the listener to start a service handler when a client request arrives.

Figure 16-5 shows two databases, each on a separate host. The database environment is serviced by two listeners, each on a separate host. The LREG process running in each database instance communicates with both listeners to register the database.
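
A quick way to confirm the new process on a running 12c instance, and to trigger a registration on demand, is sketched below:

-- LREG appears as an ordinary background process in 12c
SQL> select name, description from v$bgprocess where name = 'LREG';

-- Ask the instance to register its services with the listener immediately
SQL> alter system register;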

 

As of this writing, the diagram in the official 12c documentation is still wrong: it still shows PMON registering with the listener.

12c pmon LREG

 

 

Reference:

E16655_01/E16655_01/server.121/e17633/dist_pro.htm#CHDIBHAD


[12c New Feature] New Statistics Added in 12cR1



 

select  A.* from v$sysstat A  where A.name not in (select B.name from v$sysstat@db_11gR2 B);

 

STATISTIC# NAME CLASS VALUE STAT_ID CON_ID
52 physical read partial requests 8 0 286702467 0
54 physical write requests optimized 8 0 2483607112 0
55 physical write request redirties 8 0 4146911311 0
56 physical write total bytes optimized 8 0 4085960041 0
57 physical write partial requests 8 0 1535615968 0
70 ka messages sent 32 0 4222258831 0
71 ka grants received 32 0 2310418695 0
81 consistent gets pin 8 248409 1168838199 0
82 consistent gets pin (fastpath) 8 240756 2910712465 0
83 consistent gets examination 8 46775 1966540185 0
84 consistent gets examination (fastpath) 8 45808 1990445227 0
86 fastpath consistent get quota limit 40 0 560973176 0
178 flashback securefile cache read optimizations for block new 8 0 955255216 0
179 flashback securefile direct read optimizations for block new 8 0 963322245 0
180 physical reads cache for securefile flashback block new 8 0 2429466467 0
181 physical reads direct for securefile flashback block new 8 0 3121545084 0
184 data warehousing scanned objects 8 0 247471814 0
185 data warehousing scanned chunks 8 0 3880771368 0
186 data warehousing scanned chunks – memory 8 0 1765983694 0
187 data warehousing scanned chunks – flash 8 0 3811273611 0
188 data warehousing scanned chunks – disk 8 0 1684884558 0
189 data warehousing evicted objects 8 0 1827708704 0
190 data warehousing evicted objects – cooling 8 0 1769197766 0
191 data warehousing evicted objects – replace 8 0 547725926 0
192 data warehousing cooling action 8 0 2905230597 0
200 Streaming Stall Reap 2 0 3489516369 0
201 Streaming No-Stall Reap 2 0 2378677367 0
210 redo writes (group 0) 2 164 2952991530 0
211 redo writes (group 1) 2 16 1083730459 0
212 redo writes (group 2) 2 0 2759403975 0
213 redo writes (group 3) 2 0 3475566097 0
214 redo writes (group 4) 2 0 1807859197 0
215 redo writes (group 5) 2 0 1792560815 0
216 redo writes (group 6) 2 0 1695728381 0
217 redo writes (group 7) 2 0 1074957749 0
218 redo writes adaptive all 2 180 3061077218 0
219 redo writes adaptive worker 2 180 3220418890 0
221 redo blocks written (group 0) 2 995 2520028696 0
222 redo blocks written (group 1) 2 301 3244346714 0
223 redo blocks written (group 2) 2 0 1273391004 0
224 redo blocks written (group 3) 2 0 1050845280 0
225 redo blocks written (group 4) 2 0 2795831152 0
226 redo blocks written (group 5) 2 0 615604096 0
227 redo blocks written (group 6) 2 0 764128333 0
228 redo blocks written (group 7) 2 0 435637049 0
229 redo write size count (   4KB) 2 145 4206847440 0
230 redo write size count (   8KB) 2 15 3604386338 0
231 redo write size count (  16KB) 2 11 1937637258 0
232 redo write size count (  32KB) 2 7 2689404784 0
233 redo write size count (  64KB) 2 0 3887142398 0
234 redo write size count ( 128KB) 2 2 2998280397 0
235 redo write size count ( 256KB) 2 0 2120393820 0
236 redo write size count ( 512KB) 2 0 3912524051 0
237 redo write size count (1024KB) 2 0 395882065 0
238 redo write size count (inf) 2 0 4145578355 0
251 redo synch time overhead (usec) 128 3142053 3961087021 0
252 redo synch time overhead count (  2ms) 128 35 1771370497 0
253 redo synch time overhead count (  8ms) 128 0 2324186582 0
254 redo synch time overhead count ( 32ms) 128 0 2882285036 0
255 redo synch time overhead count (128ms) 128 0 1234629759 0
256 redo synch time overhead count (inf) 128 3 2239006192 0
261 redo write info find 2 38 3584739253 0
262 redo write info find fail 2 0 553778103 0
267 gc cr blocks served with BPS 40 0 1600220233 0
275 gc current blocks served with BPS 40 0 1004484383 0
278 gc cr blocks received with BPS 40 0 3270643842 0
281 gc current blocks received with BPS 40 0 301773697 0
282 gc ka grants received 40 0 912334553 0
283 gc ka grant receive time 40 0 3746639269 0
289 gc cleanout saved 40 0 4119317321 0
290 gc cleanout applied 40 0 1976898865 0
291 gc cleanout no space 40 0 522936568 0
293 gc reader bypass waits 40 0 1120557156 0
298 gc force cr disk read 40 395 1058102273 0
307 AVM files created count 128 0 1887082337 0
308 AVM files deleted count 128 0 4223523824 0
309 AVM file bytes allocated 128 0 3731650962 0
310 AVM au bytes allocated 128 0 3441520794 0
311 AVM file bytes deleted 128 0 1514042146 0
312 AVM non-flash bytes requested 128 0 1829484955 0
313 AVM flash bytes requested 128 0 965137504 0
314 AVM bytes for file maps 128 0 2904743103 0
315 AVM bytes read from flash 128 0 4263147678 0
316 AVM bytes read from disk 128 0 2004986892 0
317 AVM count when 10% of buckets in pb 128 0 652947275 0
318 AVM count when 25% of buckets in pb 128 0 3588709547 0
319 AVM count when 50% of buckets in pb 128 0 2879014823 0
320 AVM count when 75% of buckets in pb 128 0 1964315023 0
321 AVM count when 90% of buckets in pb 128 0 226051874 0
322 AVM count – borrowed from other node 128 0 4037843577 0
323 AVM count – searched in pb 128 0 4000147916 0
324 AVM spare statistic 1 128 0 47653185 0
325 AVM spare statistic 2 128 0 3191674657 0
326 AVM spare statistic 3 128 0 2665872976 0
327 AVM spare statistic 4 128 0 2816010972 0
328 AVM spare statistic 5 128 0 4250363583 0
329 AVM spare statistic 6 128 0 3756487597 0
330 AVM spare statistic 7 128 0 2604881032 0
331 AVM spare statistic 8 128 0 176682480 0
345 storage index soft misses in bytes 8 0 2809906174 0
353 cell num smart IO sessions in rdbms block IO due to open fail 64 0 1611570469 0
363 cell num smartio automem buffer allocation attempts 64 0 145506540 0
364 cell num smartio automem buffer allocation failures 64 0 727055891 0
365 cell num smartio transient cell failures 64 0 2276204331 0
366 cell num smartio permanent cell failures 64 0 299072157 0
367 cell num bytes of IO reissued due to relocation 64 0 3754903472 0
388 recovery marker 2 0 2982845773 0
389 cvmap unavailable 2 0 3849353583 0
390 recieve buffer unavailable 2 0 3480097050 0
462 tracked transactions 128 0 4230695614 0
463 foreground propagated tracked transactions 128 0 2081753160 0
464 slave propagated tracked transactions 128 0 275867045 0
465 large tracked transactions 128 0 1755433832 0
466 very large tracked transactions 128 0 4033000846 0
467 fbda woken up 128 0 138331311 0
468 tracked rows 128 0 943642878 0
469 CLI Flush 128 73 670819718 0
470 CLI BG attempt Flush 128 73 2751550570 0
471 CLI Client Flush 128 0 2418073855 0
472 CLI Imm Wrt 128 0 47996927 0
473 CLI Buf Wrt 128 0 1466815534 0
474 CLI Thru Wrt 128 2 2721289668 0
475 CLI Prvtz Lob 128 0 1688196485 0
476 CLI SGA Alloc 128 32 2076026298 0
477 CLI BG ENQ 128 73 2537508108 0
478 CLI BG Fls done 128 2 1898500432 0
479 CLI Flstask create 128 73 4150293767 0
480 CLI bytes fls to table 128 1376 872375576 0
481 CLI bytes fls to ext 128 0 2251457522 0
482 Heatmap SegLevel – Write 128 0 2305866014 0
483 Heatmap SegLevel – Full Table Scan 128 0 3635715785 0
484 Heatmap SegLevel – IndexLookup 128 0 4088384827 0
485 Heatmap SegLevel – TableLookup 128 0 26595750 0
486 Heatmap SegLevel – Flush 128 0 3466367062 0
487 Heatmap SegLevel – Segments flushed 128 0 2885452372 0
504 KTFB alloc req 128 0 3506976771 0
505 KTFB alloc space (block) 128 0 254882839 0
506 KTFB alloc time (ms) 128 0 573758863 0
507 KTFB free req 128 25 1286187813 0
508 KTFB free space (block) 128 1528 1243401580 0
509 KTFB free time (ms) 128 266 408510199 0
510 KTFB apply req 128 16 2829590811 0
511 KTFB apply time (ms) 128 902 1827629900 0
512 KTFB commit req 128 9 2268695636 0
513 KTFB commit time (ms) 128 16659 3807444826 0
514 KTFB alloc myinst 128 0 637674164 0
515 KTFB alloc steal 128 0 3819194715 0
516 KTFB alloc search FFB 128 0 1572111054 0
522 Heatmap BlkLevel Tracked 128 0 417269865 0
523 Heatmap BlkLevel Not Tracked – Memory 128 0 3244920981 0
524 Heatmap BlkLevel Not Updated – Repeat 128 0 1235344528 0
525 Heatmap BlkLevel Flushed 128 0 3201601810 0
526 Heatmap BlkLevel Flushed to SYSAUX 128 0 153666168 0
527 Heatmap BlkLevel Flushed to BF 128 0 329477246 0
528 Heatmap BlkLevel Ranges Flushed 128 0 3869669302 0
529 Heatmap BlkLevel Ranges Skipped 128 0 120128078 0
530 Heatmap BlkLevel Flush Task Create 128 0 1236100146 0
531 Heatmap Blklevel Flush Task Count 128 0 1887039906 0
568 index compression (ADVANCED LOW) prefix change at block 128 0 1089998764 0
569 index compression (ADVANCED LOW) prefix no change at block 128 0 2879842113 0
570 index compression (ADVANCED LOW) blocks not compressed 128 0 3703793538 0
571 index compression (ADVANCED LOW) reorg avoid split 128 0 2501129012 0
573 index compression (ADVANCED HIGH) leaf block splits avoided 128 0 228768206 0
575 index compression (ADVANCED HIGH) leaf block 90_10 splits faile 128 0 3445701516 0
612 HSC OLTP Compression wide compressed row pieces 128 0 784760009 0
669 EHCC Used on ZFS Tablespace 128 0 2536989047 0
670 EHCC Used on Pillar Tablespace 128 0 3901974308 0
671 EHCC Conventional DMLs 128 0 547882683 0
672 EHCC Block Compressions 128 0 2852097326 0
673 EHCC Attempted Block Compressions 128 0 726324667 0
674 SecureFiles DBFS Link Operations 128 0 408804124 0
675 SecureFiles Move to DBFS Link 128 0 2159528439 0
676 SecureFiles Copy from DBFS Link 128 0 3313150606 0
677 SecureFiles Get DBFS Link Reference 128 0 3776855272 0
678 SecureFiles Put DBFS Link Reference 128 0 1020980477 0
679 SecureFiles Implicit Copy from DBFS Link 128 0 2864160252 0
680 SecureFiles DBFS Link streaming reads 128 0 2291010287 0
681 SecureFiles DBFS Link Overwrites 128 0 3546571658 0
682 index cmph ld, CU under-est 128 0 3487869306 0
683 index cmph ld, CU fit, add rows 128 0 3074245919 0
684 index cmph ld, CU fit 128 0 312995821 0
685 index cmph ld, CU over-est 128 0 3287792462 0
686 index cmph ld, retry in over-est 128 0 2794871331 0
687 index cmph ld, CU negative comp 128 0 747638515 0
688 index cmph ld, lf blks flushed 128 0 3933169485 0
689 index cmph ld, lf blks w/o CU 128 0 2058955770 0
690 index cmph ld, lf blks w/o unc r 128 0 1877031790 0
691 index cmph ld, lf blks w/ und CU 128 0 500852118 0
692 index cmph ld, rows compressed 128 0 2461980696 0
693 index cmph ld, rows uncompressed 128 0 1487477542 0
694 index cmph gencu, uncomp sentinals 128 0 3972713215 0
707 Number of NONE redactions 1 0 2910416594 0
708 Number of FULL redactions 1 0 4021003316 0
709 Number of PARTIAL redactions 1 0 2340397149 0
710 Number of FORMAT_PRESERVING redactions 1 0 2739332778 0
711 Number of RANDOM redactions 1 0 2308447938 0
712 Number of REGEXP redactions 1 0 3081010860 0
795 OLAP Paging Manager Cache Hit 64 0 249788237 0
796 OLAP Paging Manager Cache Miss 64 0 2631123639 0
797 OLAP Paging Manager New Page 64 0 1639856938 0
798 OLAP Paging Manager Cache Write 64 0 2077400790 0
799 OLAP Session Cache Hit 64 0 3766195924 0
800 OLAP Session Cache Miss 64 0 1569481295 0
801 OLAP Aggregate Function Calc 64 0 3109348342 0
802 OLAP Aggregate Function Precompute 64 0 352609299 0
803 OLAP Aggregate Function Logical NA 64 0 2269374713 0
804 OLAP Paging Manager Pool Size 64 0 3621573995 0
805 OLAP Import Rows Pushed 64 0 3846608240 0
806 OLAP Import Rows Loaded 64 0 2782483173 0
807 OLAP Row Source Rows Processed 64 0 1032576542 0
808 OLAP Engine Calls 64 0 4076583183 0
809 OLAP Temp Segments 64 0 3547622716 0
810 OLAP Temp Segment Read 64 0 1927042645 0
811 OLAP Perm LOB Read 64 0 2809117898 0
812 OLAP Paging Manager Cache Changed Page 64 0 2200669834 0
813 OLAP Fast Limit 64 0 283242358 0
814 OLAP GID Limit 64 0 1120107350 0
815 OLAP Unique Key Attribute Limit 64 0 3812252850 0
816 OLAP INHIER Limit 64 0 2844959843 0
817 OLAP Full Limit 64 0 2189109011 0
818 OLAP Custom Member Limit 64 0 3030144806 0
819 OLAP Row Id Limit 64 0 3437716459 0
820 OLAP Limit Time 64 0 2592657924 0
821 OLAP Row Load Time 64 0 953132701 0

[12c New Feature] The 12cR1 ROWID I/O Batching Feature


Before introducing this 12cR1 optimizer feature, let's first look at the following example:

 

SQL> create table sample nologging tablespace users as select rownum t1  from dual  connect by level<=900000;  

Table created.  

SQL> alter table sample add t2 number;

Table altered.

update sample set t2=dbms_random.value(1,999999);

900000 rows updated.

SQL> commit;
Commit complete.

SQL> create index ind_t1 on sample(t1) nologging tablespace users;
Index created.

SQL> create index ind_t2 on sample(t2) nologging tablespace users;
Index created.

SQL> exec dbms_stats.gather_table_stats(USER,'SAMPLE',cascade=>TRUE);
PL/SQL procedure successfully completed.

SQL> select blocks,NUM_ROWS from dba_tables where table_name='SAMPLE';

    BLOCKS   NUM_ROWS
---------- ----------
      9107     902319

SQL> select CLUSTERING_FACTOR,LEAF_BLOCKS,DISTINCT_KEYS,index_name from dba_indexes where table_name='SAMPLE';

CLUSTERING_FACTOR LEAF_BLOCKS DISTINCT_KEYS INDEX_NAME
----------------- ----------- ------------- ------------------------------
             1370        2004        900000 IND_T1
           899317        4148        900000 IND_T2

alter session set events '10046 trace name context forever,level 12';

set autotrace traceonly;

alter system flush buffer_cache;

alter session set "_optimizer_batch_table_access_by_rowid"=true;

 select /*+ index(sample ind_t2) */ * from sample where t2 between 1 and 999997;

 select /*+ index(sample ind_t2) */ *
from
 sample where t2 between 1 and 999997

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    60001      4.68       8.56      12754    1810330          0      899999
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    60003      4.68       8.56      12754    1810330          0      899999

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
    899999     899999     899999  TABLE ACCESS BY INDEX ROWID BATCHED SAMPLE (cr=1810330 pr=12754 pw=0 time=20413784 us cost=903657 size=24300000 card=900000)
    899999     899999     899999   INDEX RANGE SCAN IND_T2 (cr=63873 pr=4150 pw=0 time=4655140 us cost=4155 size=0 card=900000)(object id 92322)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   60001        0.00          0.32
  Disk file operations I/O                        1        0.00          0.00
  db file sequential read                     11388        0.00          1.70
  SQL*Net message from client                 60001        0.00          8.95
  db file parallel read                         197        0.00          0.00

 alter system flush buffer_cache;

alter session set "_optimizer_batch_table_access_by_rowid"=false;

 select /*+ index(sample ind_t2) */ * from sample where t2 between 1 and 999997;

 select /*+ index(sample ind_t2) */ *
from
 sample where t2 between 1 and 999997

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    60001      4.70       8.82      12754    1810333          0      899999
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    60003      4.70       8.82      12754    1810333          0      899999

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
    899999     899999     899999  TABLE ACCESS BY INDEX ROWID SAMPLE (cr=1810333 pr=12754 pw=0 time=25464232 us cost=903657 size=24300000 card=900000)
    899999     899999     899999   INDEX RANGE SCAN IND_T2 (cr=63874 pr=4150 pw=0 time=4404956 us cost=4155 size=0 card=900000)(object id 92322)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   60001        0.00          0.32
  db file sequential read                     12754        0.00          1.85
  SQL*Net message from client                 60001        0.00          8.95

 

 

We now see an unfamiliar operation, TABLE ACCESS BY INDEX ROWID BATCHED; note that the BATCHED keyword did not exist in earlier versions.

 

Keep in mind that TABLE ACCESS BY ROWID is the common operation of taking the necessary ROWIDs from a child row source (an index, for example) and using them to locate the matching rows in the table and fetch their data. If a row's block is not in the buffer cache, the rowid access has to wait for the required I/O to complete before it can move on to the next ROWID. In many scenarios this I/O latency becomes a major bottleneck: since range scans, full scans and access by ROWID all use db file sequential read by default, a plan that has to fetch a large number of rows whose blocks happen not to be in memory often ends up slower overall, and with more logical reads, than a full table scan.

 

Table access by ROWID driven by a child row source is most commonly needed in the following three scenarios:

  1. Index range scans
  2. Bitmap index plans
  3. Nested loop joins

 

So Oracle's developers decided to use prefetching of the data source to improve performance: walk the ROWIDs to work out which I/O operations are still needed and read those blocks in ahead of time. The implementation appears to buffer the ROWIDs obtained from the driving row source, walk those buffered ROWIDs to find the blocks that require physical reads, and then use a vectored I/O operation (for example the db file parallel read seen in the trace above) to prefetch those blocks into the buffer cache, so that by the time TABLE ACCESS BY ROWID runs, the required blocks (mainly table blocks) are already cached.

This I/O batching feature can significantly reduce the performance cost of I/O latency, but it does not help in every scenario. The number of ROWIDs that can actually be buffered is limited, and all of them are copied into the buffer without knowing in advance which ones will need I/O and which will not; if the buffered ROWIDs end up requiring only a small amount of I/O, the improvement from I/O batching is minimal. And if every block behind those ROWIDs is already in memory and no I/O is needed at all, the prefetching is wasted effort that merely burns extra CPU time.

 

The optimizer parameter that currently controls this feature is _optimizer_batch_table_access_by_rowid; its two values, TRUE and FALSE, determine whether table access by ROWID I/O batching is enabled.
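
The hidden parameter and its current value can be read with the usual x$ksppi / x$ksppcv query, sketched below (a SYS connection is required, and hidden parameters should normally only be changed under Oracle Support's guidance):

select a.ksppinm parameter_name, b.ksppstvl parameter_value, a.ksppdesc description
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_optimizer_batch_table_access_by_rowid';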

 

The feature can also be controlled with the two hints BATCH_TABLE_ACCESS_BY_ROWID and NO_BATCH_TABLE_ACCESS_BY_ROWID; the hints take precedence over the _optimizer_batch_table_access_by_rowid parameter. That said, testing on 12.1.0.1.0 shows the hints still have some issues.

 

 

 

SQL> select * from V$VERSION where rownum=1;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0

  1* select name from v$SQL_HINT where name like '%BATCH%'

NAME
----------------------------------------------------------------
NLJ_BATCHING
NO_NLJ_BATCHING
BATCH_TABLE_ACCESS_BY_ROWID
NO_BATCH_TABLE_ACCESS_BY_ROWID

SQL> alter session set "_optimizer_batch_table_access_by_rowid"=true;

Session altered.

SQL>   select /*+     index(sample ind_t2)  NO_BATCH_TABLE_ACCESS_BY_ROWID */ * from sample where t2 between 1 and 999997;

899999 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3882332507

----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |        |   900K|    23M|   903K  (1)| 00:00:36 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SAMPLE |   900K|    23M|   903K  (1)| 00:00:36 |
|*  2 |   INDEX RANGE SCAN                  | IND_T2 |   900K|       |  4155   (1)| 00:00:01 |
----------------------------------------------------------------------------------------------

 

Who Is Studying Oracle 12c, According to Google Trends


google trends

 

As the chart above shows, searches for 12c saw a small spike around OOW 2012, then hit an inflection point in June 2013 and have been climbing ever since; search volume is now on par with "Oracle 11g".

 

By region, for both 12c and 11g the most interested areas are consistently the Indian states of Karnataka and Andhra Pradesh, and the city of Bangalore.

Our Indian friends live up to their reputation as IT front-runners: their research into Oracle 12c is at the very front of the pack. Hats off to India's IT industry!

 

12c regional

 

12c regional2

 

 

 

Within the United States, interest is concentrated in two states: California and Massachusetts.

12c

12c Pluggable Database / Container Database Feature Series


 

 

Below are some explorations of the 12c Pluggable Database / Container Database feature:

Day One:

Environment: 12.1.0.1, single instance, DB_NAME=MAC, with two Pluggable Databases: MACP1 and MACP2

 

 

oracle@localhost:~$ ps -ef|grep MAC

oracle    4491     1  0 10:21 ?        00:00:00 ora_pmon_MAC

oracle    4495     1  0 10:21 ?        00:00:00 ora_psp0_MAC

oracle    4499     1  8 10:21 ?        00:00:07 ora_vktm_MAC

oracle    4505     1  0 10:21 ?        00:00:00 ora_gen0_MAC

oracle    4509     1  0 10:21 ?        00:00:00 ora_mman_MAC

oracle    4517     1  0 10:21 ?        00:00:00 ora_diag_MAC

oracle    4521     1  0 10:21 ?        00:00:00 ora_dbrm_MAC

oracle    4525     1  0 10:21 ?        00:00:00 ora_dia0_MAC

oracle    4529     1  0 10:21 ?        00:00:00 ora_dbw0_MAC

oracle    4533     1  0 10:21 ?        00:00:00 ora_lgwr_MAC

oracle    4537     1  0 10:21 ?        00:00:00 ora_ckpt_MAC

oracle    4541     1  0 10:21 ?        00:00:00 ora_smon_MAC

oracle    4545     1  0 10:21 ?        00:00:00 ora_reco_MAC

oracle    4549     1  0 10:21 ?        00:00:00 ora_lreg_MAC

oracle    4553     1  1 10:21 ?        00:00:01 ora_mmon_MAC

oracle    4557     1  0 10:21 ?        00:00:00 ora_mmnl_MAC

oracle    4561     1  0 10:21 ?        00:00:00 ora_d000_MAC

oracle    4565     1  0 10:21 ?        00:00:00 ora_s000_MAC

oracle    4591     1  0 10:21 ?        00:00:00 ora_tmon_MAC

oracle    4595     1  0 10:21 ?        00:00:00 ora_tt00_MAC

oracle    4599     1  0 10:21 ?        00:00:00 ora_smco_MAC

oracle    4603     1  0 10:21 ?        00:00:00 ora_w000_MAC

oracle    4607     1  0 10:21 ?        00:00:00 ora_aqpc_MAC

oracle    4616     1  3 10:21 ?        00:00:03 ora_p000_MAC

oracle    4620     1  5 10:21 ?        00:00:03 ora_p001_MAC

oracle    4624     1  1 10:21 ?        00:00:01 ora_p002_MAC

oracle    4628     1  0 10:21 ?        00:00:00 ora_p003_MAC

oracle    4684     1  0 10:21 ?        00:00:00 ora_qm02_MAC

oracle    4692     1  0 10:21 ?        00:00:00 ora_q002_MAC

oracle    4696     1  0 10:21 ?        00:00:00 ora_q003_MAC

oracle    4704     1  2 10:21 ?        00:00:01 ora_cjq0_MAC

oracle    4713     1  0 10:21 ?        00:00:00 ora_vkrm_MAC

oracle    4721     1  0 10:21 ?        00:00:00 ora_p004_MAC

oracle    4725     1  0 10:21 ?        00:00:00 ora_p005_MAC

oracle    4760     1  0 10:22 ?        00:00:00 ora_p006_MAC

oracle    4764     1  0 10:22 ?        00:00:00 ora_p007_MAC

oracle    4768     1  0 10:22 ?        00:00:00 ora_p008_MAC

oracle    4772     1  2 10:22 ?        00:00:00 ora_j000_MAC

oracle    4780     1  1 10:22 ?        00:00:00 ora_j002_MAC

oracle    4784     1  0 10:22 ?        00:00:00 ora_j003_MAC

oracle    4788     1  2 10:22 ?        00:00:00 ora_j001_MAC

oracle    4792     1  0 10:22 ?        00:00:00 ora_j004_MAC

oracle    4796     1  0 10:22 ?        00:00:00 ora_j005_MAC

oracle    4800     1  2 10:22 ?        00:00:00 ora_j006_MAC

oracle    4805     1  1 10:22 ?        00:00:00 ora_j007_MAC

oracle    4809     1  8 10:22 ?        00:00:01 ora_j008_MAC

oracle    4813     1  0 10:22 ?        00:00:00 ora_j009_MAC

oracle    4817     1  0 10:22 ?        00:00:00 ora_j010_MAC

oracle    4821     1  1 10:22 ?        00:00:00 ora_j011_MAC

oracle    4825     1  1 10:22 ?        00:00:00 ora_j012_MAC

oracle    4829     1  0 10:22 ?        00:00:00 ora_j013_MAC

 

oracle@localhost:~$ export ORACLE_SID=MAC

oracle@localhost:~$ sqlplus  / as sysdba

 

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 10:24:04 2013

 

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

// First, check whether this DB is a CDB, i.e. a Container Database

 

SQL> select name, cdb, con_id from v$database;

 

NAME      CDB     CON_ID

——— — ———-

MAC       YES          0

 

 

And check the instance name:

 

SQL> select INSTANCE_NAME, STATUS, CON_ID from v$instance;

 

INSTANCE_NAME    STATUS           CON_ID

—————- ———— ———-

MAC              OPEN                  0

 

 

Use V$PDBS to check the Pluggable Databases associated with this instance:

 

select con_id,dbid,name,open_mode,total_size from v$PDBS;

CON_ID       DBID NAME                           OPEN_MODE  TOTAL_SIZE

———- ———- —————————— ———- ———-

2 4062078151 PDB$SEED                       READ ONLY   283115520

3 1965483069 MACP1                          READ WRITE  288358400

4 1550789943 MACP2                          MOUNTED             0

 

 

The data in V$PDBS comes from the internal view X$CON:

 

SELECT inst_id,

       con_id,

       dbid,

       con_uid,

       guid,

       name,

       DECODE (state,

0, 'MOUNTED',

1, 'READ WRITE',

2, 'READ ONLY',

3, 'MIGRATE'),

DECODE (restricted,  0, 'NO',  1, 'YES'),

       stime,

       create_scn,

       total_size

  FROM x$con

 WHERE con_id > 1

 

 

 

And the data in X$CON should in turn come mainly from the PLUGGABLE DATABASE RECORDS section of the controlfile:

 

SQL> oradebug setmypid

Statement processed.

SQL> oradebug dump controlf 3;

Statement processed.

SQL> oradebug tracefile_name

/u01/app/oracle/diag/rdbms/mac/MAC/trace/MAC_ora_4900.trc

 

***************************************************************************

PLUGGABLE DATABASE RECORDS

***************************************************************************

(size = 684, compat size = 684, section max = 10, section in-use = 4,

last-recid= 13, old-recno = 0, last-recno = 0)

(extent = 1, blkno = 540, numrecs = 10)

Pluggable DataBase record=1

id=1

dbid=797473714

name=CDB$ROOT

first datafile link=1

pdbinc=0, pdbrdi=0, status=0x00000000, flags=0x00000000

incrcv scn scn: 0x0000.00000000, clnscn scn: 0x0000.00000000, crescn scn: 0x0000.00000000

dbrls scn: 0x0000.00000000, dbrlc=0

iscn scn: 0x0000.00000000, itime=0

bscn scn: 0x0000.00000000, btime=0

escn scn: 0x0000.00000000, etime=0

Pluggable DataBase record=2

id=2

dbid=4062078151

name=PDB$SEED

first datafile link=7

pdbinc=0, pdbrdi=0, status=0x00000001, flags=0x00000000

incrcv scn scn: 0x0000.00000000, clnscn scn: 0x0000.001a80a5, crescn scn: 0x0000.001a41be

dbrls scn: 0x0000.00000000, dbrlc=0

iscn scn: 0x0000.00000000, itime=0

bscn scn: 0x0000.00000000, btime=0

escn scn: 0x0000.00000000, etime=0

Pluggable DataBase record=3

id=3

dbid=1965483069

name=MACP1

first datafile link=10

pdbinc=0, pdbrdi=0, status=0x00000000, flags=0x00000000

incrcv scn scn: 0x0000.00000000, clnscn scn: 0x0000.00000000, crescn scn: 0x0000.001a849c

dbrls scn: 0x0000.00000000, dbrlc=0

iscn scn: 0x0000.00000000, itime=0

bscn scn: 0x0000.00000000, btime=0

escn scn: 0x0000.00000000, etime=0

Pluggable DataBase record=4

id=4

dbid=1550789943

name=MACP2

first datafile link=13

pdbinc=0, pdbrdi=0, status=0x00000001, flags=0x00000000

incrcv scn scn: 0x0000.00000000, clnscn scn: 0x0000.001b04f4, crescn scn: 0x0000.001a897c

dbrls scn: 0x0000.00000000, dbrlc=0

iscn scn: 0x0000.00000000, itime=0

bscn scn: 0x0000.00000000, btime=0

escn scn: 0x0000.00000000, etime=0

 

 

 

DATA FILE #5:

name #9: /u01/app/oracle/oradata/MAC/pdbseed/system01.dbf

creation size=32000 block size=8192 status=0x80 flg=0x5 head=9 tail=9 dup=1

pdb_id 2, tablespace 0, index=6 krfil=1 prev_file_in_ts=0 prev_file_in_pdb=0

unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00

Checkpoint cnt:9 scn: 0x0000.001a80a5 06/30/2013 14:27:50

Stop scn: 0x0000.001a80a5 06/30/2013 14:27:50

Creation Checkpointed at scn:  0x0000.001a41be 06/30/2013 14:23:04

thread:1 rba:(0x1.1eb.10)

Hot Backup end marker scn: 0x0000.00000000

aux_file is NOT DEFINED

Plugged readony: NO

Plugin scnscn: 0x0000.00000000

Plugin resetlogs scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Foreign creation scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Foreign checkpoint scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Online move state: 0

 

 

DATA FILE #8:

name #12: /u01/app/oracle/oradata/MAC/MACP1/system01.dbf

creation size=32000 block size=8192 status=0xe flg=0x1 head=12 tail=12 dup=1

pdb_id 3, tablespace 0, index=9 krfil=1 prev_file_in_ts=0 prev_file_in_pdb=0

unrecoverable scn: 0x0000.00000000 01/01/1988 00:00:00

Checkpoint cnt:9 scn: 0x0000.001ed479 07/13/2013 10:22:14

Stop scn: 0xffff.ffffffff 07/01/2013 03:41:49

Creation Checkpointed at scn:  0x0000.001a849c 06/30/2013 14:28:40

thread:1 rba:(0x6.2.10)

Hot Backup end marker scn: 0x0000.00000000

aux_file is NOT DEFINED

Plugged readony: NO

Plugin scnscn: 0x0000.00000000

Plugin resetlogs scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Foreign creation scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Foreign checkpoint scn/timescn: 0x0000.00000000 01/01/1988 00:00:00

Online move state: 0

 

 

Exploring the controlfile contents, we find several new attributes:

 

  1. Plugged readony
  2. Plugin scnscn
  3. Plugin resetlogs scn/timescn
  4. Foreign creation scn/timescn
  5. Foreign checkpoint scn/timescn
  6. Online move state

 

 

SQL> select TYPE FROM V$CONTROLFILE_RECORD_SECTION where type like '%PDB%';

 

TYPE

—————————-

PDB RECORD

PDBINC RECORD

 

 

The following test shows that V$PDBS is indeed populated from the controlfile, but that TOTAL_SIZE only becomes available after the pluggable database is opened:

 

SQL> shutdown abort;

ORACLE instance shut down.

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  835104768 bytes

Fixed Size                  2293880 bytes

Variable Size             562040712 bytes

Database Buffers          268435456 bytes

Redo Buffers                2334720 bytes

Database mounted.

SQL> select con_id,dbid,name,open_mode,total_size from v$PDBS;

 

CON_ID       DBID NAME                           OPEN_MODE  TOTAL_SIZE

———- ———- —————————— ———- ———-

2 4062078151 PDB$SEED                       MOUNTED             0

3 1965483069 MACP1                          MOUNTED             0

4 1550789943 MACP2                          MOUNTED

 

 

SQL> alter pluggable database  MACP1 open;

alter pluggable database  MACP1 open

*

ERROR at line 1:

ORA-01109: database not open

 

 

SQL> alter database open;

 

Database altered.

 

SQL> select con_id,dbid,name,open_mode,total_size from v$PDBS;

 

CON_ID       DBID NAME                           OPEN_MODE  TOTAL_SIZE

———- ———- —————————— ———- ———-

2 4062078151 PDB$SEED                       READ ONLY   283115520

3 1965483069 MACP1                          MOUNTED             0

4 1550789943 MACP2                          MOUNTED             0

 

SQL> alter pluggable database  MACP1 open;

 

Pluggable database altered.

 

SQL> select con_id,dbid,name,open_mode,total_size from v$PDBS;

 

CON_ID       DBID NAME                           OPEN_MODE  TOTAL_SIZE

———- ———- —————————— ———- ———-

2 4062078151 PDB$SEED                       READ ONLY   283115520

3 1965483069 MACP1                          READ WRITE  288358400

4 1550789943 MACP2                          MOUNTED             0

 

 

 

With the PDB MACP1 open, let's look at how services are registered with the listener:

 

 

oracle@localhost:~$ lsnrctl service

 

LSNRCTL for Linux: Version 12.1.0.1.0 – Production on 13-JUL-2013 10:40:20

 

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

 

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

Services Summary...

Service "MAC" has 1 instance(s).

Instance "MAC", status READY, has 1 handler(s) for this service...

Handler(s):

"DEDICATED" established:0 refused:0 state:ready

LOCAL SERVER

Service "MACXDB" has 1 instance(s).

Instance "MAC", status READY, has 1 handler(s) for this service...

Handler(s):

"D000" established:0 refused:0 current:0 max:1022 state:ready

DISPATCHER <machine: localhost.localdomain, pid: 5139>

(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.localdomain)(PORT=43229))

Service "macp1" has 1 instance(s).

Instance "MAC", status READY, has 1 handler(s) for this service...

Handler(s):

"DEDICATED" established:0 refused:0 state:ready

LOCAL SERVER

Service "macp2" has 1 instance(s).

Instance "MAC", status READY, has 1 handler(s) for this service...

Handler(s):

"DEDICATED" established:0 refused:0 state:ready

LOCAL SERVER

The command completed successfully

 

 

We can see that with the pluggable database MACP1 open, there is an instance READY for its service. Let's try connecting, starting with CDB$ROOT.

 

The current container information can be obtained with SQL*Plus SHOW CON_ID or SHOW CON_NAME,



or with sys_context('userenv','CON_NAME') and sys_context('userenv','CON_ID'):

 

 

SQL> SELECT sys_context('userenv','CON_NAME') from dual;

 

SYS_CONTEXT('USERENV','CON_NAME')

——————————————————————————–

CDB$ROOT

 

SQL> SELECT sys_context('userenv','CON_ID') from dual;

 

SYS_CONTEXT('USERENV','CON_ID')

——————————————————————————–

1

 

SQL> show CON_ID

 

CON_ID

——————————

1

SQL> show CON_NAME

 

CON_NAME

——————————

CDB$ROOT

 

 

 

oracle@localhost:~$ sqlplus  sys/oracle@localhost:1521/MAC as sysdba

 

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 10:42:34 2013

 

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

 

SQL> SHOW CON_id

 

CON_ID

——————————

1

 

SQL> show con_name

 

CON_NAME

——————————

CDB$ROOT

 

 

SQL> select name from v$pdbs;

 

NAME

——————————

PDB$SEED

MACP1

MACP2

 

SQL> SELECT PROGRAM,CON_ID FROM V$SESSION

PROGRAM                                              CON_ID

———————————————— ———-

oracle@localhost.localdomain (PMON)                       0

oracle@localhost.localdomain (PSP0)                       0

oracle@localhost.localdomain (VKTM)                       0

oracle@localhost.localdomain (GEN0)                       0

oracle@localhost.localdomain (MMAN)                       0

oracle@localhost.localdomain (RECO)                       0

oracle@localhost.localdomain (DIAG)                       0

oracle@localhost.localdomain (DBRM)                       0

oracle@localhost.localdomain (DIA0)                       0

oracle@localhost.localdomain (DBW0)                       0

oracle@localhost.localdomain (LGWR)                       0

oracle@localhost.localdomain (CKPT)                       0

oracle@localhost.localdomain (SMON)                       0

oracle@localhost.localdomain (LREG)                       0

oracle@localhost.localdomain (MMON)                       0

oracle@localhost.localdomain (MMNL)                       0

oracle@localhost.localdomain (CJQ0)                       0

oracle@localhost.localdomain (TMON)                       0

oracle@localhost.localdomain (TT00)                       0

oracle@localhost.localdomain (SMCO)                       0

oracle@localhost.localdomain (AQPC)                       0

oracle@localhost.localdomain (W000)                       0

sqlplus@localhost.localdomain (TNS V1-V3)                 1

oracle@localhost.localdomain (QM02)                       0

oracle@localhost.localdomain (Q002)                       0

oracle@localhost.localdomain (Q003)                       0

oracle@localhost.localdomain (VKRM)                       0

 

27 rows selected.

 

SQL> select count(*) from tab$;

 

COUNT(*)

———-

2372

 

SQL> select count(*) from dba_tables;

 

COUNT(*)

———-

2325

 

SQL>select count(*) from cdb_tables;

 

COUNT(*)

———-

6957

 

 

 

Use ALTER SESSION to switch the current container to MACP1:

 

SQL> alter session set container=MACP1;

 

Session altered.

 

SQL> show con_id

 

CON_ID

——————————

3

 

 

SQL> show con_name

 

CON_NAME

——————————

MACP1

 

 

 

SQL> select name from v$pdbs;

 

NAME

——————————

MACP1

 

SQL> select name from v$database;

 

NAME

———

MAC

 

SQL> SELECT PROGRAM,CON_ID FROM V$SESSION where CON_ID!=0;

 

PROGRAM                                              CON_ID

———————————————— ———-

sqlplus@localhost.localdomain (TNS V1-V3)                 3

 

 

Meanwhile, querying V$SESSION from CDB$ROOT shows:

 

SQL>  SELECT PROGRAM,CON_ID FROM V$SESSION where CON_ID!=0;

 

PROGRAM                                              CON_ID

———————————————— ———-

sqlplus@localhost.localdomain (TNS V1-V3)                 3

sqlplus@localhost.localdomain (TNS V1-V3)                 1

 

It is clear that the V$ views in a CDB are designed so that a user can tell that they are in a Pluggable Database, but cannot see what is going on in other, non-local Pluggable Databases.

 

The internal view X$CON shows that this behavior, where V$ views inside a PDB only expose information about the local container, is implemented at the underlying X$ view layer.

 

 

SQL> show con_name

 

CON_NAME

——————————

MACP1

 

SQL> select name from x$CON;

 

NAME

——————————

MACP1

 

 

 

 

 

SQL> show con_name

 

CON_NAME

——————————

CDB$ROOT

 

 

SQL>  select name from x$CON;

 

NAME

——————————

CDB$ROOT

PDB$SEED

MACP1

MACP2

 

 

 

SQL>  select count(*) from tab$;

 

COUNT(*)

———-

2363

 

SQL> select count(*) from dba_tables;

 

COUNT(*)

———-

2316

 

SQL> select count(*) from cdb_tables;

 

COUNT(*)

———-

2316

 

 

 

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

 

oracle@localhost:~$ sqlplus  sys/oracle@localhost:1521/MACP1 as sysdba

 

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 10:44:41 2013

 

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

 

SQL> show con_name

 

CON_NAME

——————————

MACP1

SQL> show con_id

 

CON_ID

——————————

3

 

 

oracle@localhost:~$ sqlplus  sys/oracle@localhost:1521/MACP2 as sysdba

 

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 11:01:32 2013

 

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> show con_id

 

CON_ID

——————————

4

SQL> show con_name

 

CON_NAME

——————————

MACP2

SQL> select count(*) from tab;

select count(*) from tab

*

ERROR at line 1:

ORA-01219: database or pluggable database not open: queries allowed on fixed

tables or views only

 

 

oracle@localhost:~$ oerr ora 1219

01219, 00000, "database or pluggable database not open: queries allowed on fixed tables or views only"

// *Cause:  A query was issued against an object not recognized as a fixed

//          table or fixed view before the database or pluggable database has

//          been opened.

// *Action: Re-phrase the query to include only fixed objects, or open the

//          database or pluggable database.

 

 

Because the container MACP2 has not been opened yet, its tables cannot be queried and we hit the ORA-01219 error; the PDB needs to be opened first, as sketched below.
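A minimal sketch of opening the mounted PDB (using the names from this post), either from the root container or from inside the PDB itself:

SQL> alter pluggable database MACP2 open;

-- or, after switching into the container:
SQL> alter session set container=MACP2;
SQL> alter pluggable database open;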

 

 

 

 

oracle@localhost:~$ sqlplus  / as sysdba

 

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 11:03:02 2013

 

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

SQL> select name, con_id from v$services;

 

NAME                                                                 CON_ID

—————————————————————- ———-

macp2                                                                     4

macp1                                                                     3

MACXDB                                                                    1

MAC                                                                       1

SYS$BACKGROUND                                                            1

SYS$USERS                                                                 1

 

6 rows selected.

 

 

 

The view CDB_PDBS also records information about the PDBs:

 

SQL> desc cdb_pdbs

Name                                      Null?    Type

—————————————– ——– —————————-

PDB_ID                                    NOT NULL NUMBER

PDB_NAME                                  NOT NULL VARCHAR2(128)

DBID                                      NOT NULL NUMBER

CON_UID                                   NOT NULL NUMBER

GUID                                               RAW(16)

STATUS                                             VARCHAR2(13)

CREATION_SCN                                       NUMBER

CON_ID                                             NUMBER

 

SQL> col PDB_NAME format a8

SQL> col CON_ID format 999999

SQL> select PDB_ID, PDB_NAME, DBID, GUID, CON_ID from cdb_pdbs;

 

PDB_ID PDB_NAME       DBID GUID                              CON_ID

———- ——– ———- ——————————– ——-

3 MACP1    1965483069 E060EC0DEFDA2A4CE0430100007FADFA       1

2 PDB$SEED 4062078151 E060D99341542648E0430100007F6B20       1

4 MACP2    1550789943 E060EF86B0C22ABFE0430100007F6F3C       1

 

 

 

 

SQL> select text from dba_VIEWS where view_name='CDB_PDBS';

 

TEXT

——————————————————————————–

SELECT "PDB_ID","PDB_NAME","DBID","CON_UID","GUID","STATUS","CREATION_SCN","CON_

ID" FROM CDB$VIEW("SYS"."DBA_PDBS")

 

 

SQL> select text from dba_VIEWS where view_name='DBA_PDBS';

 

TEXT

——————————————————————————–

select c.con_id#, o.name, c.dbid, c.con_uid, o.oid$,

decode(c.status, 0, 'UNUSABLE', 1, 'NEW', 2, 'NORMAL', 3, 'UNPLUGGED', 4, 'NEED

S SYNC',

5, 'NEEDS UPGRADE', 6, 'CONVERTING', 'UNDEFINED'),

c.create_scnwrp*power(2,32)+c.create_scnbas

from sys.container$ c, sys.obj$ o

where o.obj# = c.obj# and c.con_id# > 1

 

 

CDB_PDBS is based on CDB$VIEW("SYS"."DBA_PDBS"), while DBA_PDBS is based on the dictionary object container$; the CONTAINER$ base table is defined in $ORACLE_HOME/rdbms/admin/dcore.bsq:

 

create table container$

(

obj#            number not null,         /* Object number for the container */

con_id#         number not null,                            /* container ID */

dbid            number not null,                             /* database ID */

con_uid         number not null,                               /* unique ID */

status          number not null,                       /* active, plugged…*/

create_scnwrp   number not null,                       /* creation scn wrap */

create_scnbas   number not null,                       /* creation scn base */

clnscnwrp       number,    /* clean offline scn – zero if not offline clean */

clnscnbas       number,       /* clnscnbas – scn base, clnscnwrp – scn wrap */

rdba            number not null,                 /*  r-dba of the container */

flags           number,                                            /* flags */

spare1          number,                                            /* spare */

spare2          number,                                            /* spare */

spare3          varchar2("M_IDEN"),                                /* spare */

spare4          varchar2("M_IDEN")                                 /* spare */

)

/

 

CREATE UNIQUE INDEX i_container1 ON container$(obj#)

/

 

CREATE UNIQUE INDEX i_container2 ON container$(con_id#)

/

 

CREATE UNIQUE INDEX i_container3 ON container$(con_uid)

/


 

create table cdb_file$                    /* file table in a consolidated db */

(

file#         number not null,                    /* file identifier number */

con_id#       number not null,                              /* container ID */

mtime         date,                        /* time it was created, modified */

spare1        number,                                              /* spare */

spare2        number,                                              /* spare */

spare3        number,                                              /* spare */

spare4        number,                                              /* spare */

f_afn         number not null,              /* foreign absolute file number */

f_dbid        number not null,                       /* foreign database id */

f_cpswrp      number not null,                    /* foreign checkpoint scn */

f_cpsbas      number not null,

f_prlswrp     number not null,              /* foreign plugin resetlogs scn */

f_prlsbas     number not null,

f_prlstim     number not null              /* foreign plugin resetlogs time */

)

/

 

CREATE UNIQUE INDEX i_cdbfile1 ON cdb_file$(file#, con_id#)

 

REM to provide PDB lineage information.

 

create table pdb_history$

(

name       varchar2("M_IDEN") not null,                  /* Name of the PDB */

con_id#    number  not null,                                /* Container ID */

dbid       number  not null,                                 /* DBID of PDB */

guid       raw(16) not null,                                 /* GUID of PDB */

scnbas     number  not null,             /* SCN base when operation occured */

scnwrp     number  not null,             /* SCN wrap when operation occured */

time       date    not null,                 /* time when operation occured */

operation  varchar2(16)  not null,   /* CREATE, CLONE, UNPLUG, PLUG, RENAME */

db_version number  not null,                            /* Database version */

c_pdb_name varchar2("M_IDEN"),             /* Created, Cloned from PDB name */

c_pdb_dbid number,                         /* Created, Cloned from PDB DBID */

c_pdb_guid raw(16),                        /* Created, Cloned from PDB GUID */

c_db_name  varchar2("M_IDEN_128"),            /* Created, Cloned in DB name */

c_db_uname varchar2("M_IDEN"),         /* Created, Cloned in DB unique name */

c_db_dbid  number,                               /* Created, Cloned in DBID */

clonetag   varchar2(128),                                 /* Clone tag name */

spare1     number,                                                 /* spare */

spare2     number,                                                 /* spare */

spare3     varchar2("M_IDEN"),                                     /* spare */

spare4     varchar2("M_IDEN")                                      /* spare */

)

/

 

 

 

 

 

SQL> create table container_backup as select * from sys.container$;

 

Table created.

 

SQL> truncate table sys.container$;

 

Table truncated.

 

SQL> select PDB_ID, PDB_NAME, DBID, GUID, CON_ID from cdb_pdbs;

 

no rows selected

 

 

Although container$ is not yet a bootstrap dictionary object, corrupting the data in this base table still makes it impossible to open CDB$ROOT:

 

SQL> create table container_backup as select * from sys.container$;

 

Table created.

 

Insert into sys.container$ select * from container_backup;

 

SQL> truncate table sys.container$;

 

Table truncated.

 

SQL> select PDB_ID, PDB_NAME, DBID, GUID, CON_ID from cdb_pdbs;

 

no rows selected

 

SQL> shutdown abort;

ORACLE instance shut down.

 

 

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  835104768 bytes

Fixed Size                  2293880 bytes

Variable Size             562040712 bytes

Database Buffers          268435456 bytes

Redo Buffers                2334720 bytes

Database mounted.

SQL> show con_name

 

CON_NAME

——————————

CDB$ROOT

SQL> alter database open ;

alter database open

*

ERROR at line 1:

ORA-01092: ORACLE instance terminated. Disconnection forced

ORA-00600: internal error code, arguments: [kpdbLoadCbk-bad-obj#], [185], [],

[], [], [], [], [], [], [], [], []

Process ID: 6250

Session ID: 1 Serial number: 5

 

 

 

Recovery sets nab of thread 1 seq 33 to 88 with 8 zeroblks

Count of ofsmtab$: 0 entries

07/13/2013 11:16:12 07/13/2013 11:16:122013-07-13 11:16:12.705: [  OCRMSG]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)

2013-07-13 11:16:12.705: [  OCRMSG]GIPC error [29] msg [gipcretConnectionRefused]

2013-07-13 11:16:12.705: [  OCRMSG]prom_connect: error while waiting for connection complete [24]

2013-07-13 11:16:12.706: [  OCRMSG]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)

2013-07-13 11:16:12.706: [  OCRMSG]GIPC error [29] msg [gipcretConnectionRefused]

2013-07-13 11:16:12.706: [  OCRMSG]prom_connect: error while waiting for connection complete [24]

2013-07-13 11:16:12.708: [  OCRMSG]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)

2013-07-13 11:16:12.708: [  OCRMSG]GIPC error [29] msg [gipcretConnectionRefused]

2013-07-13 11:16:12.708: [  OCRMSG]prom_connect: error while waiting for connection complete [24]

2013-07-13 11:16:12.709: [  OCRMSG]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)

2013-07-13 11:16:12.709: [  OCRMSG]GIPC error [29] msg [gipcretConnectionRefused]

2013-07-13 11:16:12.709: [  OCRMSG]prom_connect: error while waiting for connection complete [24]

 

*** 2013-07-13 11:16:12.969

Incident 31361 created, dump file: /u01/app/oracle/diag/rdbms/mac/MAC/incident/incdir_31361/MAC_ora_6250_i31361.trc

ORA-00600: internal error code, arguments: [kpdbLoadCbk-bad-obj#], [185], [], [], [], [], [], [], [], [], [], []

 

ORA-00600: internal error code, arguments: [kpdbLoadCbk-bad-obj#], [185], [], [], [], [], [], [], [], [], [], []

ORA-00600: internal error code, arguments: [kpdbLoadCbk-bad-obj#], [185], [], [], [], [], [], [], [], [], [], []

 

 

 

Stack call:

kpdbLockPdbSwitch => kglget => kglLock => kgllkal => kglLoadOnLock => kpdbLoadCbk => error raised

 

We can see that the kernel has gained a new module named KPDB.

 

 

 

[Oracle Database 12c New Feature] ORACLE_MAINTAINED


ORACLE_MAINTAINED is a new column added to a series of views in Oracle 12c. It indicates that the object or user was generated by Oracle-supplied scripts, i.e. that it is an Oracle-supplied object.

 

ORACLE_MAINTAINED VARCHAR2(1) Denotes whether the object was created, and is maintained, by Oracle-supplied scripts (such as catalog.sql or catproc.sql). An object for which this column has the value Y must not be changed in any way except by running an Oracle-supplied script.

Let's see which views have this column:

oracle@localhost:/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin$ grep -i "ORACLE_MAINTAINED" *|grep comment
cdcore.sql:comment on column USER_OBJECTS.ORACLE_MAINTAINED is
cdcore.sql:comment on column ALL_OBJECTS.ORACLE_MAINTAINED is
cdcore.sql:comment on column DBA_OBJECTS.ORACLE_MAINTAINED is
cdcore.sql:comment on column USER_OBJECTS_AE.ORACLE_MAINTAINED is
cdcore.sql:comment on column ALL_OBJECTS_AE.ORACLE_MAINTAINED is
cdcore.sql:comment on column DBA_OBJECTS_AE.ORACLE_MAINTAINED is
cdenv.sql:comment on column USER_USERS.ORACLE_MAINTAINED is
cdenv.sql:comment on column ALL_USERS.ORACLE_MAINTAINED is
cdenv.sql:comment on column DBA_USERS.ORACLE_MAINTAINED is
cdsec.sql:comment on column DBA_ROLES.ORACLE_MAINTAINED is

DBA_USERS, DBA_OBJECTS, DBA_OBJECTS_AE and the related ALL_ and USER_ views all have the ORACLE_MAINTAINED column.

The following are the Oracle-maintained usernames:


  1* select username from dba_users where ORACLE_MAINTAINED='Y'
SQL> /

USERNAME
--------------------------------------------------------------------------------------------------------------------------------
AUDSYS
GSMUSER
SPATIAL_WFS_ADMIN_USR
SPATIAL_CSW_ADMIN_USR
APEX_PUBLIC_USER
SYSDG
DIP
SYSBACKUP
MDDATA
GSMCATUSER
SYSKM
XS$NULL
OJVMSYS
ORACLE_OCM
OLAPSYS
SI_INFORMTN_SCHEMA
DVSYS
ORDPLUGINS
XDB
ANONYMOUS
CTXSYS
ORDDATA
GSMADMIN_INTERNAL
APPQOSSYS
APEX_040200
WMSYS
DBSNMP
ORDSYS
MDSYS
DVF
FLOWS_FILES
SYS
SYSTEM
OUTLN
LBACSYS

[Oracle Database 12c New Feature] Information Lifecycle Management (ILM) and Storage Enhancements


Oracle Database 12c introduces Information Lifecycle Management (ILM) and Storage Enhancements.

One of the most important parts of ILM is Automatic Data Placement, ADP for short; a hedged policy sketch follows below.
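Presumably ADP builds on Heat Map usage tracking together with ILM (ADO) policies; the following is only a hedged sketch under that assumption, with SALES as a hypothetical table, so the exact syntax should be verified against the 12c documentation:

-- enable usage (heat map) tracking
ALTER SYSTEM SET heat_map = ON;

-- compress segments untouched for 30 days
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;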

On the storage enhancement side, 12c introduces Online Move Datafile, which allows a datafile containing data to be moved between storage locations online while the database stays open and keeps accessing the file; a syntax sketch follows.
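A minimal sketch of the new ALTER DATABASE MOVE DATAFILE syntax (the paths here are hypothetical):

SQL> alter database move datafile '/u01/app/oracle/oradata/MAC/users01.dbf'
  2    to '/u02/oradata/MAC/users01.dbf';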

 


[Oracle Database 12c New Feature] wait event DISPLAY_NAME


Oracle Database 12c adds the DISPLAY_NAME column to the V$EVENT_NAME view; this column gives a more descriptive explanation of the corresponding wait event:

 

DISPLAY_NAME VARCHAR2(64) A clearer and more descriptive name for the wait event that appears in the NAME column. Names that appear in the DISPLAY_NAME column can change across Oracle Database releases, therefore customer scripts should not rely on names that appear in the DISPLAY_NAME column across releases.

 

Unfortunately, not every event currently has a corresponding DISPLAY_NAME. The display names present in 12.1.0.1 are listed below:

 

select name,display_name,wait_class from v$event_name  where name!=display_name order by name

 

NAME                                 DISPLAY_NAME                                 WAIT_CLASS
------------------------------------ -------------------------------------------- --------------
DFS db file lock                     quiesce for datafile offline                 Other
Image redo gen delay                 redo resource management                     Other
LGWR real time apply sync            standby apply advance notification           Idle
concurrent I/O completion            online move datafile IO completion           Administrative
control file sequential read         control file read                            System I/O
control file single write            control file write                           System I/O
datafile copy range completion       online move datafile copy range completion   Administrative
datafile move cleanup during resize  online move datafile resize cleanup          Other
db file parallel read                db list of blocks read                       User I/O
db file parallel write               db list of blocks write                      System I/O
db file scattered read               db multiblock read                           User I/O
db file sequential read              db single block read                         User I/O
db file single write                 db single block write                        User I/O
log buffer space                     log buffer full - LGWR bottleneck            Configuration
log file parallel write              log file redo write                          System I/O
log file sequential read             log file multiblock read                     System I/O
log file single write                log file header write                        System I/O
log file sync                        commit: log file sync                        Commit
wait for possible quiesce finish     quiesce database completion                  Administrative
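A hedged example of putting DISPLAY_NAME to work, joining V$SESSION to V$EVENT_NAME through the EVENT# column (assuming the usual column names; adjust as needed):

select s.sid, e.name, e.display_name, s.wait_class
  from v$session s, v$event_name e
 where s.event# = e.event#
   and s.wait_class <> 'Idle';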

[12c Database New Feature] Adaptive Execution Plans


12c R1 introduces a new SQL optimization feature, Adaptive Execution Plans, which lets the optimizer adapt a poorly performing execution plan at runtime and avoid choosing that poor plan again in subsequent executions.

 

The SQL optimizer finalizes the execution plan it will use at runtime, which allows it to detect that the plan it initially estimated may not be optimal, so the plan can adapt automatically to actual runtime conditions. An adaptive plan is one where, after the first hard parse produces a plan, the optimizer chooses at runtime a subplan that differs from the original plan, because it decides its initial estimates were inaccurate.

 

In plain terms: even with accurate statistics, the optimizer's estimates can differ from reality, and there is no way to know this before execution. The approach now is to let the optimizer produce its "best" plan as usual, buffer the result sets from certain row sources during execution to count the actual rows, compare those counts with the optimizer's estimates, and, if they are off, change the parts of the plan downstream that can still be changed.

 

Adaptive execution plans are based on execution statistics gathered while the statement runs. All of these adaptive techniques may end up executing a plan different from the one the optimizer obtained at the initial hard parse. This is a significant improvement to the query processing engine in 12c: the optimizer now pays much more attention to what happened in past executions, learning from experience.

 

Adaptive execution plans rely mainly on the following two techniques:

 

  • Dynamic Plans: a dynamic plan chooses among multiple subplans during statement execution. For dynamic plans, the optimizer must decide which subplans to include in the dynamic plan, which execution statistics to collect in order to choose among the subplans, and the thresholds on which that choice is based.
  • Reoptimization: unlike dynamic plans, reoptimization changes the execution plan on the executions that follow the current one. For reoptimization, the optimizer must determine which statistics to collect at which steps of the original plan, and whether reoptimization is worthwhile.

 

 

adaptive execution plans

The OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter controls report-only mode for adaptive optimization. When it is set to TRUE, adaptive optimization runs in report-only mode: the information needed by adaptive optimization is collected, but no action is taken to change the execution plan. A short sketch follows.
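A minimal sketch, assuming the parameter can be set at session level:

SQL> alter session set optimizer_adaptive_reporting_only = true;

-- run the statement, then check what the adaptive plan would have been:
SQL> select * from table(dbms_xplan.display_cursor(format=>'report'));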

 

Dynamic Plans

A dynamic plan is still a single execution plan, but one with several different built-in plan options. On the first execution, before any particular subplan is activated, the optimizer makes the final decision about which option to use, based on the data it has observed up to that step of execution. With a dynamic plan, the final plan the optimizer settles on differs from the default plan obtained at hard parse; because the final plan reflects actual conditions better than the default plan, it can often improve query performance.

 

A subplan is a portion of the overall execution plan; at runtime the optimizer decides whether to switch to this alternative subplan.

While the statement executes, a statistics collector buffers a portion of the rows. The part of the plan downstream of the statistics collector may have alternative subplans, each corresponding to a different subset of the possible values returned by the collector; generally the set of values corresponding to a subplan is a range. If the statistics fall within the value range of a valid subplan that is not the default plan, the optimizer switches to that alternative subplan. Once the optimizer has chosen a subplan, the row buffering described above stops: the statistics collector stops collecting rows and simply passes them through to the rest of the plan. On subsequent executions of the child cursor, the optimizer stops buffering and uses the same final plan.

With dynamic plans, the execution plan automatically adapts away from a poor optimizer choice, correcting the plan decision during the first run.

V$SQL gains an IS_RESOLVED_DYNAMIC_PLAN column indicating that the final plan differs from the default plan. Information discovered via dynamic plans is persisted in the form of SQL plan directives.

 

 

 

declare
cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES;
begin
for z in PLAN_DIRECTIVE_IDS loop
DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id);
end loop;
end;
/

explain plan for select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

select * from table(dbms_xplan.display());

Plan hash value: 1255158658
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                        |     4 |   128 |     7   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                        |       |       |            |          |
|   2 |   NESTED LOOPS               |                        |     4 |   128 |     7   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | ORDER_ITEMS            |     4 |    48 |     3   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1)
   4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")

alter session set events '10053 trace name context forever,level 1';

OR 

alter session set events 'trace[SQL_Plan_Directive] disk highest';

select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

---------------------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name                  | Rows  | Bytes | Cost  | Time      |
---------------------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT               |                       |       |       |     7 |           |
| 1   |  HASH JOIN                     |                       |     4 |   128 |     7 |  00:00:01 |
| 2   |   NESTED LOOPS                 |                       |       |       |       |           |
| 3   |    NESTED LOOPS                |                       |     4 |   128 |     7 |  00:00:01 |
| 4   |     STATISTICS COLLECTOR       |                       |       |       |       |           |
| 5   |      TABLE ACCESS FULL         | ORDER_ITEMS           |     4 |    48 |     3 |  00:00:01 |
| 6   |     INDEX UNIQUE SCAN          | PRODUCT_INFORMATION_PK|     1 |       |     0 |           |
| 7   |    TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION   |     1 |    20 |     1 |  00:00:01 |
| 8   |   TABLE ACCESS FULL            | PRODUCT_INFORMATION   |     1 |    20 |     1 |  00:00:01 |
---------------------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1))
6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")

=====================================
SPD: BEGIN context at statement level
=====================================
Stmt: ******* UNPARSED QUERY IS *******
SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 ) OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */ "P"."PRODUCT_NAME" "PRODUCT_NAME" FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P" WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID"
Objects referenced in the statement
  PRODUCT_INFORMATION[P] 92194, type = 1
  ORDER_ITEMS[O] 92197, type = 1
Objects in the hash table
  Hash table Object 92197, type = 1, ownerid = 6573730143572393221:
    No Dynamic Sampling Directives for the object
  Hash table Object 92194, type = 1, ownerid = 17822962561575639002:
    No Dynamic Sampling Directives for the object
Return code in qosdInitDirCtx: ENBLD
===================================
SPD: END context at statement level
===================================
=======================================
SPD: BEGIN context at query block level
=======================================
Query Block SEL$1 (#0)
Return code in qosdSetupDirCtx4QB: NOCTX
=====================================
SPD: END context at query block level
=====================================
SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267
SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES
SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267
SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267
SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SKIP_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267
SPD: Modified felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES
SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300
SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO
SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149
SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO
SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051
SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO

select * from table(dbms_xplan.display_cursor(format=>'report')) ;

The adaptive plan can be inspected with the 'report' format:

Adaptive plan:

-------------

This cursor has an adaptive plan, but adaptive plans are enabled for
reporting mode only.  The plan that would be executed if adaptive plans
were enabled is displayed below.

------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          |
|*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%';

SQL_ID                     IS
-------------------------- --
SQL_TEXT
--------------------------------------------------------------------------------
6ydj1bn1bng17              Y
select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and quantity>1 and p.product_id=o.product_id

 

 

 

In the example above, EXPLAIN PLAN FOR returns the default plan, while at actual execution time the optimizer produced the final plan, and V$SQL.IS_RESOLVED_DYNAMIC_PLAN shows Y, indicating that this execution plan is a dynamic plan.

 

The DBA_SQL_PLAN_DIRECTIVES view records the SQL plan directives persisted in the system. Note that in 12c it is the MMON process, no longer SMON, that is responsible for flushing DML monitoring and column usage information to disk; MMON also writes out the plan directives held in the SGA.

They can also be flushed manually with DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE, as sketched below.
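A minimal sketch (the package notes quoted further below mention the ADMINISTER SQL MANAGEMENT OBJECT privilege):

SQL> exec dbms_spd.flush_sql_plan_directive;
SQL> select count(*) from dba_sql_plan_directives;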

 

 

 select directive_id,type,reason  from DBA_SQL_PLAN_DIRECTIVES
 /

                       DIRECTIVE_ID TYPE                             REASON
----------------------------------- -------------------------------- -----------------------------
               10321283028317893030 DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
                4757086536465754886 DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
               16085268038103121260 DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE

SQL>  set pages 9999
SQL>  set lines 300
SQL>  col state format a5
SQL>  col subobject_name format a11
SQL>  col col_name format a11
SQL>  col object_name format a13
SQL>  select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason
  2  from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
  3  where d.DIRECTIVE_ID=o.DIRECTIVE_ID
  4  and o.object_name in ('ORDER_ITEMS')
  5  order by d.directive_id;

DIRECTIVE_ID OBJECT_TYPE  OBJECT_NAME   COL_NAME    TYPE                             STATE REASON
------------ ------------ ------------- ----------- -------------------------------- ----- -------------------------------------
---
  1.8156E+19 COLUMN       ORDER_ITEMS   UNIT_PRICE  DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE
  1.8156E+19 TABLE        ORDER_ITEMS               DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE
  1.8156E+19 COLUMN       ORDER_ITEMS   QUANTITY    DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE

DBA_SQL_PLAN_DIRECTIVES的数据基于 _BASE_OPT_DIRECTIVE 和 _BASE_OPT_FINDING

SELECT d.dir_own#,
       d.dir_id,
       d.f_id,
       decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'),
       decode(state,
              1,
              'NEW',
              2,
              'MISSING_STATS',
              3,
              'HAS_STATS',
              4,
              'CANDIDATE',
              5,
              'PERMANENT',
              6,
              'DISABLED',
              'UNKNOWN'),
       decode(bitand(flags, 1), 1, 'YES', 'NO'),
       cast(d.created as timestamp),
       cast(d.last_modified as timestamp),
       -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details
       -- about 6.5
       cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day')
  FROM sys.opt_directive$ d

 

 

SQL plan directives are managed with the DBMS_SPD package; their default retention period is 53 weeks:

 

 

 

   Package: DBMS_SPD

    This package provides subprograms for managing Sql Plan
    Directives(SPD). SPD are objects generated automatically by Oracle
    server. For example, if server detects that the single table cardinality
    estimated by optimizer is off from the actual number of rows returned
    when accessing the table, it will automatically create a directive to
    do dynamic sampling for the table. When any Sql statement referencing
    the table is compiled, optimizer will perform dynamic sampling for the
    table to get more accurate estimate.

    Notes:

    DBMSL_SPD is a invoker-rights package. The invoker requires ADMINISTER
    SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of
    this package. Also the subprograms commit the current transaction (if any),
    perform the operation and commit it again.

    DBA view dba_sql_plan_directives shows all the directives created in
    the system and the view dba_sql_plan_dir_objects displays the objects that
    are included in the directives.

	  -- Default value for SPD_RETENTION_WEEKS
  SPD_RETENTION_WEEKS_DEFAULT  CONSTANT varchar2(4)    := '53';

      | STATE          : NEW             : Newly created directive.
    |                : MISSING_STATS   : The directive objects do not
    |                                    have relevant stats.
    |                : HAS_STATS       : The objects have stats.
    |                : PERMANENT       : A permanent directive. Server
    |                                    evaluated effectiveness and these
    |                                    directives are useful.
    |
    | AUTO_DROP      : YES             : Directive will be dropped
    |                                    automatically if not
    |                                    used for SPD_RETENTION_WEEKS.
    |                                    This is the default behavior.
    |                  NO              : Directive will not be dropped
    |                                    automatically.

	    Procedure: flush_sql_plan_directive

      This procedure allows manually flushing the Sql Plan directives that
      are automatically recorded in SGA memory while executing sql
      statements. The information recorded in SGA are periodically flushed
      by oracle background processes. This procedure just provides a way to
      flush the information manually.
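To adjust the retention, DBMS_SPD.SET_PREFS should work; a hedged sketch, with the preference name taken from the SPD_RETENTION_WEEKS constant above:

SQL> exec dbms_spd.set_prefs('SPD_RETENTION_WEEKS', '12');
SQL> select dbms_spd.get_prefs('SPD_RETENTION_WEEKS') from dual;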

 

 

The hidden parameter "_optimizer_dynamic_plans" (enable dynamic plans) controls dynamic plans; it defaults to TRUE, so dynamic plans are used. Setting it to FALSE switches this new optimizer feature off.
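For example (a sketch; as with any underscore parameter, this should only be used for testing or at Oracle Support's direction):

SQL> alter session set "_optimizer_dynamic_plans" = false;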

 

To be fair, the most common scenario where a dynamic plan helps is the choice between a nested loop and a hash join: if the optimizer picks a nested loop but a hash join is also viable in that case, the hash join is kept as an alternative subplan, and vice versa.

At runtime only one subplan is ultimately chosen; the others are discarded. For the join method choice, the turning point is the cardinality seen by the STATISTICS COLLECTOR: with a higher cardinality the hash join costs less than the nested loop, and vice versa. The optimizer also chooses the best access path for each subplan.

 

 

For example, in the case below, if the cardinality of rows from the sales table that match the predicate exceeds a certain threshold, a hash join is chosen and the subplan typically uses a full scan of the customers table; if a nested loop is used instead, it will most likely use an index range scan on a cust_id index plus table access by rowid. A sketch of such a query follows the figure.

 

dynamic sql plan
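A hypothetical query of that shape (sales, customers, cust_id, amount_sold and the bind :prod_id are illustrative names only, not from the original post); with adaptive plans enabled, its row source tree would carry a STATISTICS COLLECTOR above the sales scan, much like the order_items example earlier in this post:

select /*+ gather_plan_statistics */ c.cust_id, s.amount_sold
  from sales s, customers c
 where s.cust_id = c.cust_id
   and s.prod_id = :prod_id;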

 

 

 

 

 

Cardinality feedback

 

Cardinality feedback was introduced in 11.2 and implements part of the re-optimization functionality: after the first execution of a statement, cardinality feedback automatically improves plans whose cardinalities the optimizer had mis-estimated. There are many reasons the optimizer may estimate cardinality incorrectly, such as missing or inaccurate statistics, or overly complex predicates. Cardinality feedback is not suitable for tables that change frequently; it benefits queries against data volumes that do not change dramatically over time.

 

The optimizer may enable cardinality feedback monitoring in scenarios such as: tables with no statistics, multiple filter predicates on one table, or predicates containing complex operators, all of which can lead to poor selectivity estimates.

 

The feature works as follows:

While the query executes, the optimizer's estimated cardinalities are compared with the execution statistics from the actual run. If a significant discrepancy is found, a new plan is generated and used for subsequent executions. Statistics are collected on the data volumes and data types seen during execution, and if they differ significantly from the original optimizer estimates, the statement is hard parsed again at its next execution using those execution statistics. A statement is monitored only once; if no large discrepancy is found at the start, nothing changes afterwards.

Note that only single-table cardinalities are checked, so it cannot help with join cardinalities. Cardinality feedback information is stored in the cursor and is lost once the cursor is aged out.

 

 

SELECT /*+ gather_plan_statistics */ product_name    FROM   order_items
o, product_information p    WHERE  o.unit_price = 15    AND    quantity
> 1    AND    p.product_id = o.product_id

Plan hash value: 1553478007

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |     20 |       |       |          |
|*  1 |  HASH JOIN         |                     |      1 |      4 |     13 |00:00:00.01 |      24 |     20 |  2061K|  2061K|  429K (0)|
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |      4 |     13 |00:00:00.01 |       7 |      6 |       |       |          |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |      1 |    288 |00:00:00.01 |      17 |     14 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

SELECT /*+ gather_plan_statistics */ product_name    FROM   order_items
o, product_information p    WHERE  o.unit_price = 15    AND    quantity
> 1    AND    p.product_id = o.product_id

Plan hash value: 1553478007

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |       |       |          |
|*  1 |  HASH JOIN         |                     |      1 |     13 |     13 |00:00:00.01 |      24 |  2061K|  2061K|  413K (0)|
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |     13 |     13 |00:00:00.01 |       7 |       |       |          |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |    288 |    288 |00:00:00.01 |      17 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------
Note
-----
   - statistics feedback used for this statement



SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y';

  COUNT(*)
----------
         2

SQL>select sql_ID,USE_FEEDBACK_STATS  FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y';

SQL_ID        U
------------- -
cz0hg2zkvd10y Y

 

 

In the example above the query was monitored by cardinality feedback: because it contains one equality filter predicate and one inequality filter predicate, the cardinality estimate for ORDER_ITEMS in the first child cursor was wrong. Although both execution plans share the same plan hash value (the execution steps themselves are identical), there really are two child cursors. The cardinality error can be seen by comparing the actual (A-Rows) and estimated (E-Rows) columns obtained with gather_plan_statistics.

 

 

Automatic Re-optimization

 

Unlike dynamic plans, re-optimization adapts the execution plan on subsequent executions. This deferred optimization can improve plans affected by poor optimizer estimates such as a bad distribution method or an inappropriate degree of parallelism. After the first execution of a statement completes, the optimizer has more complete statistics, all of which can improve later plan choices. A query may be re-optimized multiple times, each time allowing larger and better-informed optimizer adjustments.

Re-optimization can make plan improvements that dynamic plans cannot. A dynamic plan can only improve a local portion of the overall plan; it cannot change the plan as a whole. For example, an inefficient join order can cause poor performance, but the join order cannot be changed during execution. In such scenarios the optimizer automatically considers re-optimization, and for re-optimization the optimizer must also decide when to collect which statistics.

In Oracle Database 12c, join statistics are also collected. Statements are monitored continuously to detect whether the statistics fluctuate across executions. Re-optimization works together with adaptive cursor sharing when bind variables are involved. This feature improves the query processing engine's ability to arrive at better execution plans.

The optimizer decides whether automatic re-optimization is worthwhile based on the statistics collectors and the corresponding original optimizer estimates. If the difference between the two exceeds a built-in threshold, the optimizer looks for an alternative execution plan.

 

Once the optimizer recognizes a query as a candidate for re-optimization, the database submits all the collected statistics to the optimizer.

V$SQL also gains an IS_REOPTIMIZABLE column indicating whether the next execution matching the child cursor will trigger re-optimization, or whether the child cursor contains re-optimization information but will not trigger it because the cursor was compiled in reporting-only mode.

 

 

IS_REOPTIMIZABLE VARCHAR2(1) This columns shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:
  • Y: If the next execution will trigger a reoptimization
  • R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode
  • N: If the child cursor has no reoptimization information

 

Test 1:

 

select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST'));

SQL_ID  gwf99gfnm0t7g, child number 0
-------------------------------------
SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name
FROM  orders o,   ( SELECT order_id, product_name FROM order_items o,
product_information p     WHERE  p.product_id = o.product_id AND
list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id

Plan hash value: 1906736282

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          |
|   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          |
|*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          |
|   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)|
|   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          |
|*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
   6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID")

SQL_ID  gwf99gfnm0t7g, child number 1
-------------------------------------
SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name
FROM  orders o,   ( SELECT order_id, product_name FROM order_items o,
product_information p     WHERE  p.product_id = o.product_id AND
list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id

Plan hash value: 35479787

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                     |      1 |        |    269 |00:00:00.01 |      63 |      3 |       |       |          |
|   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          |
|*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)|
|*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          |
|   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          |
|*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
   3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
   5 - access("O"."ORDER_ID"="ORDER_ID")

Note
-----
   - statistics feedback used for this statement

   SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL  A where A.SQL_ID='gwf99gfnm0t7g';

IS CHILD_NUMBER
-- ------------
Y             0
N             1

   1* select child_number,other_xml From v$SQL_PLAN  where SQL_ID='gwf99gfnm0t7g' and other_xml is not null
SQL> /

CHILD_NUMBER OTHER_XML
------------ --------------------------------------------------------------------------------
           1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">1
             2.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_
             hash">35479787</info><info type="plan_hash_2">3382491761</info><outline_data><hi
             nt><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATUR
             ES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><h
             int><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></
             hint><hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]>
             </hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint><hint><![CDATA[FULL(@"SEL$F5BB7
             4E1" "P"@"SEL$2")]]></hint><hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2"
             ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint><hint><![CDATA[I
             NDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA[
             LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint><hint><![C
             DATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint><hint><![CDATA[USE_NL(@"SEL$
             F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>

           0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![C
             DATA["OE"]]></info><info type="plan_hash">1906736282</info><info type="plan_hash
             _2">2579473118</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]>
             </hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CD
             ATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CD
             ATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint><hint><![CDATA[MERGE(@"SEL$2")]]></hi
             nt><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]>
             </hint><hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint><hint><![CDATA[
             INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA
             [INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PROD
             UCT_ID"))]]></hint><hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1
             " "O"@"SEL$2")]]></hint><hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"
             SEL$1")]]></hint><hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></o
             utline_data></other_xml>
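
As a side note, the re-optimization hints that statistics feedback records for a cursor can also be inspected directly, assuming the V$SQL_REOPTIMIZATION_HINTS view (introduced with the 12c adaptive features) is present in your release:

select sql_id, child_number, hint_text
from   v$sql_reoptimization_hints
where  sql_id = 'gwf99gfnm0t7g';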

 

Test 2:

 

 

SELECT /*+gather_plan_statistics*/ * 
FROM   customers 
WHERE  cust_state_province='CA' 
AND    country_id='US';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  b74nw722wjvy3, child number 0
-------------------------------------
select /*+gather_plan_statistics*/ * from customers where
CUST_STATE_PROVINCE='CA' and country_id='US'

Plan hash value: 1683234692

--------------------------------------------------------------------------------------------------
| Id  | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |      1 |        |     29 |00:00:00.01 |      17 |     14 |
|*  1 |  TABLE ACCESS FULL| CUSTOMERS |      1 |      8 |     29 |00:00:00.01 |      17 |     14 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US'))

 SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE
FROM   V$SQL
WHERE  SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%';

SQL_ID        CHILD_NUMBER SQL_TEXT    I
------------- ------------ ----------- -
b74nw722wjvy3            0 select /*+g Y
                           ather_plan_
                           statistics*
                           / * from cu
                           stomers whe
                           re CUST_STA
                           TE_PROVINCE
                           ='CA' and c
                           ountry_id='
                           US'

EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, 
       o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
FROM   DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
WHERE  d.DIRECTIVE_ID=o.DIRECTIVE_ID
AND    o.OWNER IN ('SH')
ORDER BY 1,2,3,4,5;

DIR_ID                  OWNER OBJECT_NAME   COL_NAME    OBJECT TYPE             STATE REASON
----------------------- ----- ------------- ----------- ------ ---------------- ----- ------------------------
1484026771529551585     SH    CUSTOMERS     COUNTRY_ID  COLUMN DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY 
                                                                                      MISESTIMATE
1484026771529551585     SH    CUSTOMERS     CUST_STATE_ COLUMN DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY 
                                            PROVINCE                                  MISESTIMATE        
1484026771529551585     SH    CUSTOMERS                 TABLE  DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY 
                                                                                      MISESTIMATE

SELECT /*+gather_plan_statistics*/ * 
FROM   customers 
WHERE  cust_state_province='CA' 
AND    country_id='US';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  b74nw722wjvy3, child number 1
-------------------------------------
select /*+gather_plan_statistics*/ * from customers where
CUST_STATE_PROVINCE='CA' and country_id='US'

Plan hash value: 1683234692

-----------------------------------------------------------------------------------------
| Id  | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |      1 |        |     29 |00:00:00.01 |      17 |
|*  1 |  TABLE ACCESS FULL| CUSTOMERS |      1 |     29 |     29 |00:00:00.01 |      17 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US'))

Note
-----
   - cardinality feedback used for this statement

 SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE
FROM   V$SQL
WHERE  SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%';

SQL_ID        CHILD_NUMBER SQL_TEXT    I
------------- ------------ ----------- -
b74nw722wjvy3            0 select /*+g Y
                           ather_plan_
                           statistics*
                           / * from cu
                           stomers whe
                           re CUST_STA
                           TE_PROVINCE
                           ='CA' and c
                           ountry_id='
                           US'

b74nw722wjvy3            1 select /*+g N
                           ather_plan_
                           statistics*
                           / * from cu
                           stomers whe
                           re CUST_STA
                           TE_PROVINCE
                           ='CA' and c
                           ountry_id='
                           US'

SELECT /*+gather_plan_statistics*/ CUST_EMAIL
FROM   CUSTOMERS
WHERE  CUST_STATE_PROVINCE='MA'
AND    COUNTRY_ID='US';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  3tk6hj3nkcs2u, child number 0
-------------------------------------
Select /*+gather_plan_statistics*/ cust_email From   customers Where
cust_state_province='MA' And    country_id='US'

Plan hash value: 1683234692

-------------------------------------------------------------------------------
|Id | Operation         | Name      | Starts|E-Rows|A-Rows| A-Time    |Buffers|
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT  |           |     1 |      |    2 |00:00:00.01|    16 |
|*1 |  TABLE ACCESS FULL| CUSTOMERS |     1 |     2|    2 |00:00:00.01|    16 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US'))

Note
-----
   - dynamic sampling used for this statement (level=2)
   - 1 Sql Plan Directive used for this statement

   EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, 
       o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
FROM   DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
WHERE  d.DIRECTIVE_ID=o.DIRECTIVE_ID
AND    o.OWNER IN ('SH')
ORDER BY 1,2,3,4,5;

DIR_ID              OW OBJECT_NA COL_NAME    OBJECT  TYPE            STATE         REASON
------------------- -- --------- ---------- ------- ---------------  ------------- ------------------------
1484026771529551585 SH CUSTOMERS COUNTRY_ID  COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY 
                                                                                   MISESTIMATE
1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY 
                                 PROVINCE                                          MISESTIMATE
1484026771529551585 SH CUSTOMERS             TABLE  DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY
                                                                                   MISESTIMATE

【Oracle Database 12c New Feature】TTnn and TMON: New Redo Transport Background Processes


In Oracle 11g, Data Guard redo transport is handled mainly by the following three groups of background processes:

  • ARCi – FAL (archived redo shipping, ping, local-only archival)
  • NSAi – asynchronous redo shipping (renamed TTnn in 12.1)
  • NSSi – synchronous (live) redo shipping

 

Starting with 12c, however, a TTnn process (for example TT00) handles asynchronous (ASYNC) redo transport, and a new background process, TMON, acts as the redo transport monitor.

 

SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

SQL> select program,pid from v$process where program like '%TMON%' or Program like '%TT%';

PROGRAM                               PID
------------------------------ ----------
ORACLE.EXE (TMON)                       7
ORACLE.EXE (TT00)                      24

 

 

 

 

The motivation for the change: in 11g the NSAi async redo shipping process still had to be notified by LGWR before it could work, which introduced a short redo transport delay; in 12c the TTnn processes ship redo without depending on LGWR.

Note that this discussion applies only to asynchronous (ASYNC) redo shipping.
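
For reference, ASYNC is what gets requested in the redo transport destination setting; a minimal sketch (the service name and DB_UNIQUE_NAME are illustrative only):

ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_tns ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'
  SCOPE=BOTH;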

In 11g:

 

[Figure: 11g NSAi async redo transport]

 

In 12c:

[Figure: 12c TTnn/TMON async redo transport]

 

 


【Oracle Database 12c New Feature】ASM Scrubbing Disk Groups


In 12.1, Oracle ASM introduces a new availability and reliability feature called disk group scrubbing. Disk scrubbing checks data for logical corruption and, on NORMAL or HIGH redundancy disk groups, can repair it, using the mirror copies to perform the repair. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resource consumption, and its I/O impact on a production system is small.

You can scrub an entire disk group, a specific disk, or an individual file within a disk group with the ALTER DISKGROUP command, as in the following examples:

 

 

SQL> ALTER DISKGROUP data SCRUB POWER LOW;

SQL> ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/ASKMACLEAN/example.266.806582193' 
       REPAIR POWER HIGH FORCE;

SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER HIGH FORCE;

 

 

The SCRUB clause shown above accepts the following options (a combined example follows the list):

 

  • REPAIR automatically repairs disk corruption; if REPAIR is not specified, SCRUB only checks and reports logical corruption on the specified target.
  • POWER can be set to AUTO, LOW, HIGH, or MAX; if POWER is not specified, AUTO is used and the level is adjusted automatically.
  • WAIT makes the command return only after the scrubbing operation has completed; without WAIT, the operation is added to the scrubbing queue and the command returns immediately.
  • FORCE runs the command even if the system I/O load is high or scrubbing has been disabled at the system level.
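
A minimal combined example: scrub the entire disk group, repair any logical corruption that is found, and return only once the operation has completed (the disk group name is the same as in the examples above):

SQL> ALTER DISKGROUP data SCRUB REPAIR POWER MAX WAIT;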

 

【12c New Feature】RAC Cluster Hub Nodes and Leaf Nodes


Original post: http://www.askmaclean.com/archives/12c-rac-cluster-hub-node-leaf-node.html

 

The 12c cluster stack introduces many new features and concepts. Besides Flex Cluster and Flex ASM, two of the most frequently repeated terms are Hub Node and Leaf Node, which this post introduces.

 

[Figure: Oracle Flex Cluster architecture]

 

  • Hub Node, official definition:
    • A node in an Oracle Flex Cluster that is tightly connected with other servers and has direct access to a shared disk.
  • Leaf Node, official definition:
    • Servers that are loosely coupled with Hub Nodes, which may not have direct access to the shared storage.

The key difference is that a Leaf Node cannot access the shared storage directly; in other words, Leaf Nodes do not share the disks. A Hub Node is no different from an ordinary pre-12c cluster node, whereas the Leaf Node is new technology.
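
A quick way to confirm this on a running 12c Flex Cluster is crsctl; a sketch only, since the exact output depends on the environment:

$ crsctl get cluster mode status      # reports whether the cluster runs in "flex" or "standard" mode
$ crsctl get node role config         # reports whether the local node is configured as a 'hub' or 'leaf' node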

 

Characteristics of a Leaf Node:

  • More loosely coupled to the cluster than a Hub Node
  • Automatically discovers Hub Nodes at startup
  • Connects to the cluster through a single Hub Node
  • Is evicted when its Hub Node fails or the network to it fails
  • Does not require direct access to shared storage
  • Resides on the same network as the Hub Nodes

 

The benefits of using Leaf Nodes to build a Flex Cluster are obvious:

  • The hub-and-spoke topology divides the cluster into manageable groups of nodes
  • Only the Hub Nodes need direct access to the OCR and voting disks
  • Limiting the number of Hub Nodes reduces contention for critical clusterware resources such as the OCR and voting disks
  • Fewer network interactions are required between nodes
  • Less management network traffic, such as inter-node heartbeats

 

 

As the figure below shows, a 12-node Flex Cluster needs only 12 interconnect paths, whereas a conventional cluster needs n*(n-1)/2 = 66.

For clusters approaching 1000 nodes the difference is even more dramatic. With 40 Hub Nodes, each serving 24 Leaf Nodes, a Flex Cluster has 1,740 interconnect paths; a conventional cluster of the same size needs 499,500.
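
These counts are easy to reproduce: the 40 Hub Nodes form a full mesh (40*39/2 paths) plus one path per Leaf Node, while a conventional cluster is a full mesh over all 1000 nodes:

select 40*39/2 + 40*24 as flex_cluster_paths,      -- 780 + 960 = 1740
       1000*999/2      as standard_cluster_paths   -- 499500
from   dual;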

 

[Figure: interconnect paths in a Flex Cluster vs. a conventional cluster]

 

 

In a Flex Cluster, a node that is evicted from the cluster does not need to be rebooted; only its cluster software has to be restarted.

 

If a Hub Node fails:

  • The node is evicted from the cluster, and its services are relocated to other Hub Nodes where possible
  • The Leaf Nodes attached to that Hub Node are also evicted, and their services are likewise relocated to other Leaf Nodes where possible

If a Leaf Node fails:

  • The node is evicted from the cluster, and its services are relocated to another Leaf Node where possible

 

【Oracle Database 12c New Feature】32k VARCHAR2 and MAX_STRING_SIZE


In Oracle Database 12c, the VARCHAR2, NVARCHAR2, and RAW data types can be declared with a maximum length of 32767 bytes, so that longer strings can be stored directly in the database.

 

In releases prior to 12c, the maximum length of VARCHAR2 and NVARCHAR2 was 4000 bytes, and RAW was limited to 2000 bytes.

The declared length of a VARCHAR2, NVARCHAR2, or RAW column determines how the column is stored internally:

  • VARCHAR2 and NVARCHAR2 columns declared as 4000 bytes or less, and RAW columns of 2000 bytes or less, are stored inline.
  • VARCHAR2 and NVARCHAR2 columns declared as more than 4000 bytes, and RAW columns of more than 2000 bytes, are called extended character data type columns and are stored out of line.

 

The MAX_STRING_SIZE parameter controls the maximum length of these extended data types:

  • STANDARD keeps the pre-12c limits: 4000 bytes for VARCHAR2 and NVARCHAR2, 2000 bytes for RAW
  • EXTENDED enables the 12c 32k-string feature: up to 32767 bytes for VARCHAR2, NVARCHAR2, and RAW

 

Extended character data types have the following restrictions:

  • Not supported in cluster tables or index-organized tables
  • Intrapartition parallel DDL, UPDATE, and DELETE DML are not supported
  • Intrapartition parallel direct-path inserts into Automatic Segment Space Management (ASSM) tablespaces are not supported

 

The steps to configure a database for extended data types are as follows (a consolidated SQL*Plus sketch follows the list):

 

  1. Shut down the database instance with SHUTDOWN IMMEDIATE; for RAC, shut down all instances
  2. Start the instance in upgrade mode: STARTUP UPGRADE
  3. Set the parameter: ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED;
  4. Run @?/rdbms/admin/utl32k
  5. Restart the database instance
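
A minimal single-instance SQL*Plus sketch of these steps, run as SYSDBA (it assumes an spfile is in use so the parameter change persists across the restart):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED;
SQL> @?/rdbms/admin/utl32k
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP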

 

 

 

 @?/rdbms/admin/utl32k

...Database user "SYS", database schema "APEX_040200", user# "98" 13:29:59
...Compiled 0 out of 2998 objects considered, 0 failed compilation 13:30:00
...263 packages
...255 package bodies
...453 tables
...11 functions
...16 procedures
...3 sequences
...458 triggers
...1322 indexes
...207 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 13:30:00
...Completed key object existence check 13:30:00
...Setting DBMS Registry 13:30:00
...Setting DBMS Registry Complete 13:30:00
...Exiting validate 13:30:00

 

 

 

MAX_STRING_SIZE must be set to EXTENDED; otherwise extended character data type columns cannot be used.

 

After that, tables containing extended character data type columns can be created.

Existing VARCHAR2, NVARCHAR2, and RAW columns can also be lengthened with ALTER TABLE ... MODIFY (column ...). In that case Oracle extends the column length within the existing blocks and does not migrate the inline storage to out-of-line LOB storage.

 

In practice, Oracle does not recommend aggressively increasing existing VARCHAR2 columns beyond 4000 bytes, for the following reasons:

  • It can easily lead to chained rows (row chaining)
  • Inline-stored rows are read whether or not the extended column is selected, so inline extended character columns inevitably have some performance impact
  • Migrating to the new out-of-line storage for extended character types requires rebuilding the table; no table reorganization such as ALTER TABLE MOVE will break up the inline storage

 

A 32k column can be added to an existing heap table with an ALTER TABLE ... ADD DDL statement, for example:
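
For example (the table and column names are illustrative only, and MAX_STRING_SIZE must already be EXTENDED):

SQL> ALTER TABLE order_history ADD (notes VARCHAR2(32767));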

 

Data Pump export/import and SQL*Loader both support extended character data type columns.

An index on an existing column cannot accommodate the data type extension, so the index must first be dropped, the column modified to the extended length, and the index then recreated.

 

Some demonstrations of 32k VARCHAR2, MAX_STRING_SIZE, and extended character data type columns:

 

 

 

SQL> set linesize 200 pagesize 20000
SQL> select banner from v$version where rownum=1;

BANNER
-----------------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

SQL> show parameter MAX_STRING_SIZE

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- -----------------------------
max_string_size                      string                 EXTENDED

SQL> CREATE TABLE long_varchar(id NUMBER,vc VARCHAR2(32767));

Table created.

SQL> DESC long_varchar
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 VC                                                 VARCHAR2(32767)

 SQL> insert into long_varchar values(1,rpad('MACLEAN',30000,'A'));

1 row created.

SQL> commit;

Commit complete.

SQL> alter system flush buffer_cache;

System altered.

SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from long_varchar;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                               97217                                    1

SQL> alter system dump datafile 1 block 97217;

System altered.

SQL> oradebug setmypid
Statement processed.
SQL> oradebug tracefile_name
C:\APP\XIANGBLI\diag\rdbms\maclean\maclean\trace\maclean_ora_5688.trc

tab 0, row 0, @0x1f65
tl: 59 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 02
col  1: [52]
 00 54 00 01 01 0c 00 00 00 01 00 00 00 01 00 00 00 1f 50 05 00 20 05 00 00
 00 00 03 15 e4 00 00 00 00 00 02 00 41 c5 73 00 41 c5 74 00 41 c5 75 00 41
 c5 76
LOB
Locator:
  Length:        84(52)
  Version:        1
  Byte Length:    1
  LobID: 00.00.00.01.00.00.00.1f.50.05
  Flags[ 0x01 0x0c 0x00 0x00 ]:
    Type: BLOB 
    Storage: BasicFile
    Enable Storage in Row 
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: ReadWrite 
  Inode: 
    Size:     32
    Flag:     0x05 [ Valid InodeInRow(ESIR) ]
    Future:   0x00 (should be '0x00')
    Blocks:   3
    Bytes:    5604
    Version:  00000.0000000002
    DBA Array[4]:
      0x0041c573 0x0041c574 0x0041c575 0x0041c576
end_of_block_dump							   

As the dump shows, the native 32k VARCHAR2 is in fact stored out of line as a BasicFile BLOB.

SQL> create table convert_long(t1 int,t2 varchar2(20));

Table created.

SQL> insert into convert_long values(1,'MACLEAN');

1 row created.

SQL> commit;

Commit complete.

SQL> create index ind_cl on convert_long(t2);

Index created.

SQL> alter table convert_long modify t2 varchar2(32767);
alter table convert_long modify t2 varchar2(32767)
*
ERROR at line 1:
ORA-01404: ALTER COLUMN will make an index too large

SQL> drop  index ind_cl;

Index dropped.

SQL> alter table convert_long modify t2 varchar2(32767);

Table altered.

SQL> update convert_long set t2=rpad('MACLEAN',30000,'A');

1 row updated.

SQL> commit;

Commit complete.

SQL> alter system flush buffer_cache;

System altered.

SQL>  select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from convert_long;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                              117121                                    1

SQL> oradebug setmypid
Statement processed.
SQL> oradebug tracefile_name
C:\APP\XIANGBLI\diag\rdbms\maclean\maclean\trace\maclean_ora_4340.trc

As the dump shows, a chained row has been created.

tab 0, row 0, @0x7aa
tl: 6120 fb: --H-F--N lb: 0x2  cc: 2
nrid:  0x0041c984.0
col  0: [ 2]  c1 02
col  1: [6105]
 4d 41 43 4c 45 41 4e 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41
 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41

 
