
How_To_Upgrade_11.2.0.3_To_11.2.0.4(GI+RAC)


How to Upgrade Oracle Grid Infrastructure and RAC from 11.2.0.3 to 11.2.0.4

张建英(Jane Zhang) | Principal Support Engineer

Oracle Global Software Support

Contents

1 Overview of Steps

1.1 Upgrade GI

1.2 Upgrade the RAC Database Software

1.3 Upgrade Existing Databases

2 Detailed Steps

2.1 Environment

2.2 Upgrade GI

2.2.1 Download and unzip the 11.2.0.4 GI media

2.2.2 Run pre-upgrade checks with CVU (recommended)

2.2.3 Upgrade GI

2.2.4 Point the grid user's ORACLE_HOME and PATH at the new home

2.3 Upgrade the RAC Software

2.3.1 Download and unzip the 11.2.0.4 database software

2.3.2 Upgrade the database software

2.4 Upgrade Existing Databases

2.4.1 Run utlu112i.sql for pre-upgrade checks

2.4.2 Run the 11.2.0.4 DBUA to upgrade the database

2.4.3 Point the oracle user's ORACLE_HOME and PATH at the new home

1 Overview of Steps

The high-level steps for upgrading GI and RAC from 11.2.0.3 to 11.2.0.4:

1.1 Upgrade GI

1) Download the 11.2.0.4 GI software.

Download link for 11.2.0.4:

https://www.doczj.com/doc/4d15565066.html,/download/13390677.html

p1*******_112040_platform_3of7.zip is Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart).

2) Install 11.2.0.4 GI into a new ORACLE_HOME (do not stop the old GI; keep GI running on all nodes).

3) During installation, select "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management".

4) At the end of the installation, when prompted, run rootupgrade.sh as root on each node in turn.

5) Point the grid user's ORACLE_HOME, PATH, and related environment variables at the new home.

6) See the official documentation for upgrading 11.2 GI:

Oracle® Grid Infrastructure Installation Guide

11g Release 2 (11.2) for Linux

E41961-02

Appendix F: How to Upgrade to Oracle Grid Infrastructure 11g Release 2

https://docs.oracle.com/cd/E11882_01/install.112/e41961/procstop.htm#BABEHGJG

1.2 Upgrade the RAC Database Software

1) Download the 11.2.0.4 database software:

https://www.doczj.com/doc/4d15565066.html,/download/13390677.html

p1*******_112040_platform_1of7.zip

p1*******_112040_platform_2of7.zip

The two archives above are Oracle Database (includes Oracle Database and Oracle RAC).

2) Before installing, be sure to unset the oracle user's ORACLE_BASE, ORACLE_HOME, ORACLE_SID, and related variables.

3) Install 11.2.0.4 RAC into a new ORACLE_HOME, choosing a software-only installation (Install database software only).

4) During the 11.2.0.4 installation, set the correct ORACLE_BASE and ORACLE_HOME.

5) For installation requirements, see the official 11.2 documentation:

Oracle® Real Application Clusters Installation Guide

11g Release 2 (11.2) for Linux and UNIX

E41962-03

https://docs.oracle.com/cd/E11882_01/install.112/e41962/chklist.htm
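Step 2 above matters because a stale 11.2.0.3 environment can leak into the installer session. A minimal sketch of clearing the oracle user's shell before launching runInstaller (the variables beyond ORACLE_BASE, ORACLE_HOME, and ORACLE_SID are a precautionary assumption, not a requirement from this document):

```shell
# Clear inherited 11.2.0.3 settings before starting the 11.2.0.4 installer.
# TNS_ADMIN and ORA_NLS10 are unset as a precaution (assumption).
unset ORACLE_BASE ORACLE_HOME ORACLE_SID TNS_ADMIN ORA_NLS10

# Confirm nothing is left over before running ./runInstaller:
if [ -z "${ORACLE_HOME:-}" ] && [ -z "${ORACLE_SID:-}" ]; then
  echo "environment is clean"
fi
```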

1.3 Upgrade Existing Databases

1) Always back up the database before upgrading.

2) Run utlu112i.sql for pre-upgrade checks (with the database running):

su - oracle

export ORACLE_HOME=<old ORACLE_HOME>

export ORACLE_SID=<instance name>

$ORACLE_HOME/bin/sqlplus / as sysdba

SQL> @/u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin/utlu112i.sql <== run the copy of this script shipped under the new ORACLE_HOME, and fix every issue it reports.

3) Run the 11.2.0.4 DBUA to upgrade the database:

<new ORACLE_HOME>/bin/dbua

What DBUA will do:

- Read the database information from /etc/oratab

- Stop the database and DB Console

- Create a password file in the new ORACLE_HOME

- Copy the spfile to the new ORACLE_HOME and strip obsolete parameters

- Optionally back up the database (selectable in DBUA)

- Optionally migrate datafiles from file system/raw devices to ASM (the diskgroup must be mounted)

4) Point the oracle user's ORACLE_HOME, PATH, and related environment variables at the new home.

5) See the official documentation for upgrading the database to 11.2:

Oracle® Database Upgrade Guide

11g Release 2 (11.2)

E23633-09

https://docs.oracle.com/cd/E11882_01/server.112/e23633/upgrade.htm
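The first DBUA task listed above, reading /etc/oratab, can be illustrated with a short sketch. Each oratab entry has the form SID:ORACLE_HOME:startup_flag; the throwaway file below mirrors this document's environment rather than touching the real /etc/oratab:

```shell
# Build a throwaway oratab with the entries this cluster would have.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
# Comment lines start with '#'.
+ASM1:/u01/app/11.2.0/grid:N
RACDB:/u01/app/oracle/product/11.2.0/dbhome_1:N
EOF

# Print SID and ORACLE_HOME for every non-comment entry, the way
# DBUA-style tools discover candidate databases.
awk -F: '/^[^#]/ && NF >= 3 {print $1, $2}' "$oratab"

rm -f "$oratab"
```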

2 Detailed Steps

2.1 Environment

Two nodes running 11.2.0.3 on Linux x86_64; the node names are rac1 and rac2, and the database name is RACDB.

GI HOME:

11.2.0.3: /u01/app/11.2.0/grid

11.2.0.4: /u01/app/11.2.0.4/grid

DB HOME:

11.2.0.3: /u01/app/oracle/product/11.2.0/dbhome_1

11.2.0.4: /u01/app/oracle/product/11.2.0.4/dbhome_1

2.2 Upgrade GI

2.2.1 Download and unzip the 11.2.0.4 GI media

https://www.doczj.com/doc/4d15565066.html,/download/13390677.html

p1*******_112040_platform_3of7.zip

[grid@rac1 software]$ ls -l

total 1178160

-rw-r--r-- 1 grid oinstall 1205251894 Sep 2 18:49 p1*******_112040_Linux-x86-64_3of7.zip

Unzip as the grid user:

[grid@rac1 software]$ pwd

/u01/software

[grid@rac1 software]$ unzip p1*******_112040_Linux-x86-64_3of7.zip

Create the ORACLE_HOME for the 11.2.0.4 GI on both nodes:

# mkdir -p /u01/app/11.2.0.4/grid

# chown grid:oinstall /u01/app/11.2.0.4/grid

2.2.2 Run pre-upgrade checks with CVU (recommended)

Run the following as the grid user on node 1:

[grid@rac1 ~]$ /u01/software/grid/runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0.4/grid -dest_version 11.2.0.4.0

Performing pre-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "rac1"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking CRS user consistency

CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"

Node connectivity passed for interface "eth0"

TCP connectivity check passed for subnet "192.0.2.0"

Check: Node connectivity for interface "eth1"

Node connectivity passed for interface "eth1"

TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.0.2.0".

Subnet mask consistency check passed for subnet "192.168.1.0".

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking OCR integrity...

OCR integrity check passed

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check passed

Available memory check passed

Swap space check failed

Check failed on nodes:

rac2,rac1

Free disk space check passed for "rac2:/u01/app/11.2.0.4/grid"

Free disk space check passed for "rac1:/u01/app/11.2.0.4/grid"

Free disk space check passed for "rac2:/tmp"

Free disk space check passed for "rac1:/tmp"

Check for multiple users with UID value 54322 passed

User existence check passed for "grid"

Group existence check passed for "oinstall"

Membership check for user "grid" in group "oinstall" [as Primary] passed

Run level check passed

Hard limits check passed for "maximum open file descriptors"

Soft limits check passed for "maximum open file descriptors"

Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/u01/app/11.2.0/grid".

There are no oracle patches required for home "/u01/app/11.2.0.4/grid".

System architecture check passed

Kernel version check passed

Kernel parameter check passed for "semmsl"

Kernel parameter check passed for "semmns"

Kernel parameter check passed for "semopm"

Kernel parameter check passed for "semmni"

Kernel parameter check passed for "shmmax"

Kernel parameter check passed for "shmmni"

Kernel parameter check passed for "shmall"

Kernel parameter check passed for "file-max"

Kernel parameter check passed for "ip_local_port_range"

Kernel parameter check passed for "rmem_default"

Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"

Kernel parameter check passed for "wmem_max"

Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "make"

Package existence check passed for "binutils"

Package existence check passed for "gcc(x86_64)"

Package existence check passed for "libaio(x86_64)"

Package existence check passed for "glibc(x86_64)"

Package existence check passed for "compat-libstdc++-33(x86_64)"

Package existence check passed for "elfutils-libelf(x86_64)"

Package existence check passed for "elfutils-libelf-devel"

Package existence check passed for "glibc-common"

Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "glibc-headers"

Package existence check passed for "gcc-c++(x86_64)"

Package existence check passed for "libaio-devel(x86_64)"

Package existence check passed for "libgcc(x86_64)"

Package existence check passed for "libstdc++(x86_64)"

Package existence check passed for "libstdc++-devel(x86_64)"

Package existence check passed for "sysstat"

Package existence check passed for "ksh"

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed

Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks

File "/etc/resolv.conf" is consistent across nodes

UDev attributes check for OCR locations started...

UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started...

UDev attributes check passed for Voting Disk locations

Time zone consistency check passed

Checking VIP configuration.

Checking VIP Subnet configuration.

Check for VIP Subnet configuration passed.

Checking VIP reachability

Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
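In the output above the only failed item is the swap space check, which is why the overall pre-check reports unsuccessful. The 11.2 installation guides size swap from physical RAM; the documented tiers can be sketched as a small shell function (an illustration of the sizing rule, not a substitute for fixing the check):

```shell
# Required swap (MB) for a given RAM size (MB), per the 11.2 install guides:
#   RAM <= 2 GB         -> 1.5 x RAM
#   2 GB < RAM <= 16 GB -> equal to RAM
#   RAM > 16 GB         -> 16 GB
required_swap_mb() {
  ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))
  elif [ "$ram_mb" -le 16384 ]; then
    echo "$ram_mb"
  else
    echo 16384
  fi
}

required_swap_mb 1024    # -> 1536
required_swap_mb 8192    # -> 8192
required_swap_mb 32768   # -> 16384
```

Extending swap before the upgrade is the safer route; OUI also lets you review and, at your own risk, ignore a failed prerequisite.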

2.2.3 Upgrade GI

Do not stop the old GI; keep GI running on all nodes:

[grid@rac1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER        STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.DATA.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.LISTENER.lsnr

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.RECO.dg

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.asm

ONLINE ONLINE rac1 Started

ONLINE ONLINE rac2 Started

ora.gsd

OFFLINE OFFLINE rac1

OFFLINE OFFLINE rac2

ora.net1.network

ONLINE ONLINE rac1

ONLINE ONLINE rac2

ora.ons

ONLINE ONLINE rac1

ONLINE ONLINE rac2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE rac1

ora.cvu

1 ONLINE ONLINE rac1

ora.oc4j

1 ONLINE ONLINE rac1

ora.rac1.vip

1 ONLINE ONLINE rac1

ora.rac2.vip

1 ONLINE ONLINE rac2

ora.racdb.db

1 ONLINE ONLINE rac1 Open

2 ONLINE ONLINE rac2 Open

ora.scan1.vip

1 ONLINE ONLINE rac1

[grid@rac1 ~]$

Run the installer as the grid user on node 1:

$cd /u01/software/grid

$./runInstaller

Enter the ORACLE_HOME for the 11.2.0.4 GI; this path must have been created on both nodes beforehand.

When prompted, run rootupgrade.sh as root on node 1:

[root@rac1 grid]# /u01/app/11.2.0.4/grid/rootupgrade.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

ASM upgrade has started on first node.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.RECO.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'

CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'

CRS-2677: Stop of 'ora.RECO.dg' on 'rac1' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded

CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded

CRS-2677: Stop of 'ora.CRS.dg' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'rac1'

CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'

CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

OLR initialization - successful

Replacing Clusterware entries in inittab

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

When prompted, run rootupgrade.sh as root on node 2:

[root@rac2 ~]# /u01/app/11.2.0.4/grid/rootupgrade.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'

CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.RECO.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'

CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.cvu' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'

CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2'

CRS-2677: Stop of 'ora.scan1.vip' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac1'

CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded

CRS-2677: Stop of 'ora.RECO.dg' on 'rac2' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded

CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded

CRS-2676: Start of 'ora.scan1.vip' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac1'

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded

CRS-2677: Stop of 'ora.CRS.dg' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'rac2'

CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'

CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'

CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'

CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac2'

CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'

CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'

CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

OLR initialization - successful

Replacing Clusterware entries in inittab

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.

Started to upgrade the CSS.

Started to upgrade the CRS.

The CRS was successfully upgraded.

Successfully upgraded the Oracle Clusterware.

Oracle Clusterware operating version was successfully set to 11.2.0.4.0

ASM upgrade has finished on last node.

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After the scripts complete, return to the OUI window and click "OK" to continue updating the Inventory.

Finally, run the command below to check the GI version; it has reached 11.2.0.4:

[grid@rac1 ~]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.4.0]

2.2.4 Point the grid user's ORACLE_HOME and PATH at the new home

Make this change on all nodes (on rac2 the ORACLE_SID is +ASM2):

[grid@rac1 ~]$ vi ~/.bash_profile

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=/u01/app/11.2.0.4/grid; export ORACLE_HOME

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

BASE_PATH=/usr/sbin:$PATH; export BASE_PATH

PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
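Besides exporting the new ORACLE_HOME, any PATH entries still pointing at the old grid home should be dropped, or the 11.2.0.3 binaries keep winning the lookup. A POSIX-shell sketch (the two home paths come from section 2.1; the starting PATH is a demonstration value, so run this in a scratch shell):

```shell
OLD_GI_HOME=/u01/app/11.2.0/grid
NEW_GI_HOME=/u01/app/11.2.0.4/grid

# Demonstration PATH, as it might look after sourcing the old profile:
PATH=$OLD_GI_HOME/bin:/usr/sbin:/usr/bin:/bin

# Drop every entry under the old home, then prepend the new home's bin:
PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | grep -v "^$OLD_GI_HOME/" | paste -sd: -)
PATH=$NEW_GI_HOME/bin:$PATH
export PATH

echo "$PATH"    # -> /u01/app/11.2.0.4/grid/bin:/usr/sbin:/usr/bin:/bin
```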
