Implementing ASM for Oracle 10g RAC on IBM AIX
RAC (ASM) to single-instance OGG configuration case

Environment:
Source DB:
  OS: AIX 7100-02-07-1524
  Database: Oracle 11.2.0.1.0 RAC
  GoldenGate: for_11g_ppc
Target DB:
  OS: Windows 7
  Database: Oracle 11.2.0.1.0
  GoldenGate: for_11g_x86

Note: this lab demonstrates installing and configuring OGG across different platforms at the same database version and implementing simple DML replication; more complex scenarios are left for you to experiment with.
Author: ZhangQY (QQ: 5056357)

Configuration steps:

1. Verify correct IP resolution on source and target.

Source:
# cat /etc/hosts
# 10.2.0.2 x25sample # x.25 name/address
# 2000:1:1:1:209:6bff:feee:2b7f ipv6sample # ipv6 name/address
127.0.0.1 loopback localhost # loopback (lo0) name/address
::1 loopback localhost # IPv6 loopback (lo0) name/address
172.16.16.101 zqdb
192.169.79.11 zqdb
172.16.16.165 oradg
192.169.79.12 oradg
172.16.16.166 gc1-scan.zqdb
172.16.16.168 zqdb-vip
172.16.16.169 oradg-vip

Target:
C:\Windows\System32\drivers\etc\hosts needs no special configuration.

2. Set LIBPATH so the OGG installation can find the dynamic libraries it links against.
If this path is not set, the OGG installation fails with errors about missing shared libraries; you can try it yourself to see.
Source:
# su - oracle
zqdb:/home/oracle>$ vi .profile
".profile" 25 lines, 756 characters
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]          # This is at Shell startup.  In normal
then echo "$MAILMSG"       # operation, the Shell checks
fi                         # periodically.
OGG_HOME=/oracle/ogg/12.1.2
ORACLE_BASE=/oracle/ora11g
ORACLE_HOME=/oracle/ora11g/product/11g
ORACLE_SID=ora11g1
export ORACLE_BASE ORACLE_HOME ORACLE_SID
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:/lib
export ORA_NLS33 NLS_LANG LD_LIBRARY_PATH
PATH=$PATH:$ORACLE_HOME/bin:$OGG_HOME
export PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib
export LIBPATH
export DISPLAY=172.17.2.203:0.0
export PS1="`hostname`":'$PWD>$'

Target: no special configuration.

3. On the source, create a dedicated tablespace and schema and grant privileges.
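The case stops before the OGG processes themselves. For the simple DML replication it targets, a minimal extract/replicat pair could look like the sketch below; the group names (ext1/rep1), trail paths, OGG user (ogg) and replicated schema (scott) are illustrative assumptions, and the data pump that would normally ship the trail to the target is omitted for brevity.

-- Source (RAC/ASM) side, in ggsci:
ADD EXTRACT ext1, TRANLOG, THREADS 2, BEGIN NOW
ADD EXTTRAIL ./dirdat/ea, EXTRACT ext1
-- dirprm/ext1.prm (classic capture reading redo out of ASM):
EXTRACT ext1
USERID ogg, PASSWORD ogg
TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD oracle
EXTTRAIL ./dirdat/ea
TABLE scott.*;
-- Target (Windows) side, in ggsci:
ADD REPLICAT rep1, EXTTRAIL ./dirdat/ea, NODBCHECKPOINT
-- dirprm/rep1.prm:
REPLICAT rep1
USERID ogg, PASSWORD ogg
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;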
Oracle 11g RAC for AIX installation steps

Contents
1. Pre-installation OS checks
2. Installation preparation
  2.1 Create the Grid Infrastructure and Oracle users and groups
  2.2 Create the Grid clusterware and Oracle Database directories
  2.3 Check hardware requirements
  2.4 IP address allocation
  2.5 Tune OS parameters
  2.6 Configure the NTP service (server and client)
  2.7 Configure SSH
  2.8 Configure the SSH LoginGraceTime parameter
  2.9 Configure the GI user environment
  2.10 Configure the Oracle environment variables
  2.11 Prepare the ASM disks
3. Grid Infrastructure installation
  3.1 Verify the root user umask is 022
  3.2 Verify the grid user umask is 022
  3.3 Verify the oracle user umask is 022
  3.4 Prepare /etc/hosts
  3.5 Check ifconfig output
  3.6 Clean up socket files
  3.7 Install GI
4. Install Oracle Database 11gR2 (software only, no database)
5. Install the latest OPatch
6. Apply PSU3 (18706472) to Oracle GI & RDBMS
7. Adjust GI resources
8. Adjust ASM parameters
9. Install one-off patches
10. Create the database
11. Run the one-off patch SQL
12. Adjust database parameters
13. Adjust the 11g default profile
14. Common Oracle 11g RAC commands
  14.1 Check cluster resource status
  14.2 Oracle 11g GI start/stop commands
  14.3 Maintenance notes

1. Pre-installation OS checks
Check the software requirements.

2. Installation preparation

2.1 Create the Grid Infrastructure and Oracle users and groups
Create the /oracle directory, 80 GB, on both nodes:
mkgroup -'A' id='1000' adms='root' oinstall
mkgroup -'A' id='1100' adms='root' asmadmin
mkgroup -'A' id='1200' adms='root' dba
mkgroup -'A' id='1300' adms='root' asmdba
mkgroup -'A' id='1301' adms='root' asmoper
mkgroup -'A' id='1302' adms='root' oper
useradd -u '1100' -g 'oinstall' -G 'asmadmin,asmdba,asmoper' -m grid
useradd -u '1101' -g 'oinstall' -G 'dba,asmdba,oper' -m oracle
passwd grid
passwd oracle
Note: after setting the passwords, log in again once as each user.
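A quick cross-node sanity check after creating the accounts (a sketch; run it on both nodes and compare the output):

id grid       # uid and group ids must match on every node
id oracle
lsgroup -a id oinstall,asmadmin,dba,asmdba,asmoper,oper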
RAC and ASM installation walkthrough

Change the hostname:
Step 1: # hostname oratest
Step 2: set the hostname in /etc/sysconfig/network
Step 3: update /etc/hosts. A reference hosts file:
[root@amdocs01 mapper]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
localhost
amdocs02
amdocs02-vip
amdocs02-priv

Set the IP addresses on eth0 and eth1.

Bind the raw devices. First carve logical volumes out of the volume group; all of them are used as raw devices. The ocrlv and votelv logical volumes are mandatory, because the later ASM installation needs them; data01, data02, data03, data04, data05, softlv and oralv are optional.

1. Definition: a disk with no partitions is a raw device (RAW DEVICE); a partition not formatted with a filesystem such as EXT3 or OCFS is a raw partition (RAW PARTITION). Both count as raw devices.

2. Binding: a partition with a filesystem is mounted on a mount point (a directory); a raw device cannot be mounted and can only be bound to a device name under /dev/raw/, such as /dev/raw/raw1.

3. Binding methods. Method one: edit /etc/sysconfig/rawdevices and add the lines below, where sdd1 and sdd2 are raw partition or raw device names and raw1 and raw2 are the device names under /dev/raw (at most 255 raw devices can be bound):
/dev/raw/raw1 /dev/sdd1
/dev/raw/raw2 /dev/sdd2
Then set the owner and permissions of the raw devices:
chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Finally activate the bindings and make them load at boot:
/sbin/chkconfig rawdevices on
This step is important: it guarantees the raw devices are bound when the machine starts.

Method two: add the bindings to the local init script:
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
raw /dev/raw/raw1 /dev/mapper/vg00-ocrlv
raw /dev/raw/raw2 /dev/mapper/vg00-votelv
raw /dev/raw/raw3 /dev/mapper/vg00-data01
raw /dev/raw/raw4 /dev/mapper/vg00-data02
raw /dev/raw/raw5 /dev/mapper/vg00-data03
raw /dev/raw/raw6 /dev/mapper/vg00-data04
chmod 775 /dev/raw/raw1
chmod 775 /dev/raw/raw2
chmod 775 /dev/raw/raw3
chmod 775 /dev/raw/raw4
chmod 775 /dev/raw/raw5
chmod 775 /dev/raw/raw6
chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
chown oracle:dba /dev/raw/raw3
chown oracle:dba /dev/raw/raw4
chown oracle:dba /dev/raw/raw5
chown oracle:dba /dev/raw/raw6
chown oracle:dba /dev/raw/raw7
modprobe hangcheck-timer hangcheck-tick=30 hangcheck_margin=180

4. Reading and writing raw devices: cp and similar commands do not work; write with dd (see the relevant documentation).

5. Zeroing a raw device (in effect, formatting it): bs is the block size and count the number of blocks; their product must be at least the device's capacity.
dd if=/dev/zero of=/dev/raw/raw1 bs=8192 count=12800
dd if=/dev/zero of=/dev/raw/raw2 bs=8192 count=12800

Note: RHEL4 manages devices with udev, so a manual change to /dev/raw/raw1 does not survive a reboot. To make the permissions persistent, change the udev permissions rule
raw/*:root:disk:0660
to
raw/*:oracle:dba:0660
and reboot. If /dev has no raw/ directory, create it by hand.
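After binding, you can confirm the mappings and permissions took effect; a quick check (device names from the example above):

raw -qa                              # list every current raw-device binding
ls -l /dev/raw/raw1 /dev/raw/raw2    # verify oracle:dba ownership and 660 mode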
AIX 6.1 + Grid + RAC + Oracle 11g installation and configuration guide

Architecture summary: AIX 6100-04 + Oracle Grid 11gR2 + Oracle RAC + Oracle Database.
Hostnames: oradb1, oradb2
IP addresses:
10.1.1.70 oradb1
192.168.101.70 oradb1priv
10.1.1.72 oradb1vip
10.1.1.71 oradb2
192.168.101.71 oradb2priv
10.1.1.73 oradb2vip
10.1.1.74 oracrs
Users: root/root, grid/grid, oracle/oracle

Pre-installation checks:
/usr/sbin/lsattr -E -l sys0 -a realmem
/usr/sbin/lsps -a
lsattr -El rhdisk3 -a size_mb

Edit /etc/hosts (vi /etc/hosts) and add:
10.1.1.70 oradb1
192.168.101.70 oradb1priv
10.1.1.72 oradb1vip
10.1.1.71 oradb2
192.168.101.71 oradb2priv
10.1.1.73 oradb2vip
10.1.1.74 oracrs

Install and configure the SSH packages. The SSH prerequisites are on the AIX Toolbox for Linux Applications CD:
openssl-0.9.7g
openssl-devel-0.9.7g
openssl-doc-0.9.7g
and on the expansion pack:
openssh.base
openssh.license
openssh.man.en_US
Fixpacks: IZ39665, IZ29348, IZ55160

Adjust the users' shell limits: in /etc/security/limits set the root and oracle stanzas as follows:
root:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
oracle:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

Tune the system parameters:
lsattr -E -l sys0 -a maxuproc
/usr/sbin/chdev -l sys0 -a maxuproc=16384
/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -p -o udp_sendspace=65536
/usr/sbin/no -p -o udp_recvspace=655360
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o sb_max=1301720

Create the oinstall and dba groups and the users:
mkgroup -A id=1000 oinstall
mkgroup -A id=1200 dba
mkuser id=1100 pgrp=oinstall groups=dba home='/home/grid' grid
mkuser id=1101 pgrp=oinstall groups=dba home='/home/oracle' oracle
mkdir -p /oracle/grid
chown -R grid:oinstall /oracle
mkdir /oracle/app
chown oracle:oinstall /oracle/app
chmod -R 775 /oracle/
passwd grid
passwd oracle
lsuser -a capabilities grid
chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

Verify the oracle user:
# id oracle
uid=500(oracle) gid=202(oinstall) groups=203(dba)
Make sure the IDs are identical on all nodes, and set the passwords with passwd oracle.

Grid installation configuration

1. Set up the ASM devices:
/usr/sbin/chdev -l hdisk2 -a pv=clear
/usr/sbin/chdev -l hdisk3 -a pv=clear
/usr/sbin/chdev -l hdisk4 -a pv=clear
/usr/sbin/chdev -l hdisk5 -a pv=clear
chdev -l hdisk2 -a pv=yes
chdev -l hdisk3 -a pv=yes
chdev -l hdisk4 -a pv=yes
chdev -l hdisk5 -a pv=yes
Prepare the ASM disks:
chown grid:oinstall /dev/rhdisk3
chmod 660 /dev/rhdisk3
chdev -l hdisk3 -a reserve_policy=no_reserve

2. User profile (vi .profile):
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
if [ -s "$MAIL" ]          # This is at Shell startup.  In normal
then echo "$MAILMSG"       # operation, the Shell checks
fi                         # periodically.
export ORACLE_BASE=/oracle/app
export ORACLE_HOME=/oracle/grid
umask 022
PATH=$PATH:/oracle/grid/bin
export PATH
export TEMP=/tmp
export TMPDIR=/tmp

3. On both hosts, configure SSH equivalence for the grid user. Run the commands one at a time; do not paste the whole block at once:
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
Then execute the remaining steps one by one on oradb1, as sketched below.
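The guide stops before the key-distribution step. A common way to finish the grid user's equivalence, assuming the two hostnames above and the default key file names, is (run on oradb1):

# collect both nodes' public keys into one authorized_keys file
ssh oradb1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oradb1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh oradb2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oradb2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# push the combined file to the other node, then verify both directions
scp ~/.ssh/authorized_keys oradb2:~/.ssh/
ssh oradb2 date
ssh oradb1 date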
Oracle 10g RAC for AIX 6.1

Hardware: 2 IBM P520 servers, 2 fibre-channel switches, 1 IBM DS4700 disk array (1.2 TB).
Software: Oracle 10g R2 (to be upgraded to 10.2.0.4), AIX 6.1.

1. IP plan:
racdb1  public: 172.16.28.31   VIP: 172.16.28.33   interconnect: 192.168.100.1
racdb2  public: 172.16.28.32   VIP: 172.16.28.34   interconnect: 192.168.100.2

2. Required filesets:
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat
bos.perf.perfstat
bos.perf.proctools
xlC.aix61.rte:9.0.0.1
xlC.rte:9.0.0.1
Use lslpp -l to check whether a fileset is installed, for example:
# lslpp -l bos.adt.base
Fileset          Level    State      Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.adt.base     6.1.3.0  COMMITTED  Base Application Development Toolkit

Required patches: because AIX 6.1 was brand new, the exact patches needed could not be confirmed online and the installation hit many problems; in the end p6613550_10203_AIX64-5L and p8705958_10204_AIX5L (a CRS PSU fixing the bug where the VIP fails to start after the upgrade) were installed.

3. Resize the filesystems with, for example, # chfs -a size=30G /. Result:
# df -g
Filesystem      GB blocks  Free   %Used  Iused  %Iused  Mounted on
/dev/hd4        30.00      29.79  1%     13717  1%      /
/dev/hd2        10.00      8.03   20%    45904  3%      /usr
/dev/hd9var     10.00      9.76   3%     7296   1%      /var
/dev/hd3        8.00       7.67   5%     608    1%      /tmp
/dev/fwdump     5.00       5.00   1%     4      1%      /var/adm/ras/platform
/dev/hd1        30.00      11.28  63%    53931  2%      /home
/dev/hd11admin  2.00       2.00   1%     5      1%      /admin
/proc           -          -      -      -      -       /proc
/dev/hd10opt    10.00      9.54   5%     9610   1%      /opt
/dev/livedump   5.00       5.00   1%     4      1%      /var/adm/ras/livedump
/dev/lv00       0.25       0.24   4%     18     1%      /var/adm/csd
/dev/fslv00     0.25       0.25   1%     8      1%      /audit

4. On both nodes create the oinstall, dba and hagsuser groups and the oracle user, keeping the group and user IDs identical across nodes:
# smitty group
# smitty user: set Primary GROUP to oinstall, Group SET to dba and hagsuser, and change the limit fields to -1 (unlimited).

5. Edit /etc/hosts and add:
172.16.28.31 racdb1
192.168.100.1 racdb1-priv
172.16.28.33 racdb1-vip
172.16.28.32 racdb2
192.168.100.2 racdb2-priv
172.16.28.34 racdb2-vip

6. Configure the system parameters, the per-user maximum process count and the high/low water marks:
# smitty chgsys

7. Storage layout:
asm   400G  hdisk4
vote  2G    hdisk5
ocr   2G    hdisk6
arch  200G  hdisk7
# lspv
hdisk0  00cbc154bde9ce42  rootvg  active
hdisk1  00cbc1a4cb987b44  rootvg  active
hdisk2  none              None
hdisk3  none              None
hdisk4  none              None
hdisk5  none              None
hdisk6  none              None
hdisk7  none              None
# cd /dev
# chown root:oinstall hdisk6
# chown oracle:oinstall hdisk4 hdisk5
# chmod 664 hdisk4 hdisk5 hdisk6
# chown root:oinstall rhdisk6
# chown oracle:oinstall rhdisk4 rhdisk5
# chmod 664 rhdisk4 rhdisk5 rhdisk6

8. Configure the .rhosts file, which rsh/rcp use for inter-node trust:
# cd /home/oracle
# vi .rhosts
racdb1 oracle
racdb2 oracle
racdb1-priv oracle
racdb2-priv oracle
racdb1-vip oracle
racdb2-vip oracle

9. Configure NTP to keep the two nodes' clocks in sync. First bring the clocks within 1000 seconds of each other with date (NTP refuses to sync across a larger gap); for example, date 0508013030 means May 8, 01:30:30.
racdb1:
# vi /etc/ntp.conf
broadcastclient
server 127.127.1.0    (added)
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
# startsrc -s xntpd
racdb2:
# vi /etc/ntp.conf
broadcastclient
server 172.16.28.31    (added)
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
# startsrc -s xntpd

10. Configure the environment variables:
# cd /home/oracle
# vi .profile
ORACLE_BASE=/home/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0.1/racdb
export ORACLE_HOME
ORACLE_CRS_HOME=$ORACLE_BASE/product/10.2.0.1/crs
export ORACLE_CRS_HOME
ORACLE_SID=rac1
export ORACLE_SID
ORACLE_TERM=xterm
export ORACLE_TERM
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_LANG
LD_LIBRARY_PATH=$ORACLE_CRS_HOME/lib:$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_CRS_HOME/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH

11. Apply patch p6718715_10203_AIX64-5L.zip: run its rootpre.sh as root, then run the CRS installer as oracle with ./runInstaller. Click Next twice, choose /home/oracle/product/10.2.0.1/crs as the CRS home and click Next, let the prerequisite checks pass and click Next, add the two cluster nodes with their public, private and VIP network names and click Next, choose /dev/hdisk6 as the OCR location and click Next, choose /dev/hdisk5 as the voting disk location and click Next, then run the installation to completion.
AIX 7.1 Oracle 11g RAC ASM

How to see what makes up the rootvg mirror (boot list):
# bootlist -o -m normal
hdisk0 blv=hd5 pathid=0
hdisk0 blv=hd5 pathid=1
hdisk1 blv=hd5 pathid=0
hdisk1 blv=hd5 pathid=1
cd0

How to check the number of CPUs:
# lsdev -Cc processor
proc0 Available 00-00 Processor
proc4 Available 00-04 Processor
proc8 Available 00-08 Processor
proc12 Available 00-12 Processor
proc16 Available 00-16 Processor
proc20 Available 00-20 Processor
proc24 Available 00-24 Processor
proc28 Available 00-28 Processor
proc32 Available 00-32 Processor
proc36 Available 00-36 Processor
proc40 Available 00-40 Processor
proc44 Available 00-44 Processor
proc48 Available 00-48 Processor
proc52 Available 00-52 Processor
proc56 Available 00-56 Processor
proc60 Available 00-60 Processor
# bindprocessor -q
The available processors are: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
# prtconf | grep Processors
Number Of Processors: 16

How to find the WWNs:
# lsdev -Cc adapter -S a | grep fcs
fcs0 Available 04-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 04-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2 Available 05-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs3 Available 05-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
# lscfg -vpl fcs0
fcs0 U78AA.001.WZSKJYT-P1-C2-T1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
Part Number.................00E0806
Serial Number...............1A4080061C
Manufacturer................001A
EC Level....................D77161
Customer Card ID Number.....577D
FRU Number..................00E0806
Device Specific.(ZM)........3
Network Address.............10000090FA67C1CA
ROS Level and ID............027820B7
Device Specific.(Z0)........31004549
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........09030909
Device Specific.(Z4)........FF781150
Device Specific.(Z5)........027820B7
Device Specific.(Z6)........077320B7
Device Specific.(Z7)........0B7C20B7
Device Specific.(Z8)........20000120FA67C1CA
Device Specific.(Z9)........2.02X7
Device Specific.(ZA)........U2D2.02X7
Device Specific.(ZB)........U3K2.02X7
Device Specific.(ZC)........00000000
PLATFORM SPECIFIC
Name: fibre-channel
Model: 00E0806
Node: fibre-channel@0
Device Type: fcp
# lscfg -vpl fcs1
The fcs1 output is identical except for Network Address.............10000090FA67C1CB, Device Specific.(Z8)........20000120FA67C1CB, and Node: fibre-channel@0,1. The Network Address field is the adapter's WWN.

Check the memory size:
# lsattr -El mem0
ent_mem_cap           I/O memory entitlement in Kbytes            False
goodsize        63232 Amount of usable physical memory in Mbytes  False
mem_exp_factor        Memory expansion factor                     False
size            63232 Total amount of physical memory in Mbytes   False
var_mem_weight        Variable memory capacity weight             False
# lsdev -Cc memory

Check the ASM disks:
select GROUP_NUMBER,DISK_NUMBER,TOTAL_MB,FREE_MB,path from V$ASM_DISK;

GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB PATH
           1           0     112640     108634 /dev/rhdiskpower1
           1           1     153600     148149 /dev/rhdiskpower10
           1           2     163840     158019 /dev/rhdiskpower11
           1           3     174080     167900 /dev/rhdiskpower12
           1           4     184320     177772 /dev/rhdiskpower13
           1           5     194560     187652 /dev/rhdiskpower14
           1           6     204800     197531 /dev/rhdiskpower15
           1           7     225280     217274 /dev/rhdiskpower16
           1           8     235520     227154 /dev/rhdiskpower17
           1           9     143360     138261 /dev/rhdiskpower2
           1          10     215040     207395 /dev/rhdiskpower3
           2           0       5120       4724 /dev/rhdiskpower5
           1          11     102400      98761 /dev/rhdiskpower7
           1          12     122880     118515 /dev/rhdiskpower8
           1          13     133120     128382 /dev/rhdiskpower9

17 rows selected.

# powermt display dev=all
Every device reports state=alive, policy=CLAROpt, owner SP A (default and current), array failover mode 4, and two live paths (one on fscsi0 to SP A1, one on fscsi2 to SP B1) with 0 queued I/Os and 0 errors. VNX ID=FCN00141200036 [IBM POWER740]. Summarized:

Pseudo name   LUN      Logical device ID                 fscsi0 path  fscsi2 path
hdiskpower0   CRS2     600601607BF038007138E28678FEE311  hdisk4       hdisk28
hdiskpower1   DATA2    600601607BF03800EA9BE11779FEE311  hdisk5       hdisk29
hdiskpower2   DATA5    600601607BF0380008A51F3079FEE311  hdisk6       hdisk30
hdiskpower3   DATA12   600601607BF0380080A25E6779FEE311  hdisk7       hdisk31
hdiskpower4   ARCH2    600601607BF03800FA516CC278FEE311  hdisk8       hdisk32
hdiskpower5   CRS1     600601607BF038007038E28678FEE311  hdisk9       hdisk33
hdiskpower6   CRS3     600601607BF038007238E28678FEE311  hdisk10      hdisk34
hdiskpower7   DATA1    600601607BF038004A0E320E79FEE311  hdisk11      hdisk35
hdiskpower8   DATA3    600601607BF03800A64A6E2079FEE311  hdisk12      hdisk36
hdiskpower9   DATA4    600601607BF03800F601412879FEE311  hdisk13      hdisk37
hdiskpower10  DATA6    600601607BF038004C99743979FEE311  hdisk14      hdisk38
hdiskpower11  DATA7    600601607BF03800228CE84279FEE311  hdisk15      hdisk39
hdiskpower12  DATA8    600601607BF0380024FE984B79FEE311  hdisk16      hdisk40
hdiskpower13  DATA9    600601607BF038001A53695379FEE311  hdisk17      hdisk41
hdiskpower14  DATA10   600601607BF038003C27765A79FEE311  hdisk18      hdisk42
hdiskpower15  DATA11   600601607BF03800F203C96079FEE311  hdisk19      hdisk43
hdiskpower16  DATA13   600601607BF0380010784C6E79FEE311  hdisk20      hdisk44
hdiskpower17  DATA14   600601607BF038006A15A77679FEE311  hdisk21      hdisk45
hdiskpower19  ARCH1    6006016090B03800484C45D6C5FEE311  hdisk3       hdisk27
hdiskpower20  Backup1  6006016090B0380076D3B562F300E411  hdisk46      hdisk48
hdiskpower21  Backup2  6006016090B038002EC8566FF300E411  hdisk47      hdisk49

lsattr -E -l hdiskpower0
rmdev -dl hdiskpower19

chdev -l hdiskpower0 -a pv=yes
chdev -l hdiskpower1 -a pv=yes
chdev -l hdiskpower2 -a pv=yes
chdev -l hdiskpower3 -a pv=yes
chdev -l hdiskpower4 -a pv=yes
chdev -l hdiskpower5 -a pv=yes
chdev -l hdiskpower6 -a pv=yes
chdev -l hdiskpower7 -a pv=yes
chdev -l hdiskpower8 -a pv=yes
chdev -l hdiskpower9 -a pv=yes
chdev -l hdiskpower10 -a pv=yes
chdev -l hdiskpower11 -a pv=yes
chdev -l hdiskpower12 -a pv=yes
chdev -l hdiskpower13 -a pv=yes
chdev -l hdiskpower14 -a pv=yes
chdev -l hdiskpower15 -a pv=yes
chdev -l hdiskpower16 -a pv=yes
chdev -l hdiskpower17 -a pv=yes
chdev -l hdiskpower19 -a pv=yes
chdev -l hdiskpower20 -a pv=yes
chdev -l hdiskpower21 -a pv=yes
chdev -l hdiskpower19 -a pv=clear

chdev -l hdiskpower0 -a reserve_policy=no_reserve
chdev -l hdiskpower1 -a reserve_policy=no_reserve
chdev -l hdiskpower2 -a reserve_policy=no_reserve
chdev -l hdiskpower3 -a reserve_policy=no_reserve
chdev -l hdiskpower4 -a reserve_policy=no_reserve
chdev -l hdiskpower5 -a reserve_policy=no_reserve
chdev -l hdiskpower6 -a reserve_policy=no_reserve
chdev -l hdiskpower7 -a reserve_policy=no_reserve
chdev -l hdiskpower8 -a reserve_policy=no_reserve
chdev -l hdiskpower9 -a reserve_policy=no_reserve
chdev -l hdiskpower10 -a reserve_policy=no_reserve
chdev -l hdiskpower11 -a reserve_policy=no_reserve
chdev -l hdiskpower12 -a reserve_policy=no_reserve
chdev -l hdiskpower13 -a reserve_policy=no_reserve
chdev -l hdiskpower14 -a reserve_policy=no_reserve
chdev -l hdiskpower15 -a reserve_policy=no_reserve
chdev -l hdiskpower16 -a reserve_policy=no_reserve
chdev -l hdiskpower17 -a reserve_policy=no_reserve
chdev -l hdiskpower19 -a reserve_policy=no_reserve
chdev -l hdiskpower20 -a reserve_policy=no_reserve
chdev -l hdiskpower21 -a reserve_policy=no_reserve

chown grid:asmadmin /dev/rhdiskpower0
chown grid:asmadmin /dev/rhdiskpower1
chown grid:asmadmin /dev/rhdiskpower2
chown grid:asmadmin /dev/rhdiskpower3
chown grid:asmadmin /dev/rhdiskpower5
chown grid:asmadmin /dev/rhdiskpower6
chown grid:asmadmin /dev/rhdiskpower7
chown grid:asmadmin /dev/rhdiskpower8
chown grid:asmadmin /dev/rhdiskpower9
chown grid:asmadmin /dev/rhdiskpower10
chown grid:asmadmin /dev/rhdiskpower11
chown grid:asmadmin /dev/rhdiskpower12
chown grid:asmadmin /dev/rhdiskpower13
chown grid:asmadmin /dev/rhdiskpower14
chown grid:asmadmin /dev/rhdiskpower15
chown grid:asmadmin /dev/rhdiskpower16
chown grid:asmadmin /dev/rhdiskpower17
chmod 660 /dev/rhdiskpower0
chmod 660 /dev/rhdiskpower1
chmod 660 /dev/rhdiskpower2
chmod 660 /dev/rhdiskpower3
chmod 660 /dev/rhdiskpower5
chmod 660 /dev/rhdiskpower6
chmod 660 /dev/rhdiskpower7
chmod 660 /dev/rhdiskpower8
chmod 660 /dev/rhdiskpower9
chmod 660 /dev/rhdiskpower10
chmod 660 /dev/rhdiskpower11
chmod 660 /dev/rhdiskpower12
chmod 660 /dev/rhdiskpower13
chmod 660 /dev/rhdiskpower14
chmod 660 /dev/rhdiskpower15
chmod 660 /dev/rhdiskpower16
chmod 660 /dev/rhdiskpower17

chdev -l hdiskpower5 -a reserve_policy=no_reserve
# /usr/sbin/lsattr -E -l hdiskpower5
PR_key_value    none              Reserve Key                                     True
clr_q           no                Clear Queue (RS/6000)                           True
location                          Location                                        True
lun_id          0x6000000000000   LUN ID                                          False
lun_reset_spt   yes               FC Forced Open LUN                              True
max_coalesce    0x100000          Maximum coalesce size                           True
max_retries     5                 Maximum Retries                                 True
max_transfer    0x100000          Maximum transfer size                           True
pvid            00f933f9c497fa560000000000000000  Physical volume identifier      False
pvid_takeover   yes               Takeover PVIDs from hdisks                      True
q_err           yes               Use QERR bit                                    True
q_type          simple            Queue TYPE                                      False
queue_depth     32                Queue DEPTH                                     True
reassign_to     120               REASSIGN time out value                         True
reserve_policy  no_reserve        Reserve Policy used to reserve device on open.  True
reset_delay     2                 Reset Delay                                     True
rw_timeout      30                READ/WRITE time out                             True
scsi_id         0x10600           SCSI ID                                         False
start_timeout   60                START unit time out                             True
ww_name         0x500601693ee0648b  World Wide Name                               False
The key attribute is reserve_policy no_reserve.
lsattr -E -l /dev/rhdiskpower0
lsattr -E -l /dev/rhdisk3
To check only the reserve policy: lsattr -E -l rhdiskpower0 | grep reserve

# lsattr -El sys0 -a realmem
realmem 64749568 Amount of usable physical memory in Kbytes False
# lsps -a
Page Space  Physical Volume  Volume Group  Size     %Used  Active  Auto  Type  Chksum
hd6         hdisk0           rootvg        65536MB  0      yes     yes   lv    0
# lsvg rootvg
VOLUME GROUP: rootvg          VG IDENTIFIER: 00f933f900004c0000000146e2225531
VG STATE: active              PP SIZE: 512 megabyte(s)
VG PERMISSION: read/write     TOTAL PPs: 1116 (571392 megabytes)
MAX LVs: 256                  FREE PPs: 808 (413696 megabytes)
LVs: 13                       USED PPs: 308 (157696 megabytes)
OPEN LVs: 12                  QUORUM: 1 (Disabled)
TOTAL PVs: 2                  VG DESCRIPTORS: 3
STALE PVs: 0                  STALE PPs: 0
ACTIVE PVs: 2                 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016          MAX PVs: 32
LTG size (Dynamic): 1024 kilobyte(s)   AUTO SYNC: no
HOT SPARE: no                 BB POLICY: relocatable
PV RESTRICTION: none          INFINITE RETRY: no
DISK BLOCK SIZE: 512
# lsvg -l rootvg
rootvg:
LV NAME     TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5         boot     1    2    2    closed/syncd  N/A
hd6         paging   128  256  2    open/syncd    N/A
hd8         jfs2log  1    2    2    open/syncd    N/A
hd4         jfs2     2    4    2    open/syncd    /
hd2         jfs2     6    12   2    open/syncd    /usr
hd9var      jfs2     1    2    2    open/syncd    /var
hd3         jfs2     4    8    2    open/syncd    /tmp
hd1         jfs2     1    2    2    open/syncd    /home
hd10opt     jfs2     1    2    2    open/syncd    /opt
hd11admin   jfs2     1    2    2    open/syncd    /admin
fwdump      jfs2     3    6    2    open/syncd    /var/adm/ras/platform
lg_dumplv   sysdump  8    8    1    open/syncd    N/A
livedump    jfs2     1    2    2    open/syncd    /var/adm/ras/livedump

chps -s 32 hdisk0
chfs -a size=30G /tmp
chfs -a size=10G /home
chfs -a size=5G /
chfs -a size=+10G /u01
mklv -t jfs2 -y u01lv rootvg 200
crfs -v jfs2 -d /dev/u01lv -m /u01
mount /u01

lsattr -El /dev/hdiskpower0
lsattr -El /dev/rhdiskpower0

ioo -o aio_maxreqs
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

vi + /etc/security/limits
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

no -r -o ipqmaxlen=512
no -p -o rfc1323=1
no -p -o sb_max=1500000
no -p -o tcp_recvspace=65536
no -p -o tcp_sendspace=65536
no -p -o udp_recvspace=1351680
no -p -o udp_sendspace=13516
/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o sb_max=131072
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o udp_recvspace=65530
/usr/sbin/no -p -o udp_sendspace=65536

Cluster /etc/hosts entries:
#public ip
192.168.100.103 sqwsjdb01
192.168.100.104 sqwsjdb02
#Private ip
10.10.10.1 sqwsjdb01-priv
10.10.10.2 sqwsjdb02-priv
#Virtual ip
192.168.100.101 sqwsjdb01-vip
192.168.100.102 sqwsjdb02-vip
#Scan ip
192.168.100.100 rac-scan

lsdev -Cc adapter

Check the ssh service status:
# lssrc -s sshd
Subsystem  Group  PID      Status
sshd       ssh    3866742  active
Stop and start the ssh service:
# stopsrc -s sshd
# startsrc -s sshd
Do this from a console connection; if you stop sshd over an ssh session the connection drops and you cannot get back in.
Oracle RAC: adding an ASM disk and creating a tablespace

Test environment:
Virtualization software: Oracle VirtualBox 4.3.8
Database software: Clusterware 10.2.0.1 + Database 10.2.0.1
Database name: OracleRAC
Instance SIDs: OracleRA1, OracleRA2
Node hostnames: rac1, rac2
Virtual machine names: CentOS_Oracle_2, CentOS_Oracle_3

1. Shared disk setup

1.1 Create a virtual disk on one VM (both VMs must be powered off; here it is created on CentOS_Oracle_2). In the VM list, right-click CentOS_Oracle_2, choose Settings, and create a new disk. Because the disk is fixed-size, the space is allocated as soon as you click Create, so a large disk can take some time. When it finishes, click OK to leave the Settings dialog.

1.2 Mark the virtual disk shareable, then close the Virtual Media Manager.

1.3 Attach the virtual disk to the second virtual machine.
Right-click CentOS_ORACLE_3 and choose Settings.
Confirm the virtual disk has been attached and click OK to finish the shared-disk setup.

2. Create the ASM disk

2.1 Start one VM and partition the disk. Connect to rac1 with Xshell and locate the newly added disk; its device path is /dev/sdg. Partition it as needed; here a single partition using all available space is created. Then start the second VM, connect with Xshell, and confirm it sees the new partition.

2.2 Create the ASM disk. It only needs to be created on one node; the other node just rescans the ASM disk list. Create it on rac1.
First list the existing ASM disk names.
As root, run /etc/init.d/oracleasm listdisks. Three ASM disks already exist, and the new disk must not reuse an existing name.
So the new disk is named VOL4 (all-uppercase letters plus a digit is the recommended pattern). As root, run /etc/init.d/oracleasm createdisk VOL4 /dev/sdg1, then list the ASM disks again.
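On the second node the new disk only needs a rescan rather than a new createdisk; a minimal sketch:

# on rac2, as root
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks    # VOL4 should now be listed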
Oracle RAC + ASM + Data Guard configuration lab notes and common problems

1. Environment plan

RAC environment (primary database):
                rac1              rac2
public ip       192.168.110.11    192.168.110.12
virtual ip      192.168.110.21    192.168.110.22
instance        racdb1            racdb2
db_name         racdb
storage mode    ASM

Single-instance environment (standby database). The data files can live locally or in ASM; this lab starts with them on the local filesystem:
ip              192.168.110.13
instance        rac3
storage mode    /oradata/racdb

hosts file:
#Public Network - (eth0)
192.168.110.11 rac1
192.168.110.12 rac2
192.168.110.13 rac3
#Private Interconnect - (eth1)
10.10.10.11 rac1priv
10.10.10.12 rac2priv
#Public Virtual IP (VIP) addresses - (eth0)
192.168.110.21 rac1vip
192.168.110.22 rac2vip

Check the environment:
1) Enable archivelog mode:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     54
Next log sequence to archive   56
Current log sequence           56
SQL> show parameter recovery
NAME                         TYPE         VALUE
---------------------------- ------------ -------------
db_recovery_file_dest        string       +DG_RECOVERY
db_recovery_file_dest_size   big integer  2G
recovery_parallelism         integer      0

2) Enable FORCE LOGGING:
SQL> alter database FORCE LOGGING;
Database altered.
SQL> select FORCE_LOGGING from v$database;
FOR
---
YES

2. Configure tnsnames.ora and listener.ora for both databases.

tnsnames.ora (identical on both hosts):
racdb_rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.110.21)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb_s)
      (SERVICE_NAME = racdb1)
    )
  )
racdb_rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.110.22)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb_s)
      (SERVICE_NAME = racdb2)
    )
  )
racdb_standby =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.110.13)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
    )
  )

listener.ora on the standby host:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = racdb)
      (ORACLE_HOME = /oracle/app/product/10.2.0/db_1)
      (SID_NAME = racdb)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = PLSExtProc)
      (ORACLE_HOME = /oracle/app/product/10.2.0/db_1)
      (SID_NAME = PLSExtProc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.110.13)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )

3. Prepare the parameter files. The RAC primary needs additional parameters. (Note: when using ASM, do not change db_unique_name; otherwise new ASM files are created under a directory named after the new db_unique_name and DB_FILE_NAME_CONVERT no longer matches.)
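The section breaks off before the primary's parameter list. For this topology, a typical minimal set of primary-side Data Guard parameters is sketched below; the standby db_unique_name (racdb_stb) and the +DG_DATA disk group path are assumptions for illustration, not values from the original.

*.log_archive_config='DG_CONFIG=(racdb,racdb_stb)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=racdb'
*.log_archive_dest_2='SERVICE=racdb_standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=racdb_stb'
*.fal_server='racdb_standby'
*.standby_file_management='AUTO'
*.db_file_name_convert='/oradata/racdb/','+DG_DATA/racdb/datafile/'
*.log_file_name_convert='/oradata/racdb/','+DG_DATA/racdb/onlinelog/'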
SuSE 11 SP1 / Oracle 11gR2 ASM + RAC configuration steps

1. Environment

1.1 Hardware
IBM 3755: AMD Processor 8380 2.5 GHz x 16, 8 GB RAM, dual NICs
IBM X366: Xeon CPU 3.00 GHz x 8, 8 GB RAM, dual NICs
IBM FAS600 array: 146.8 GB x 13, dual HBA cards
Network topology: (diagram omitted)

1.2 Software
Operating system: SuSE 11 SP1 x86_64, kernel 2.6.32.12-0.7-default
Database: Oracle Database Enterprise Edition 11.2.0.1 for Linux x86_64
Clusterware: Oracle Grid 11.2.0.1 for Linux x86_64

Notes:
1. Oracle 11g can only be installed on SLES 10 or later.
2. The clusterware version must be no lower than the database version; third-party cluster software such as VCS is not recommended for clustering Oracle databases. 11g clusterware cannot manage a 9i database.
3. A 64-bit database is recommended: it can use a large SGA, which improves performance noticeably.
4. From 11g on, consider Oracle ASM instead of Linux LVM for storage management. ASM runs as a separate database instance, and each OS can have only one.
2. Environment checks

2.1 Hardware
Memory: at least 1 GB: # grep MemTotal /proc/meminfo
Swap: 8 GB here (with 1-2 GB of RAM, swap should be 1.5x RAM; above 2 GB, swap equal to RAM): # grep SwapTotal /proc/meminfo
/tmp: more than 500 MB, preferably at least 1 GB: # df -k /tmp
/home: more than 500 MB, preferably at least 1 GB: # df -k /home
System disk: at least 6 GB free: # df -h
Do not use a firewall or SELinux during installation. Do not connect the private NICs with a crossover cable.

2.2 Software
SuSE Linux needs kernel 2.6.16.21 or later. Check packages with: # rpm -q package_name
Remove the extra packages below (left installed, they seriously interfere with the subsequent Oracle installation and configuration):
# rpm -qa ora*
orarun-1.9-21.15
# rpm -qa sap*
sapinit-2.0.1-1.10
# rpm -e orarun-1.9-21.15
# rpm -e sapinit-2.0.1-1.10
# rm -i /etc/oraInst.loc

SuSE Linux needs at least the following packages, at no lower than these versions:
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32-bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32-bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32-bit)
glibc-headers-2.5
libaio-0.3.106
libaio-0.3.106 (32-bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32-bit)
libgcc-4.1.2
libgcc-4.1.2 (32-bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32-bit)
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32-bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32-bit)

These three must be downloaded from the official site and are mandatory:
oracleasm-2.6.16.60-0.21-bigsmp-2.0.4-1.SLE10.i586.rpm
oracleasm-support-2.1.3-1.SLE10.i386.rpm
oracleasmlib-2.0.4-1.SLE10.i386.rpm

3. Pre-installation preparation

3.1 Installation media (downloadable from Oracle's site):
linux.x64_11gR2_grid.zip
linux.x64_11gR2_database_1of2.zip
linux.x64_11gR2_database_2of2.zip

3.2 Disk planning: after the disks are carved up, both nodes must see them and be able to read and write them correctly.
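A single rpm invocation can check the whole prerequisite list at once; a sketch (trim the names to your release):

# any line reading "package ... is not installed" marks a missing prerequisite
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    gcc gcc-c++ glibc glibc-devel libaio libaio-devel libgcc libstdc++ \
    libstdc++-devel make sysstat unixODBC unixODBC-devel | grep "not installed"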
Implementing ASM for Oracle 10g RAC on IBM AIX (draft)
In Oracle 10g RAC, HACMP is no longer required: Oracle supplies its own clusterware, CRS (Oracle Cluster Ready Services), and 10g RAC must use CRS.
The main ways to implement 10g RAC are:
1. Raw devices + ASM (Automatic Storage Management)
2. GPFS (General Parallel File System)
3. Raw devices + HACMP
Deployment in a LAN
Two or more nodes
Hardware layout
AIX 5.2/5.3 is certified by Oracle. If HACMP is installed but the raw devices + HACMP approach is not being used, HACMP must be uninstalled.
In a real production environment a gigabit switch is mandatory: it carries the cache fusion traffic.
Example network layout:
The public network interface must have the same name on every node, e.g. en0.
The private network interface must have the same name on every node, e.g. en1.
AIX virtual I/O disks can be used for $CRS_HOME or $ORACLE_HOME, but not for the OCR, voting disk, or data files.
On the storage side, the IBM DS4000, DS6000 and DS8000 series are supported with 10g RAC.
For details see /servers/storage/product/products_pseries.html
EMC and other vendors' storage is also supported.
Overall RAC installation steps:
Database creation steps:
Hardware requirements:
At least 1 GB of physical memory: lsattr -El sys0 -a realmem
400 MB to 2 GB of swap space: lsps -a
At least 400 MB of free space in /tmp: df -m /tmp
Operating system AIX 5.2 ML04 or later, or AIX 5.3 ML02 or later: oslevel -r
The required filesets and APARs must be installed on the OS.
Confirm the PTFs are installed; if not, download and install them from /eserver/support/fixes/. Filesets and APARs required by ASM:
Required Oracle installation media:
How to unpack:
Check that the oracle user's resource limits have been lifted:
ulimit -f: the file size limit
ulimit -a: all limits
Check or edit /etc/security/limits to lift the limits for new users.
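A minimal sketch of the oracle stanza in /etc/security/limits with the limits lifted (the values mirror those used elsewhere in this document):

oracle:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1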
Use passwd oracle to set the same oracle password on every node.
Configure the kernel parameters and shell limits on every node.
You can use smit chuser.
Raise the per-user maximum process count:
smit chgsys: set "Maximum number of PROCESSES allowed per user" to 2048 or more.
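The same change can be made from the command line instead of the smit panel:

# check, then raise, the per-user process limit
lsattr -E -l sys0 -a maxuproc
chdev -l sys0 -a maxuproc=2048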
Configure the network.
The public node names (here node1, node2, node3) must match the output of hostname. Check that every node's hosts file contains:
# Public Network
10.3.25.81 node1
10.3.25.82 node2
10.3.25.83 node3
# Virtual IP address
10.3.25.181 node1_vip
10.3.25.182 node2_vip
10.3.25.183 node3_vip
# Interconnect RAC
10.10.25.81 node1_rac
10.10.25.82 node2_rac
10.10.25.83 node3_rac
Set the default gateway on the public network interface (en0 here).
smitty chinet: set BROADCAST ADDRESS to 10.3.25.254.
Check and configure /etc/hosts.equiv on each node, and $HOME/.rhosts in the root and oracle home directories, as follows:
node1 root
node2 root
node3 root
node1 oracle
node2 oracle
node3 oracle
For security reasons, do not put a bare + in hosts.equiv or .rhosts.
Note that the node names are the hostname output. Example commands:
rsh node2 date
echo aa>1.tmp
rcp 1.tmp node2:/tmp
rlogin node2
Configure the oracle user environment on every node: log in as oracle, run vi $HOME/.profile, and add the following:
#oracle environment
export ORACLE_BASE=/u01/app/oracle
export ORACLE_CRS=$ORACLE_BASE/crs
export ORACLE_CRS_HOME=$ORACLE_BASE/crs
export ORACLE_HOME=$ORACLE_BASE/10.2.0
export LD_LIBRARY_PATH=$ORACLE_CRS/lib:$ORACLE_CRS/lib32:$ORACLE_HOME/lib:/usr/lib:$ORACLE_HOME/lib32
export PATH=$ORACLE_CRS/bin:$ORACLE_HOME/bin:$PATH
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
export ORA_DB=$ORACLE_HOME/dbs
export CLASSPATH=$ORACLE_HOME/jlib
export ORACLE_SID=asmdb1    # asmdb2 on the second node
export ORACLE_TERM=vt100
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export AIXTHREAD_SCOPE=S
umask 022
export TEMP=/tmp
export TMPDIR=/tmp
export PS1=`whoami`@`hostname`:'$PWD'$
#set -o vi
#export DISPLAY=localhost:0.0
The S stands for system-wide thread scope.
Create the filesystem for the Oracle software; taking 6 GB as an example, reference commands:
lsdev -Cc disk | grep SCSI
mkvg -f -y'oraclevg' -s'32' hdisk1
crfs -v jfs2 -a bf=true -g'oraclevg' -a size='8388608' -m'/u01' -A'yes' -p'rw' -t'no' -a nbpi='8192' -a ag='64'
mount /u01
chown oracle:dba /u01
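A quick check that the new filesystem is in place:

df -g /u01          # the new filesystem should show roughly the size created above
lsvg -l oraclevg    # the new jfs2 logical volume should be open/syncd on /u01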
ASM implementation:
Storage layout: to be agreed with the hardware engineers based on the current application requirements.
Configure clock synchronization on all nodes:
smit xntpd
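smit xntpd drives the same xntpd setup shown earlier in this document; a minimal sketch, with 10.3.25.81 (node1) assumed as the time source:

# /etc/ntp.conf on node1 (serve its local clock)
server 127.127.1.0
driftfile /etc/ntp.drift
# /etc/ntp.conf on node2 and node3
server 10.3.25.81
driftfile /etc/ntp.drift
# then on every node
startsrc -s xntpd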
Upload the installation media to /backup on node 1 and unpack it.
Run the CVU verification before installing; expect it to report plenty of errors at first.
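In 10g the check is run with runcluvfy.sh from the unpacked clusterware media; a typical pre-CRS invocation for the three nodes above (a sketch) is:

# as oracle, from the unpacked CRS media directory
./runcluvfy.sh stage -pre crsinst -n node1,node2,node3 -verbose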
Install Cluster Ready Services (CRS).
Install the Oracle 10g RAC 10.2.0.1 software and upgrade it to 10.2.0.2.
Create the database with dbca.
Tune the setup.
Run start/stop and other tests.
Provide the client connection details.