Error when deploying RAC #3
I've just destroyed an instance and I'm trying it again now, in case an update to the 'bento/oracle-7.5' box has altered anything. It's going to take a couple of hours to complete, but I should get to this point quite quickly. Note: when you destroy the current setup, make sure none of the shared disks are left behind. If they are, they will be reused, and that would be a problem. :)
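That teardown advice can be sketched as a short shell check. The disk-image names and location are assumptions on my part (VirtualBox-style `asm-disk*.vdi` files next to the Vagrantfile); adjust them to match your configuration:

```shell
# Teardown sketch: destroy the VMs, then verify no shared disks survive.
# Assumption: the shared ASM disks are .vdi files in the build directory.
SHARED_DISK_DIR="${SHARED_DISK_DIR:-.}"

# vagrant destroy -f          # uncomment to destroy node1, node2 and dns

leftover=$(find "$SHARED_DISK_DIR" -maxdepth 1 -name 'asm-disk*.vdi')
if [ -n "$leftover" ]; then
  echo "Leftover shared disks (delete these before rebuilding):"
  echo "$leftover"
else
  echo "No leftover shared disks."
fi
```

If leftover images are reported, remove them before the next `vagrant up`, otherwise the new build reuses the old ASM headers.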
Just checked. I got this, which was expected.
Just expected warnings. It does seem like there is something wrong with your shared disks. I would suggest:
Also I would check:
Cheers Tim... PS. I'm going to update the box to 'bento/oracle-7.6' and try it. The box only came out recently, so I'm not sure if it works yet.
Got it, let me double-check to make sure the shared disks are destroyed as well. Thanks for the quick feedback!
What OS are you using as the host?
I'm using macOS. I just found out there were indeed some leftover shared disks. I've removed them and will rerun the script soon.
OK. Let me know how you get on. I've added a clarification to the README.txt about a message at the end of the node2 build, and I'll put one in about the disk cleanup. The build with the 'bento/oracle-7.6' box is at the ASM config stage (post root scripts), but I won't commit that change until I've seen it complete.
18c build using 'bento/oracle-7.6' completed successfully. Change committed. Trying now with 12.2. |
Tim,
I just saw your note that it completed successfully, but I'm getting an ORA-12547: TNS:lost contact error.
How much RAM do you have on your Mac? I'm wondering if there is a resource problem on your kit. Things get really slow if there is a lack of memory. The build expects 21G just for the 3 VMs, not counting some left over for the host OS, so it's really only possible if you have 32G of RAM. If the host is swapping, things aren't going to go well. I've done this with 32G on my Windows 8 laptop and 24G on a Linux server, and I'm going to try now on a 16G MBP. I'll reduce the memory size of the VMs for that though. Cheers Tim...
I have 16G in total, but have allocated 3.2G for each node. Let me give 6G per node and see. |
OK. That's not going to work. I think you should try: DNS: 1024. That's 14G, leaving 2G for the host. You need a little extra on node 1 as it is running the installation. I have no idea if this will work. It's not a lot of memory for a RAC installation.
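For concreteness, here is one possible split matching the figures in this thread (1024 MB for DNS, 14G across the three VMs, 2G left for the host). The per-node numbers are my assumption, not values stated in the thread:

```shell
# Hypothetical memory budget for a 16G host (values in MB).
# Only DNS=1024 and the 14G VM total come from the thread; the node
# split below is illustrative, with node1 getting extra because it
# drives the installation.
DNS_MEM=1024
NODE1_MEM=7168
NODE2_MEM=6144
TOTAL=$(( DNS_MEM + NODE1_MEM + NODE2_MEM ))
echo "VM total: ${TOTAL} MB ($(( TOTAL / 1024 ))G)"    # 14336 MB = 14G
echo "Left for host: $(( 16384 - TOTAL )) MB"          # 2048 MB = 2G
```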
Got it. Let me give that a try. Thanks again!
The 12.2 build went fine too.
I'm trying an 18.3 build on an MBP with 16G RAM now, using the settings I suggested to you. Fingers crossed. :)
Just completed the build on a 2014 MBP running macOS Mojave with 16G RAM. Worked fine. Actually a lot quicker than I expected.
My build just completed. I tried as you recommended. Previously the machine froze as soon as the Grid installation began, but this time it worked. Thanks a lot Tim.
OK. Great. I'll close this issue.
Hello Tim,
I hope you are having a great day.
Quick question: how do I deploy two databases with the same version? Let's say I need two 12.2.0.1 databases. I'm trying to play with GoldenGate and need a second database.
Thanks in advance,
Kwa
A single directory is only for deploying a single server, so I would expect you to do something like this:
1. Copy the whole directory to create a new one.
2. Remember to remove the ".vagrant" directory from the new copy.
3. Edit the config, making sure there aren't port clashes etc.
You can see the sort of thing I do under the dataguard directory, where I have two nodes. If you are talking GoldenGate between two RAC databases, then you will need a duplicate of the whole RAC setup, allowing you to create a second RAC.
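A minimal shell sketch of that copy-and-clean process, assuming the build lives in a directory such as `ol7_122` (the directory names here are hypothetical):

```shell
# Duplicate a single-server build directory for a second deployment.
clone_build() {
  src=$1
  dst=$2
  cp -r "$src" "$dst"
  rm -rf "$dst/.vagrant"   # the copy must not share Vagrant machine state
  # Before `vagrant up`, edit the config in "$dst" so forwarded ports,
  # VM names and IPs don't clash with the original.
}

# Example (assumes an existing build directory named ol7_122):
# clone_build ol7_122 ol7_122_gg
```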
Thanks a lot, Tim, for the prompt response. I actually tried the Data Guard option but was having all kinds of problems. I may need to update the Vagrant boxes from the source.
Thanks again!!
Hello Tim,
I have run this script multiple times. Single instances and Data Guard deployments work without any issues. However, I have not been able to create a RAC on 12 or 18c. The errors below are just driving me crazy.
The dns and node2 builds work perfectly, but when I start node1, I get an error that the disks are not unique, even after I destroy and redeploy. I think the rest of the errors are just a chain reaction from the first failure. Please advise.
```
default: Installed:
default:   cvuqdisk.x86_64 0:1.0.10-1
default: Complete!
default: ******************************************************************************
default: Do grid software-only installation. Sun Dec 30 06:21:30 UTC 2018
default: ******************************************************************************
default: Launching Oracle Grid Infrastructure Setup Wizard...
default: [FATAL] [INS-30516] Please specify unique disk groups.
default:    CAUSE: Installer has detected that the diskgroup name provided already exists on the system.
default:    ACTION: Specify different disk group.
default: [FATAL] [INS-30530] Following specified disks have invalid header status: [/dev/oracleasm/asm-disk1, /dev/oracleasm/asm-disk3, /dev/oracleasm/asm-disk4]
default:    ACTION: Ensure only Candidate or Provisioned disks are specified.
default: ******************************************************************************
default: Run grid root scripts. Sun Dec 30 06:22:23 UTC 2018
default: ******************************************************************************
default: sh: /u01/app/oraInventory/orainstRoot.sh: No such file or directory
default: sh: /u01/app/oraInventory/orainstRoot.sh: No such file or directory
default: Check /u01/app/18.0.0/grid/install/root_ol7-183-rac1.localdomain_2018-12-30_06-22-23-942200686.log for the output of root script
default: sh: /u01/app/18.0.0/grid/root.sh: No such file or directory
default: ******************************************************************************
default: Do grid configuration. Sun Dec 30 06:22:24 UTC 2018
default: ******************************************************************************
default: Launching Oracle Grid Infrastructure Setup Wizard...
default: [FATAL] [INS-32603] The central inventory was not detected.
default:    ACTION: The -executeConfigTools flag can only be used for an Oracle home software that has been already installed using the configure or upgrade options. Ensure that the orainstRoot.sh script, from the inventory location, has been executed.
default: ******************************************************************************
default: Check cluster configuration. Sun Dec 30 06:22:27 UTC 2018
default: ******************************************************************************
default: /vagrant/scripts/oracle_grid_software_config.sh: line 45: /u01/app/18.0.0/grid/bin/crsctl: No such file or directory
default: ******************************************************************************
default: Unzip database software. Sun Dec 30 06:22:27 UTC 2018
default: ******************************************************************************
default: ******************************************************************************
default: Do database software-only installation. Sun Dec 30 06:24:40 UTC 2018
default: ******************************************************************************
default: Launching Oracle Database Setup Wizard...
default: [FATAL] [INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
default:    CAUSE: Before you can install Oracle RAC, you must install Oracle Grid Infrastructure (Oracle Clusterware and Oracle ASM) on all servers to create a cluster.
default:    ACTION: Oracle Grid Infrastructure for Clusterware is not installed. Install it either from the separate installation media included in your media pack, or install it by downloading it from Electronic Product Delivery (EPD) or the Oracle Technology Network (OTN). Oracle Grid Infrastructure normally is installed by a different operating system user than the one used for Oracle Database. It may need to be installed by your system administrator. See the installation guide for more details.
default: ******************************************************************************
default: Run DB root scripts. Sun Dec 30 06:24:47 UTC 2018
default: ******************************************************************************
default: Check /u01/app/oracle/product/18.0.0/dbhome_1/install/root_ol7-183-rac1.localdomain_2018-12-30_06-24-47-612572414.log for the output of root script
default: sh: /u01/app/oracle/product/18.0.0/dbhome_1/root.sh: No such file or directory
default: ******************************************************************************
default: Create database. Sun Dec 30 06:24:47 UTC 2018
default: ******************************************************************************
default: [FATAL] java.lang.NullPointerException
default: ******************************************************************************
default: Check cluster configuration. Sun Dec 30 06:24:50 UTC 2018
default: ******************************************************************************
default: ******************************************************************************
default: Output from crsctl stat res -t Sun Dec 30 06:24:50 UTC 2018
default: ******************************************************************************
default: /vagrant/scripts/oracle_create_database.sh: line 35: /u01/app/18.0.0/grid/bin/crsctl: No such file or directory
default: ******************************************************************************
default: Output from srvctl config database -d cdbrac Sun Dec 30 06:24:50 UTC 2018
default: ******************************************************************************
default: /u01/app/oracle/product/18.0.0/dbhome_1/bin/srvctl: line 255: /u01/app/oracle/product/18.0.0/dbhome_1/srvm/admin/getcrshome: No such file or directory
default: PRCD-1027 : Failed to retrieve database cdbrac
default: PRCR-1070 : Failed to check if resource ora.cdbrac.db is registered
default: CRS-0184 : Cannot communicate with the CRS daemon.
default: ******************************************************************************
default: Output from srvctl status database -d cdbrac Sun Dec 30 06:24:51 UTC 2018
default: ******************************************************************************
default: /u01/app/oracle/product/18.0.0/dbhome_1/bin/srvctl: line 255: /u01/app/oracle/product/18.0.0/dbhome_1/srvm/admin/getcrshome: No such file or directory
default: PRCD-1027 : Failed to retrieve database cdbrac
default: PRCR-1070 : Failed to check if resource ora.cdbrac.db is registered
default: CRS-0184 : Cannot communicate with the CRS daemon.
default: ******************************************************************************
default: Output from v$active_instances Sun Dec 30 06:24:51 UTC 2018
default: ******************************************************************************
default: /vagrant/scripts/oracle_create_database.sh: line 50: /u01/app/oracle/product/18.0.0/dbhome_1/bin/sqlplus: Permission denied
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
```
Thanks
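The INS-30516/INS-30530 failures above are what reused shared disks produce: the installer finds an old ASM header on the images. A pre-flight check along these lines can catch that before starting node1 (the file names and locations are assumptions; match them to your setup):

```shell
# Report any ASM disk images left over from a previous build.
check_stale_disks() {
  for f in "$1"/asm-disk*.vdi; do
    [ -e "$f" ] || continue   # glob didn't match: nothing stale here
    echo "STALE: $f (delete it, or the installer will see the old ASM header)"
  done
}

# check_stale_disks .                             # scan the build directory
# VBoxManage list hdds                            # check VirtualBox's media registry too
# VBoxManage closemedium disk <UUID> --delete     # detach and delete an orphaned image
```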