I recently worked on an Exadata expansion project, taking an X3 half rack and making it a full rack by adding X5 hardware.
The resulting setup is a full Exadata rack with two different flavours of engineered-system hardware: four X3 compute nodes and four X5-2 compute nodes, along with seven X3 cell nodes and seven X5 cell nodes.
This topology is supported, but there was a lot of work behind it:
- First, get the X3 onto the supported 12c release. This involved a four-node Grid Infrastructure upgrade from 11g to 12c, and upgrading the X3 compute and cell node images to the matching 12c level.
- Next, physically rack the X5 hardware into the X3 rack space.
- Then upgrade the X5 to the latest 12c software stack, the same as the X3.
- Interconnect the X3 and X5 so they can co-exist.
- Precheck the full Exadata rack before merging everything into one cluster.
- Next, add the four X5 nodes to the X3 RAC cluster using addnode.sh.
- Now the storage: the X3 had 3TB disks and the X5 has 4TB disks, so the grid disks have to be created the same size on both.
- Following the MOS note "How to Add Exadata Storage Servers Using 3TB/4TB Disks to an Existing Database Machine (Doc ID 1476336.1)" will help here.
- The additional 1TB of space on each X5 cell disk was carved into a RECO2 disk group for future recovery storage, since the current split is 80:20 (DATA:RECO).
- Complete cluster verification and return the rack to service.
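For the first step, the image and Grid Infrastructure version checks look roughly like this. This is a sketch: the grid home path is an assumption for this environment, and the commands are run on each node individually.

```shell
# Check the current Exadata software image version
# (run on each compute and cell node)
imageinfo -ver

# Show the image upgrade history on the node
imagehistory

# Check the active Grid Infrastructure version from the grid home
# (grid home path below is an assumption, adjust to your install)
/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
```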
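The addnode.sh step can be sketched as below, run from an existing X3 node as the grid owner. The node and VIP names are hypothetical examples, not the real hostnames from this project.

```shell
# Extend the existing cluster with the four new X5 compute nodes
# (hostnames and VIP names are hypothetical examples)
cd /u01/app/12.1.0.2/grid/addnode
./addnode.sh -silent \
  "CLUSTER_NEW_NODES={dm01db05,dm01db06,dm01db07,dm01db08}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01db05-vip,dm01db06-vip,dm01db07-vip,dm01db08-vip}"

# Then run root.sh on each new node when prompted by the installer
```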
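The grid disk sizing on the X5 cells can be sketched with CellCLI as below. The sizes and prefixes are illustrative only; the actual values come from the existing X3 layout and Doc ID 1476336.1.

```shell
# On each X5 cell: create DATA and RECO grid disks matching the X3
# grid disk sizes (sizes here are illustrative, not the real values),
# then carve the leftover ~1TB into RECO2 grid disks.
cellcli -e "CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, size=2208G"
cellcli -e "CREATE GRIDDISK ALL HARDDISK PREFIX=RECO, size=552G"
cellcli -e "CREATE GRIDDISK ALL HARDDISK PREFIX=RECO2"   # takes remaining space

# The RECO2 disk group would then be created from an ASM instance,
# along the lines of (disk string and attributes are assumptions):
#   CREATE DISKGROUP RECO2 NORMAL REDUNDANCY
#     DISK 'o/*/RECO2*'
#     ATTRIBUTE 'compatible.asm'='12.1.0.2',
#               'compatible.rdbms'='12.1.0.2',
#               'cell.smart_scan_capable'='TRUE';
```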
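The final verification step can be sketched with cluvfy, run from an existing node after the addition. Again, the node names are hypothetical.

```shell
# Verify the cluster after adding the new nodes
# (node names are hypothetical examples)
cluvfy stage -post nodeadd -n dm01db05,dm01db06,dm01db07,dm01db08 -verbose
```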
This whole exercise was really cool and went as documented.
For documentation, the guide below was used (you may have to refer to the latest version of the document for extending Exadata):
Oracle® Exadata Database Machine Extending and Multi-Rack Cabling Guide
12c Release 1 (12.1)