
Oracle’s GPnP

September 15, 2010

 Grid Plug and Play

This project was implemented to make it easier to build a dynamic grid environment that can grow easily as your workload grows.

GPnP makes it easy to add, replace, or remove nodes in a cluster.

One of the ways it makes it easier to manage a growing cluster is that a network admin can delegate a sub-domain of the network to the cluster. The cluster will then manage its own IP requirements. So if you add another node, we don't go back to the admins to request another IP; it is handled automatically within the cluster.

We also see new cluster components for GPnP.

There is an XML profile that defines the personality of the cluster. It provides the configuration information a node needs when it boots to join the cluster, e.g. storage and network addresses. This is sometimes called the GPnP profile, for global configuration.

We also have the multicast DNS daemon (mDNS), which the cluster uses to discover other nodes and to resolve names to addresses within the cluster. So we do not need to rely on /etc/hosts for name resolution.

We have the Grid Naming Service (GNS). It lets the cluster manage its own network: DHCP supplies the IP addresses, and when we turn on GPnP the cluster administers that network itself. This allows us to manage the network as the cluster grows.
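As a rough sketch of how GNS gets set up on the cluster side (the 10.1.1.100 VIP below is a made-up placeholder standing in for the gns-vip.bob.com address used later in this post), it comes down to a couple of srvctl commands:

    # Register GNS with its static VIP and the delegated sub-domain.
    srvctl add gns -i 10.1.1.100 -d mycluster.bob.com

    srvctl start gns       # bring GNS online
    srvctl config gns      # show the configured VIP and sub-domain
    srvctl status gns      # see which node is currently running it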

Now, from the Oracle Clusterware perspective, it is friendly to GPnP through the addition of the server pool concept. If you want to know more about this, read my post on that topic.
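As a quick illustration of the server pool idea (the pool name and sizes here are hypothetical), a pool tells Clusterware how many servers a workload may claim:

    # Hypothetical pool: at least 2 servers, at most 4, importance 5.
    srvctl add srvpool -g mypool -l 2 -u 4 -i 5
    srvctl config srvpool -g mypool   # verify the definition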

In 11gR2, Oracle has removed the requirement to explicitly allocate different instances to different nodes of the cluster. Oracle will automatically allocate redo threads and undo tablespaces. Note that this assumes we are using Oracle Managed Files; our best practice is to use ASM.

If we want to manage these ourselves, as we always have, we can still do so with administrator-managed databases.
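To make the distinction concrete (the database name orcl, the Oracle home path, and the pool from above are all assumptions for illustration), registering a database against a server pool makes it policy managed, while placing instances on named nodes keeps it administrator managed:

    # Policy managed: Clusterware picks which pool servers host instances.
    srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1 -g mypool

    # Administrator managed (the alternative): omit -g above and place
    # each instance on a node yourself.
    srvctl add instance -d orcl -i orcl1 -n node1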

GPnP profile:

Provides cluster configuration information to allow a node to join the cluster.

XML defining node personality, including:

  • Cluster name
  • Network classifications (public/private)
  • Storage to be used for ASM and CSS

The profile is secured with digital signatures and automatically updated when things change in the cluster.

Created during install

  • Grid_home/gpnp/$hostname/profile/peer/profile.xml
  • Grid_home/gpnp/profile/peer/profile.xml (global backup)
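To see what the live profile actually contains, the gpnptool utility shipped in the Grid home can dump it ($GRID_HOME below is a stand-in for your actual Grid home path):

    # Print the profile XML the local gpnpd daemon is currently serving.
    $GRID_HOME/bin/gpnptool get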

Replicated by gpnpd during install, at system boot, or when updated using standard cluster tools:

  • oifcfg – changing network information
  • crsctl – changing CSS devices
  • ASM – storage addition
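For example (the interface name, subnet, and disk group below are hypothetical), these everyday commands quietly rewrite the profile:

    # Record a new private interconnect; gpnpd replicates the change.
    oifcfg setif -global eth2/10.10.0.0:cluster_interconnect

    # Move the voting files into an ASM disk group; also a profile update.
    crsctl replace votedisk +DATA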

Now for the network portion:

Resolving names and discovering services lets us eliminate static network configuration.

  • Addresses on public (routable) networks can be DHCP.
  • Addresses on private (non-routable) networks can be DHCP.

No need to ask the Network Administrator for IP addresses for nodes or VIPs.

Create entries in the corporate DNS servers that will delegate the authority for a portion of the network to the cluster. Network Admins do this all the time and know how to do it.

First we want to tell the network that we are delegating a sub-domain to the cluster. In this example my sub-domain is mycluster.bob.com, so anything destined for mycluster.bob.com is resolved by forwarding the query to the GNS VIP, gns-vip.bob.com.
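In BIND-style zone file terms, the delegation the network admin adds might look something like this (the 10.1.1.100 glue address is a placeholder for the real GNS VIP):

    ; Delegate the cluster sub-domain to the GNS listening on gns-vip.
    mycluster.bob.com.   IN NS   gns-vip.bob.com.
    gns-vip.bob.com.     IN A    10.1.1.100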

So how does this work?

 

When we have GPnP set up: the client has the SCAN name and goes to the DNS for resolution. The DNS recognizes this as the delegated sub-domain and redirects the query to the GNS, and the GNS responds back through the DNS. At this point the DNS should cache the GNS information, so the next request that comes in can be resolved from the DNS cache. If the GNS fails, or the node running the GNS fails, the cluster will bring it up on another node, so it is highly available within the cluster; it may not even be hit that often because of the DNS caching. The DNS forwards the answer back to the client, the client connects to the SCAN listener, and the SCAN listener redirects the connection to the least-loaded local_listener to create the connection on the instance. (Note: Oracle does not provide a DHCP service.)
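A quick way to sanity-check the whole chain (the SCAN name below is an assumption following the sub-domain naming used above):

    # Ask the corporate DNS; the query should be handed off to GNS.
    nslookup mycluster-scan.mycluster.bob.com

    # Confirm the SCAN and SCAN listeners the cluster registered.
    srvctl config scan
    srvctl config scan_listener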

As you can see, there are some more moving parts to the cluster, but it increases flexibility. If you have a cluster of 2, 3, or 4 nodes this may not be worthwhile; this was designed for Oracle customers that have large clusters with nodes coming in and out of the cluster.
