
Oracle Apps

16 December 2013

Map Disk Block devices on a Linux host back to the SAN Lun

There was a question posted on Oracle-L: how can one map disk block devices on Linux back to the SAN LUNs? This is a no-brainer on Solaris by virtue of the persistent binding capabilities on the HBA along with hard-coding the LUNs in sd.conf, but Linux is a different ball game.

Most of you would wonder: why do I care about this mapping? You need to, because as a DBA, if you are seeing high latencies for specific data files, you need to know which LUN on the array the block device (containing the data file) ties back to. SAN admins work with LUNs, not with system block devices.

In Linux, the block devices are named starting with "sd" with little information as to

1. Specific Adaptor/Port(HBA) through which it is mapped.

2. Array on which the LUN is carved out.

3. The specific LUN number on the Array

While it is not impossible to find out the above information, there is no standard interface to determine it (I may be mistaken; there may be a tool out there that I am not aware of).

On Red Hat 4.x and higher, using kernel 2.6.x and higher, the sysfs implementation is standard. The /sys filesystem holds the key to identifying the above information.

Unfortunately, the /sys filesystem is not well documented and is full of symlinks pointing back and forth. In short, it is a maze. But it is Linux, so we are not surprised. We have to devise our own methods to overcome these shortcomings.
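One small aid when navigating the maze by hand: readlink -f collapses chained symlinks into a canonical path. A quick illustration on a throwaway link (the /tmp paths are made up for the demo; the real /sys paths vary by machine):

```shell
# readlink -f resolves every symlink in a path to its canonical target,
# which is handy when following /sys entries by hand.
mkdir -p /tmp/maze/devices
ln -sfn /tmp/maze/devices /tmp/maze/link
readlink -f /tmp/maze/link    # -> /tmp/maze/devices
```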

Let us use an example to illustrate. Assume that our database is showing high latencies for datafiles residing on the block device sdf.

First you need to identify how many HBAs you have on the system and how many ports per HBA (HBAs come in single-port or dual-port configurations).

You can identify this in multiple ways - I prefer to use lspci

$ /sbin/lspci |grep Fibre


0f:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)   <-- HBA 1 Port 1
0f:00.1 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)   <-- HBA 1 Port 2
11:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)   <-- HBA 2 Port 1
11:00.1 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)   <-- HBA 2 Port 2

As you can see, there are two dual-port Emulex HBAs installed in the system. LUNs can be visible to Linux through any or all of the four available ports.

So the block device sdf is visible to the OS via one of these four ports (paths). Next, we need to identify the specific port through which sdf is visible.

For that, we change directories to /sys/block and run a find.

$ cd /sys/block

$ find . -name device -exec ls -l {} \; |grep sdf

lrwxrwxrwx 1 root root 0 Jul 21 17:25 ./sdf/device -> ../../devices/pci0000:00/0000:00:04.0/0000:0f:00.0/host1/target1:0:0/1:0:0:4


Or, another way:

$ cd /sys/block/sdf

$ ls -l device
lrwxrwxrwx 1 root root 0 Jul 21 17:25 device -> ../../devices/pci0000:00/0000:00:04.0/0000:0f:00.0/host1/target1:0:0/1:0:0:4

As you can see from the above, sdf is visible to the OS via the HBA port 0f:00.0, which we identified earlier as HBA 1 Port 1. This output has several more interesting tidbits of information.

lrwxrwxrwx 1 root root 0 Jul 21 17:25 device -> ../../devices/pci0000:00/0000:00:04.0/0000:0f:00.0/host1/target1:0:0/1:0:0:4

The trailing component, 1:0:0:4, is the SCSI address in host:channel:target:LUN form. So the target is 0 (also visible in target1:0:0), and this is LUN 4.
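That trailing component can be split mechanically. A small sketch, assuming the standard host:channel:target:LUN layout (parse_hctl is my own helper name, not a system tool):

```shell
# Split a SCSI H:C:T:L address (the last component of the
# /sys/block/<dev>/device symlink target) into its four fields.
parse_hctl() {
    IFS=: read -r host channel target lun <<< "$1"
    echo "host=$host channel=$channel target=$target lun=$lun"
}
parse_hctl "1:0:0:4"    # -> host=1 channel=0 target=0 lun=4
```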

All that remains now is to identify the array which is presenting LUN 4 to the OS. This information is available from the /proc interface.


$ cd /proc/scsi/lpfc

$ ls
1 2 3 4

Here the numbers 1-4 correspond to the four HBA ports. We have identified that sdf is on HBA 1 Port 1, so we look at the contents of "1". We also know it is Target 0.

$ more 1
lpfc0t01 DID 610413 WWPN 50:06:0e:80:00:c3:bd:40 WWNN 50:06:0e:80:00:c3:bd:40
lpfc0t02 DID 610a13 WWPN 50:06:0e:80:10:09:d0:02 WWNN 50:06:0e:80:10:09:d0:02
lpfc0t00 DID 612413 WWPN 50:06:0e:80:10:09:d0:07 WWNN 50:06:0e:80:10:09:d0:07

We see that Target 0 is mapped to the WWPN ending in 07. This is the array WWPN, from which the storage admin can identify the specific array presenting LUN 4 (sdf) to the host.
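That lookup can also be scripted. A hedged sketch that pulls the WWPN for a given target out of a copy of the listing above (get_wwpn is a hypothetical helper; the input is fed from a here-doc rather than read live from /proc/scsi/lpfc):

```shell
# Extract the WWPN for a given lpfc target from a /proc/scsi/lpfc-style
# listing. $1 = file with the listing, $2 = two-digit target, e.g. 00.
get_wwpn() {
    awk -v t="lpfc0t$2" '$1 == t {print $5}' "$1"
}

# Sample input mirroring the `more 1` output shown above.
cat > /tmp/lpfc1 <<'EOF'
lpfc0t01 DID 610413 WWPN 50:06:0e:80:00:c3:bd:40 WWNN 50:06:0e:80:00:c3:bd:40
lpfc0t02 DID 610a13 WWPN 50:06:0e:80:10:09:d0:02 WWNN 50:06:0e:80:10:09:d0:02
lpfc0t00 DID 612413 WWPN 50:06:0e:80:10:09:d0:07 WWNN 50:06:0e:80:10:09:d0:07
EOF

get_wwpn /tmp/lpfc1 00    # -> 50:06:0e:80:10:09:d0:07
```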

So we have identified that the block device sdf is

1. Visible to the OS via HBA 1 Port 1.

2. Target 0 for the HBA.

3. LUN 4 as presented to the OS by the Array.

4. The array that is presenting LUN 4 to the OS is 50:06:0e:80:10:09:d0:07.

With the above information, your storage admin will get a head start on fixing the problem for you.
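Putting the pieces together: the device symlink target alone yields most of the summary above. A hedged helper, assuming the path layout shown in the example output (not a general-purpose tool):

```shell
# Given the symlink target of /sys/block/<dev>/device (as printed by
# `ls -l`), report the HBA port's PCI address, the target and the LUN.
map_symlink() {
    local link="$1"
    local hctl="${link##*/}"    # last path component, e.g. 1:0:0:4
    local pci
    # The deepest PCI function in the path (xxxx:xx:xx.x) is the HBA port.
    pci=$(printf '%s\n' "$link" |
          grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | tail -1)
    echo "hba_port=$pci target=$(echo "$hctl" | cut -d: -f3) lun=${hctl##*:}"
}

map_symlink "../../devices/pci0000:00/0000:00:04.0/0000:0f:00.0/host1/target1:0:0/1:0:0:4"
```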

16 December 2013

11gr2 - Analyzing Compression capabilities

With 11gR2, Oracle has come a long way with regard to compression capabilities. In almost all benchmarks I have performed, compression is one of the most significant factors influencing query performance; it beats any other database optimization feature hands-down.


While fast-returning queries alone are not enough to meet business requirements, in 11gR2 compression does appear to perform adequately for all operations when compared to the default. It has some caveats, though overall the results are quite good. It may make sense to evaluate compression as the default for user data.

To check the viability of compression for a typical use case, I conducted 5 tests -

  • CTAS
  • Updates
  • Conventional Inserts
  • Deletes
  • Queries

The variables were

  • Time to complete each activity
  • Object Size after each activity
  • Query performance


I did not measure system load during each of these activities. I assume that CPU and Memory are sufficient.

As always, I used a realistic example with real data (not generated). The tests were conducted on a single-instance 11gR2 database on Linux x86-64, running on a Quarter Rack Exadata with 3 storage cells; however, all cell offloading was disabled.

The table in question had 148 million rows and was 36 columns wide. The columns were a mix of varchar2, number and date, with NOT NULL constraints. The table is not partitioned.

Table Creation using CTAS
The first test was the object creation using CTAS. It was done in parallel and with the defaults for extent sizing (auto allocate).

[Table: CTAS elapsed times and resulting segment sizes for each compression type]

I also generated a flat file for the table and compressed it using Gzip and Bzip to get an idea of how flat-file compression compares to database compression.

As you can see, database compression ranges from 2.5x (OLTP) to 9x (HCC Archive), which is more or less comparable to what is normally seen in the real world. If you have historical read-only data, storing it as external tables (compressed flat files) would probably be a better idea than storing it in the database as a regular uncompressed table. With 11gR2, external tables have come a long way.
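For reference, the quoted ratios are simply uncompressed size divided by compressed size. A trivial sketch with illustrative sizes in MB (the actual segment sizes from the test are not reproduced here):

```shell
# Compression ratio = uncompressed size / compressed size.
# The sizes below are made up to reproduce the 2.5x and 9x figures.
ratio() { awk -v u="$1" -v c="$2" 'BEGIN{printf "%.1fx\n", u/c}'; }

ratio 9000 3600   # OLTP-style compression  -> 2.5x
ratio 9000 1000   # HCC-Archive-style       -> 9.0x
```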


Updating rows in the table

Compression has always received bad reviews due to poor update performance. But how bad is the performance hit on an update? By performance I mean the time taken to update, the growth in size of the object, and the subsequent query performance.

Generally, I would assume that if you are planning to update more than 10% of a big table, it is better to rewrite the update as a CTAS. To simulate a worst-case scenario, I updated 2 columns in 11% of the table (15.5 million rows) to gauge the effect of the update.

Deleting rows in the table

The same can be said of deleting rows. To gauge the impact of deletion, I deleted 6% of the table (5 million rows).


Inserting rows into the table

Direct path loads are not always feasible, so I inserted 1 million rows into the table using conventional (buffered) inserts.

Query performance

After each test, I ran a query requiring a full table scan to see the impact on query performance.

Results

[Chart: elapsed time for CTAS, Update, Delete and Insert by compression type]

For CTAS and Update, the uncompressed version of the table outperformed the compressed versions by roughly 2x. For Deletes and Inserts, however, performance was the same or slightly better with the compressed versions.

[Chart: full table scan query times by compression type]

As regards query performance, the compressed tables always beat the uncompressed version; higher compression gives better query performance.

[Chart: table size after each DML operation by compression type]
And finally, the table size after each DML operation: a bulk update resulted in growth of the table, although nowhere near the size of the uncompressed version. Inserts re-used space freed by the deletion.

Conclusions

  1. With compression, space savings can be significant.
  2. DML does grow the object, but it stays considerably smaller than the uncompressed version.
  3. Bulk updates still perform slower than on the uncompressed table.
  4. Deletes and conventional inserts perform about the same as on the uncompressed version.
  5. Query performance: compression improves it significantly.
  6. For historical or archived read-only data, external tables as compressed flat files may be a viable alternative to storing the data in the database.
16 December 2013

Understanding CPU Time as an Oracle Wait event

We have all seen "CPU Time" as a Top 5 wait event at one point or another when troubleshooting performance issues. From an administrator's (DBA/SA) perspective, conventional thinking is that CPU is the bottleneck.

But what if the system stats show that CPU utilization (%util and run queue) is well within thresholds, with plenty of available capacity, yet Oracle continues to report CPU Time as a Top 5 wait event?

Suppose we are also seeing a high degree of involuntary context switching (via mpstat on Solaris, or context switches via vmstat on Linux). Obviously, something does not compute.


CPU Time could mean that the process is either

  • on a CPU run queue, waiting to be scheduled, or
  • currently running on a CPU.

Obviously, we are interested in

  • Minimizing the wait time on the run queue, so that the session can run on the CPU as soon as possible. This is determined by the priority of the process.
  • Once running on the CPU, being allowed to stay on it long enough to complete its tasks. The amount of time available to the process on the CPU is called the time quantum.

Both of the above aspects are controlled by the OS Scheduler/Dispatcher.


"Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs. In modern operating systems, there are typically many more processes running than there are CPUs available to run them. Scheduling refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as a scheduler or is sometimes referred to as a dispatcher"

Understanding how the scheduler shares CPU resources is key to understanding and influencing the wait event "CPU Time".

On any Unix platform, some processes take higher priority than others. Labeling a process as higher priority can be done through scheduling classes and with the nice command; the two have different effects on the process.

An easy way to identify the scheduling class and current priority of a process is the ps command. With the "-flycae" arguments it shows both the scheduling class and the current priority; however, it does not show the CPU time quantum associated with the process.

dbrac{714}$ ps -flycae |egrep "ora_" |more

S UID PID PPID CLS PRI CMD
S oracle 931 1 TS 24 ora_p000_DW
S oracle 933 1 TS 24 ora_p001_DW


In the above example, you are interested in the CLS and PRI columns. It shows that the Oracle background processes are running under the TS scheduling class with a priority of 24. The higher the number in the PRI column, the higher the priority.

The default scheduling class for user processes is TS (Time Share), common to both Solaris and Linux. The TS scheduler changes priorities and CPU time quanta for processes based on recent processor usage.

Since we appear to have plenty of CPU resources, we can conclude that the default (TS) scheduling class is not good enough for us: either the scheduler is not allocating a sufficient CPU time quantum (resulting in involuntary context switching), or it is not giving the process a sufficiently high priority to be scheduled ahead of other processes.

So how do we change it? Obviously we would want to

  • set a fixed priority for Oracle processes, so that they can run on the CPU ahead of other competing processes;
  • set a fixed time quantum for Oracle processes, so that they can run to completion on the CPU.

On either Solaris or Linux, the easiest way to implement this is to change the scheduling class of the Oracle processes. Both operating systems offer a scheduling class with fixed priorities and fixed CPU time quanta: fixed in the sense that they hold throughout the lifetime of the process, but can be changed to suit your requirements at any time.

On Linux it is the RR class; on Solaris it is the FX class. The simplest way to change the scheduling class is the priocntl tool: a native binary on Solaris, and available on Linux through the Heirloom Project.

On Linux, you would need to use the renice command to change the CPU time quanta, while on Solaris priocntl does both: scheduling class and time quantum.
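As an aside, renice itself can be applied to a live process without privileges, as long as you are lowering the priority (raising the nice value). A minimal illustration against the current shell:

```shell
# renice changes the nice value of a running process. Raising the nice
# value (i.e. lowering priority) needs no special privileges; lowering
# it requires root. Demonstrated on the current shell.
renice -n 5 -p $$ >/dev/null
ps -o pid=,ni= -p $$      # nice value is now 5 (assuming it started at 0)
```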

27 November 2013

Oracle Apps Cloning


Refreshing single instance Test database with Production Data

  

 What is Oracle Applications/Oracle E-Business Suite?

 To serve large businesses, Oracle Corporation has created a collection of ERP (Enterprise Resource Planning) software modules that are integrated to talk to each other, known collectively as Oracle Applications or the E-Business Suite.

  

Oracle EBS R12 Architecture

 Oracle EBS R12 is a three-tiered architecture:

1. Desktop Tier

2. Application Tier

3. Database Tier

 The Oracle Applications Architecture is a framework for multi-tiered, distributed computing that supports Oracle Applications products. In this model, various servers or services are distributed among three levels, or tiers.

First we need to define some important terms: server, node, machine and tier.

 Server: a process or group of processes that run on a single machine and provide a particular functionality.

 Tier: a logical grouping of services, which may be spread across more than one physical machine.

 Machine or node: a machine is often referred to as a node, particularly in the context of a group of computers that work closely together in a cluster.

 Desktop Tier:

 

The Forms client applet must be run within a Java Virtual Machine (JVM) on the desktop client. The Sun J2SE Plug-in component allows use of the Oracle JVM on web clients, instead of the browser’s own JVM. This is implemented as a browser plug-in.

 

The Application Tier:

 

In Release 12, the application tier contains Oracle Application Server 10g (OAS10g). Three servers or service groups comprise the basic application tier for Oracle Applications:

 

Web services

 The Web services component of Oracle Application Server processes requests received over the network from the desktop clients.

 Forms services

 Forms services in Oracle Applications Release 12 are provided by the Forms listener servlet or Form Socket mode, which facilitates the use of firewalls, load balancing, proxies, and other networking options.

 Concurrent Processing server

 Processes that run on the Concurrent Processing server are called concurrent requests. When you submit such a request, either through HTML-based or Forms-based Applications, a row is inserted into a database table specifying the program to be run. A concurrent manager then reads the applicable requests in the table, and starts the associated concurrent program.

 Database Tier :

 The database tier contains the Oracle database server, which stores all the data maintained by Oracle Applications.

  

R12 EBS Directory Structure

Techstack Components:

 

Database tier:

11.2.0.1.0 RDBMS ORACLE_HOME

Application tier:

10.1.2 C ORACLE_HOME / Forms ORACLE_HOME

10.1.3 Java ORACLE_HOME / OC4J ORACLE_HOME

 INSTANCE_TOP: each application tier has a unique Instance Home file system associated with it.

 Application Directory Structure

 

Oracle Apps cloning:

 The Test database needs to be refreshed periodically to synchronize it with changes to the production database.

 Step 1:

Copy production backup to test server

 

Rename and open the database

Add the temp files to the new database

Now go to $ORACLE_HOME/appsutil/install/<CONTEXT_NAME> directory:

 Now, go to $ORACLE_HOME/appsutil/clone/bin/ and run the following command. Please note that the Database and the Listener should be up when you run this command.
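The original post elides the actual command here. On R12, this step is typically Rapid Clone's adcfgclone.pl with the dbTier argument; treat that as an assumption drawn from standard Rapid Clone practice, not the author's exact invocation. A sketch that merely assembles the command line:

```shell
# Assemble the presumed Rapid Clone invocation for the database tier.
# adcfgclone.pl and the dbTier argument come from standard R12 Rapid
# Clone practice, not from the post itself. $1 = ORACLE_HOME.
build_clone_cmd() {
    echo "perl $1/appsutil/clone/bin/adcfgclone.pl dbTier"
}

build_clone_cmd "/u01/app/oracle/product/11.2.0"   # illustrative path
```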

  

 Step 2:

 Run AutoConfig on the Apps tier

1 September 2011

E-Business Suite Timeout Parameters and Profiles

An unattended PC with a long-lived Oracle E-Business Suite session poses a security risk. Oracle EBS provides many configuration parameters and profile options for controlling user sessions. Here are some recommendations:

 

 

  • ICX Timeout Profile Values

These profile options control the validity of a user session, for Forms screens as well as for Self Service web pages. This can be confusing, because Inter-Cartridge Exchange (ICX) is often associated with Self Service applications.

 

Parameter             Default      Recommendation
ICX:Session Timeout   None         30 (minutes)
ICX: Limit Time       4 (hours)    4 (hours)
ICX: Limit Connect    1000         2000

 

  • ICX:Session Timeout

This profile option determines how long (in minutes) a user session can be inactive before it is disabled. Note that the session is not killed: the user can re-authenticate and reactivate the timed-out session. If re-authentication succeeds, the disabled session is reactivated and no work is lost; otherwise, the session is terminated without saving pending work.

NB: setting this profile value above 30 minutes can exhaust JVM resources and cause "out of memory" errors.

 

  • ICX: Limit Time

This option sets the maximum connection time for a session, regardless of user activity. If ICX: Session Timeout is NULL, the session lasts as long as ICX: Limit Time.

 

  • ICX: Limit Connect

This profile option sets the maximum number of connection requests a user can make in a single session. Note that other internal Oracle Apps controls may generate connection requests during a user session.

  • Jserv (Java) Timeout Settings

Parameter                                    Recommendation
disco4iviewer.properties:session.timeout     5400000 (milliseconds)
formservlet.ini:FORMS60_TIMEOUT              55 (minutes)
formservlet.properties:session.timeout       5400000 (milliseconds)
jserv.conf:ApJServVMTimeout                  360 (seconds)
mobile.properties:session.timeout            5400000 (milliseconds)
zone.properties:session.timeout              5400000 (milliseconds)
zone.properties:servlet.framework.initArgs   5400000 (milliseconds)
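A quick unit check on the values above: 5400000 milliseconds works out to 90 minutes, so the 55-minute FORMS60_TIMEOUT would expire before the servlet session timeouts do.

```shell
# Convert the Jserv session timeout from milliseconds to minutes.
awk 'BEGIN{print 5400000/1000/60 " minutes"}'    # -> 90 minutes
```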

 

  • Apache HTTP Timeout Settings

Parameter                           Recommendation
httpd.conf:Timeout                  300 (seconds)
httpd.conf:KeepAliveTimeout         15 (seconds)
httpd.conf:SSLSessionCacheTimeout   300 (seconds)
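The Apache values above would appear in httpd.conf roughly as follows (a fragment sketch, not a complete configuration; SSLSessionCacheTimeout belongs to mod_ssl):

```
Timeout 300
KeepAliveTimeout 15
SSLSessionCacheTimeout 300
```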
