Thursday, December 1, 2011

Rotating *.out files in SOA 11g Weblogic Server

A very common issue on SOA 11g (WebLogic) servers is the inability to rotate the *.out file using out-of-the-box tools or scripts. We can rotate *.err and *.log files but not *.out files. As a result the server keeps hitting 100% space utilization, because the *.out file grows to gigabytes if not monitored correctly.

By default WebLogic doesn't provide an option to rotate the *.out file, so on Linux boxes we can append the following entry to /etc/logrotate.conf to handle *.out files:
(location of logs directory)/*.out {
    copytruncate
    rotate 5
    size=10M
}
Basically, each time the *.out file reaches 10 MB it is copied to a numbered backup and the live file is truncated back to 0 bytes (that's what copytruncate does), with up to 5 rotated copies kept.
For example: *.out -> *.out.1 -> *.out.2 -> *.out.3 -> *.out.4 -> *.out.5, and once the limit is reached the oldest copy (*.out.5) is discarded.

To schedule the above cleanup, create a cron job on whatever schedule you need that calls the "/usr/sbin/logrotate /etc/logrotate.conf" command.
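A minimal sketch of such a cron entry (run crontab -e as root; the hourly schedule is just an example, not from the original setup):

# Run logrotate every hour; it only rotates *.out files that have crossed the 10M threshold.
0 * * * * /usr/sbin/logrotate /etc/logrotate.conf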

That's it! No more 100% space utilization alerts or servers crashing because the disk is full.

Sunday, October 23, 2011

Fusion Applications : General Availability and In The Cloud

I know this is already all over the web by now, but I thought I'd mention this bit of Oracle news...

Oracle finally took the wraps off Oracle Fusion Applications (its next-generation ERP solution) by making it generally available to customers during the Oracle OpenWorld (OOW 2011) conference. Oracle also announced that Fusion Applications modules like HCM and CRM will be available in the cloud as a SaaS offering.

The diagram below gives a high-level overview of what Oracle Fusion Applications is about. It is a best-of-breed suite that takes features from Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards, along with a number of other products Oracle has acquired across industries and business functions. It has embedded analytics/BI across the UI, powered by Hyperion technologies, and it is built on top of the Oracle Fusion Middleware stack, which provides the web services and security framework.

For more details on the cloud solutions, visit http://cloud.oracle.com

If you are an Oracle Partner, you can visit http://www.oracle.com/partners/secure/campaign/eblasts/fusion-application-455396.html to get more in-depth details about Fusion Applications via the Fusion Learning Center.

Monday, October 17, 2011

SOA 11g: Weblogic Admin Server Down with Error "java.lang.NumberFormatException: null"

The WebLogic Admin Server fails to start and the following error is seen in the log file:

<BEA-000386> Server subsystem failed. Reason: java.lang.NumberFormatException: null
java.lang.NumberFormatException: null
        at java.lang.Integer.parseInt(Integer.java:417)
        at java.lang.Integer.parseInt(Integer.java:499)
        at weblogic.ldap.EmbeddedLDAP.validateVDEDirectories(EmbeddedLDAP.java:1097)
        at weblogic.ldap.EmbeddedLDAP.start(EmbeddedLDAP.java:242)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)

This mostly happens when the embedded LDAP files under the ../domain-name/servers/AdminServer/data/ldap/ directory are corrupted. A common cause of the corruption is the server running out of disk space: when the associated volume hits 100% utilization, WebLogic ends up corrupting these files.

To fix the above error, try the following:
Remove the ../domain-name/servers/AdminServer/data/ldap/conf/replicas.prop file and restart the Admin Server. It should come up fine now.
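A minimal sketch of the fix, assuming a hypothetical domain location of /u01/domains/soa_domain (back the file up rather than deleting it outright):

# Hypothetical domain path -- adjust to your environment.
DOMAIN_HOME=/u01/domains/soa_domain
# Move the suspect file aside instead of deleting it, then restart the Admin Server.
mv $DOMAIN_HOME/servers/AdminServer/data/ldap/conf/replicas.prop \
   $DOMAIN_HOME/servers/AdminServer/data/ldap/conf/replicas.prop.bak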

Thursday, October 6, 2011

Using Out-of-the-box Purge Scripts In Oracle SOA 11.1.1.4

Purging the SOA Infra tables (a.k.a. the dehydration store) is a very important task for SOA Suite administrators. In production environments with high transaction volumes the SOA Infra audit tables fill up fast, and if the data growth is not controlled it can lead to major performance issues, or rather nightmares.

SOA 11.1.1.4 has two purging techniques available:
1. Use database partitioning, where the SOA Infra tables are partitioned by date range (or other criteria) and old partitions are simply dropped. This is the faster approach and comes in handy when you have to deal with huge volumes of data; however, it requires some advanced DBA skills.

2. Use the out-of-the-box purge scripts. Oracle SOA Suite installations across versions have shipped with purge scripts, but most of the earlier ones had performance issues. The 11.1.1.4 (a.k.a. PS3) purge scripts bring many performance improvements and are easy to use as well. This post explains the simple steps required to execute them in your environment.

Step 1: Connect to the DB with SQL*Plus as SYSDBA and grant privileges to the SOA Infra user (say DEV_SOAINFRA) that will execute the scripts:
SQL> GRANT EXECUTE ON DBMS_LOCK TO DEV_SOAINFRA; 
SQL> GRANT CREATE ANY JOB TO DEV_SOAINFRA;
Step 2: The purge scripts are located under $RCU_HOME/rcu/integration/soainfra/sql/soa_purge/. Connect to the DB with SQL*Plus as the DEV_SOAINFRA user and load the scripts:
SQL> @soa_purge_scripts.sql
This creates the required procedures, functions, types and packages under the DEV_SOAINFRA schema.
Step 3: Before running the purge, check how many records are eligible using the SQL below. Note that cube_instance is not the only table that gets purged; a lot of child tables get purged as well.
SQL> select state, count(*) from cube_instance group by state;
Step 4: If you want to spool the PL/SQL program's output to a log file, set serveroutput on. This helps you see which tables get purged and which records were eligible.
SQL> SET SERVEROUTPUT ON;
SQL> spool '/tmp/spool.log'
Then run the script mentioned in the next step and, once it finishes, turn off the spooling.
SQL> spool off 
Step 5: Note that there are two modes of running the purge: loop purge and parallel purge. The loop purge iterates through the set of eligible records and purges them. The parallel purge is similar, with the additional flexibility of spawning parallel threads to do the purging (a faster, multi-threaded approach when dealing with a huge number of records). Below is a sample loop purge; for the parallel purge the procedure name is delete_instances_in_parallel.
SQL> DECLARE
max_creation_date timestamp;
min_creation_date timestamp;
retention_period timestamp;

BEGIN
min_creation_date := to_timestamp('2011-10-01','YYYY-MM-DD');
max_creation_date := to_timestamp('2011-10-05','YYYY-MM-DD');
retention_period := to_timestamp('2011-10-05','YYYY-MM-DD');

soa.delete_instances(
min_creation_date => min_creation_date,
max_creation_date => max_creation_date,
batch_size => 10000,
max_runtime => 60,
retention_period => retention_period,
purge_partitioned_component => false);

END;
/
Once the script completes, you can rerun the SQL from Step 3 to check how many records were purged, and open spool.log to see the data purged from the child tables.
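If you want to automate the purge, a rough sketch of a shell wrapper that could be scheduled from cron (the connect string, dates and spool path are placeholders; the soa.delete_instances call itself mirrors the sample above):

#!/bin/sh
# Hypothetical credentials/TNS alias -- adjust before use.
sqlplus -s DEV_SOAINFRA/welcome1@SOADB <<'EOF'
SET SERVEROUTPUT ON
spool /tmp/purge_spool.log
DECLARE
  min_creation_date timestamp := to_timestamp('2011-10-01','YYYY-MM-DD');
  max_creation_date timestamp := to_timestamp('2011-10-05','YYYY-MM-DD');
BEGIN
  soa.delete_instances(
    min_creation_date => min_creation_date,
    max_creation_date => max_creation_date,
    batch_size => 10000,
    max_runtime => 60,
    retention_period => max_creation_date,
    purge_partitioned_component => false);
END;
/
spool off
EXIT
EOF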

Thursday, September 29, 2011

SAP IDoc Data With Japanese Kanji or Chinese Characters Overflows

Recently I came across an issue while integrating an SAP R/3 (4.2) system with SOA 11g (11.1.1.4). When IDocs containing Japanese Kanji or Chinese characters are received by the SAP Adapter from the SAP system, the data overflows onto the next segments/XML tags and gets distorted. This causes mapping issues, as incorrect data ends up being mapped. The root cause of this behaviour is that SAP R/3 (4.2) is a non-Unicode system while SOA 11.1.1.4 is Unicode.

The issue is covered in the Oracle documentation below as well:
http://download.oracle.com/docs/cd/E14571_01/relnotes.1111/e10132/adapters_iway.htm#CIHBCICF

Here is the explanation provided by Oracle:

"This issue only occurs on non-Unicode SAP MDMP environments, where one character can be two or more bytes. As an example of this issue, when using Japanese, the SAP field length is four characters. The English word "ball" fits correctly into the field because one character equals one byte. The Japanese word for ball in Shift-Jis encoding is three characters, but two bytes per character, so the last character is truncated and the last character appears in the next field. Since IDocs are positional delimited, this can cause errors in processing. This occurs because SAP uses character length, not byte length for all non-Unicode field lengths. There is no work around on this issue other than using Unicode or using shorter text in IDocs in DBCS."

None of the workarounds suggested above were feasible options, so I tried the approach below to fix this...

On the SAP side, before sending the IDoc, convert all Japanese strings to hexadecimal characters. Once SOA Suite receives the IDoc, the BPEL process uses a Java embedding/custom XSLT function (Java code) to convert the hexadecimal back to Japanese characters. There is a lot of sample code available for this kind of hex-to-string conversion; just make sure you use the correct charset (like Shift-JIS for Japanese Kanji) while doing the conversion.
Now you should be able to see the IDocs properly and use the converted Japanese/Chinese characters correctly in your mappings.
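As a quick sanity check of the hex decoding outside BPEL, the conversion can be reproduced on the command line (assuming xxd and iconv are available; the hex value below is a hypothetical sample spelling "テスト" in Shift-JIS, not data from the actual IDoc):

# Convert a hex string back to readable text, treating the bytes as Shift-JIS.
echo "836583588367" | xxd -r -p | iconv -f SHIFT-JIS -t UTF-8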

Thursday, September 22, 2011

SSO (SAML 1.1) Setup In SOA 11g

This post covers the steps required to configure SSO (SAML 1.1) with SOA 11g. Having Single Sign-On (SSO) enabled helps end users, as they don't have to remember different username/password combinations for different applications. When tied to an LDAP provider (like Microsoft Active Directory or Oracle Internet Directory), SSO provides a robust authentication mechanism along with a seamless user experience.

SOA 11g and WebLogic have made the SSO configuration very easy for administrators. It's all done in the WebLogic Admin Console and doesn't require running any backend scripts or changing files on Windows/Linux. So let's take a look at some screenshots that show this easy setup.

Create a new Authentication provider (SAML Identity Asserter) and reorder the providers so they look as below.

Create a new Asserting Party and specify the properties as shown below.

Create a new trusted certificate (with the same alias as in the above screenshot) and import the certificate (.der) file.

Finally, under Managed Server -> Federation Services, configure your SAML 1.1 destination as shown below.
That's it. Restart the Admin and Managed Servers and you should see the SSO redirection happening correctly. Basically, whenever you hit your URL (BPM Workspace in the above screenshot) you are redirected to your SSO site, which pulls your user credentials from the LDAP provider (say your NT login) and authenticates you, so that you don't have to log in to your URL explicitly.

In case you want to turn on debug logging for SSO/SAML to troubleshoot redirection or other errors, follow these steps in the WebLogic Admin Console: select Lock & Edit, click on your managed server, and under the Debug tab expand weblogic -> security, select SAML, click Enable and save. That's it. You should now see the SSO/SAML debug messages in your managed server log file.


Thursday, September 1, 2011

Oracle WebTier (11g) Installation hangs

Oracle WebTier provides components like Oracle HTTP Server and WebCache, which help route HTTP requests from external users to the application server. It gives Oracle SOA Suite installations a lot of flexibility, with security, clustering and load-balancing features built in.

I have been using Oracle WebTier for most of my SOA 11g cluster installations, and the installation/configuration has been pretty straightforward. For a WebTier 11.1.1.4 installation you basically install 11.1.1.2 via the installer and apply the 11.1.1.4 patchset on top of it. This is followed by some post-install configuration in case you want to set up a cluster, i.e. adding the load balancer (VIP) URL in the httpd.conf and mod_wl_ohs.conf files.

I've captured a few installation screenshots below:

Choose the installation type. The default is to install and configure in one shot.

Specify the fully qualified host name of the Admin Server, e.g. abc.mycompany.com.

This is where the issue happened. In all my previous installations the post-install configuration step below went through fine, but on this specific server it always got stuck at 0%, and I couldn't find any errors in the install log files explaining this weird behaviour.
After a couple of attempts I decided to take an alternate route: install WebTier only (choose the option Install Software - Do Not Configure) and configure it in a later step by running WT_HOME/bin/config.sh. This approach worked and I was able to continue with my installation after scratching my head for a few hours.

Tuesday, August 30, 2011

LDAP Authentication (Active Directory) setup in SOA 11g

This is a short post explaining how to set up AD (Active Directory) authentication in the SOA 11g WebLogic Admin Console. AD is used to authenticate users trying to access the BPM Worklist or BPM Workspace.

In the WebLogic Admin Console go to Home > Security Realms > myrealm > Providers.

Once there, click New, provide a Name (say ADProvider) and choose ActiveDirectoryAuthenticator as the Type.

Reorder the Authentication Providers so that ADProvider is the topmost one.

Provide the AD-specific configuration details on the screen below. You can get these details/credentials from your LDAP administrator.

Once all changes are done, save and activate them. Then restart the servers and test the LDAP authentication by logging into BPM Workspace or Worklist, ensuring that only authenticated users are allowed to log in.

Thursday, August 18, 2011

HTTP Binding Adapter in SOA 11g - Continued

This is a continuation of my earlier post on HTTP Binding Adapter support in SOA 11g.
HTTP Binding Adapter in SOA 11g
In this post I have captured some screenshots and other tips for getting this to work on SOA 11.1.1.4. First, the basic configuration screenshots: drag the HTTP Binding Adapter icon from the component palette to the External References section of the composite.


Specify the endpoint URL you want to call.
Provide details about the request and response schemas (XSD files) here; based on these, the XML payload gets posted.

Finally finish the adapter configuration and move on to configuring the Invoke activity to call this adapter.

In case you want to make the endpoint dynamic, you can leverage the endpointURI property. You may also need to specify the username and password properties. The javax.xml.ws.security.auth.username property is available in the UI (Properties tab), but javax.xml.ws.security.auth.password isn't, so you have to add it to the code directly. Once added, the code will look something like the below in your .bpel file:

<invoke>
..
..
<bpelx:inputProperty name="endpointURI" expression="....."/>
<bpelx:inputProperty name="javax.xml.ws.security.auth.username" expression="....."/>
<bpelx:inputProperty name="javax.xml.ws.security.auth.password" expression="...."/>
</invoke>

You can either set the username/password as BPEL preferences or fetch them from the DB or some other source and set the expressions above accordingly. That's it! The code is ready to be deployed and tested, and you should now be able to successfully invoke the HTTP binding service.

Tuesday, August 16, 2011

SOA 11g: Managed server startup fails with "Persistency service internal error"

While starting the SOA managed server I ran into the error below.

[soa_server1] [ERROR] [] [oracle.soa.services.common] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [APP: soa-infra] <.> Persistency service internal error.[[
Persistency service internal error.
Check the underlying exception and correct the error. If the error persists, contact Oracle Support Services.
 ORABPEL-9732
Persistency service internal error.
Persistency service internal error.
Check the underlying exception and correct the error. If the error persists, contact Oracle Support Services.
        at oracle.bpel.services.workflow.repos.PersistencyDriver.initNonTransactionDataSource(PersistencyDriver.java:271)
        at oracle.bpel.services.workflow.repos.PersistencyDriver.getNonTransactionConnection(PersistencyDriver.java:297)
...
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FabricConfigManager' defined in ServletContext resource [/WEB-INF/fabric-config.xml]: Cannot resolve reference to bean 'MediatorServiceEngine' while setting bean property 'configurables' with key [2]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'MediatorServiceEngine' defined in ServletContext resource [/WEB-INF/fabric-config-mediator.xml]: Cannot resolve reference to bean 'FaultRecoveryManager' while setting bean property 'faultRecoveryManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FaultRecoveryManager' defined in ServletContext resource [/WEB-INF/fabric-config.xml]: Cannot resolve reference to bean 'BPELServiceEngine' while setting bean property 'serviceEngines' with key [1]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'BPELServiceEngine' defined in ServletContext resource [/WEB-INF/fabric-config-bpel.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException:
ORABPEL START-UP ERROR!!!!!!!!
OraBPEL run-time system failed to start due to exception:


Restart the SOA Infra database to get rid of this error. Once done, restart the Admin and managed servers; the managed server should come up fine now.

Thursday, August 4, 2011

Using the JRockit Mission Control with SOA 11g

In one of my earlier posts I covered "VisualGC", a performance monitoring tool for the Sun JDK.
VisualGC: Performance Monitoring tool for Oracle SOA Suite

In this post I would like to cover a similarly powerful tool called "JRockit Mission Control", which ships with the JRockit JVM and can be used for performance monitoring and JVM profiling.

To enable this tool, first modify the setDomainEnv.sh file and add the Java properties mentioned below.

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Xmanagement:ssl=false,authenticate=false,autodiscovery=true"
export EXTRA_JAVA_PROPERTIES

This allows a client machine to connect to the WLS server and pull JVM stats. I have not specified the port argument above; by default it is 7091. In case a different port is to be used, it can be appended to the comma-separated argument list above.
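For example, a variant of the same setting that pins the management port explicitly (7091 here, matching the default; any free port can be substituted):

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Xmanagement:ssl=false,authenticate=false,autodiscovery=true,port=7091"
export EXTRA_JAVA_PROPERTIES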

Now go to the JRockit installation folder on your Windows/Linux machine and navigate to the bin folder, where you will find the jrmc executable. Run it, and after JRMC starts the following screen appears.


Create a new connection to your WebLogic Server.
Next you can start monitoring the JVM/CPU usage and drill down into other JVM details in real time.

I will cover JRMC in more depth in a later post. For now let's enjoy the cockpit-style UI :)

Wednesday, July 27, 2011

ORABPEL-05207 Error deploying BPEL archive:Premature end of file

Recently I came across an issue on a SOA 10g server where one of the BPEL processes wasn't loading after a server restart. On checking the BPEL domain.log file I found the error message below:
Error while loading process 'XXXX, rev '1.0': Error deploying BPEL archive.An error occurred while attempting to deploy the BPEL archive file "[ domain = default, process = XXXX, revision = 1.0, state = 0, lifecycle = 0 ]"; the exception reported is: Premature end of file.
ORABPEL-05207
Error deploying BPEL archive
An error occurred while attempting to deploy the BPEL archive file "[ domain = default, process = XXXX, revision = 1.0, state = 0, lifecycle = 0 ]"; the exception reported is: Premature end of file.

When the Fusion server restarts, the BPEL archives are loaded from their corresponding temp directories. Apparently the server had hit 100% space utilization, and the subsequent restart caused the bpel.xml for this process to get corrupted (0 KB, as shown in the listing below). As a result the process was no longer getting loaded and threw the Premature end of file error.

$pwd
/soa/OracleAS_1/bpel/domains/default/tmp/.bpel_XXXX_1.0_9f89464f4c3e38.tmp
$ ls -lrt
..
-rw-r----- 1 soauser soauser   0   Jul 27 13:45 bpel.xml
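A quick way to spot other corrupted deployment descriptors, using the 10g home from the listing above (point find at your own bpel/domains directory):

# List any bpel.xml files that have been truncated to 0 bytes.
find /soa/OracleAS_1/bpel/domains -name bpel.xml -size 0 -ls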

To fix the issue I had to redeploy the BPEL process. But the question was: how did the server reach 100% space utilization? On digging further, I found that someone had turned on DEBUG mode for the loggers and left it that way for a few days... this had generated 40-50 GB of log files and filled up the space.

DEBUG is the most verbose logging level and should only be turned on while troubleshooting an issue, then turned off or switched back to a lower level immediately afterwards (especially on production servers).

Lessons learnt the hard way :)

Saturday, July 23, 2011

Enabling the BPMN Service Engine on EM Console


In order to enable the BPMN Service Engine in the EM Console, the following steps need to be followed:
1. Set bpm.enabled=true in setDomainEnv.sh:

        EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dbpm.enabled=true"
        export EXTRA_JAVA_PROPERTIES

2. Restart the Admin Server for the change to take effect.

Tuesday, July 19, 2011

How to specify different heap settings for Weblogic Admin Server and Managed Server

It is a common requirement in Dev/QA/PROD environments to have different heap size settings for the Admin Server and the Managed Servers. The usual practice for server startup is as below:

1. Start the Admin Server from the command line:
   $ nohup ./startWebLogic.sh &
2. Start the Node Manager from the command line:
   $ nohup ./startNodeManager.sh &
3. Start the Managed Server from the Admin Console.

Now, if we don't specify separate startup parameters for the Admin and Managed Servers, both start with the same heap settings, which is overkill for the Admin Server since it doesn't need a huge heap.

So, to set the heap size of a managed server that is started through Node Manager, do the following:

1. Specify your startup parameters in the "Arguments" field in the console, so that they are used when you start the managed server through the Admin Console.
2. Modify the nodemanager.properties file and set StartScriptEnabled to false. Without this, the managed server won't pick up the changed heap size after a restart; it will still take the values set in the setDomainEnv.sh script (same as the Admin Server). A short sketch of both settings follows this list.

3. Restart the Node Manager and the managed server for the new parameters to take effect.
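A minimal sketch of the two settings (the heap values are placeholders, not a recommendation):

# "Arguments" field for the managed server in the Admin Console, e.g.:
#   -Xms2g -Xmx2g
# $WL_HOME/common/nodemanager/nodemanager.properties:
StartScriptEnabled=false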

Saturday, June 25, 2011

Installing SOA 11g Cluster on Weblogic

SOA 11g cluster setup on WebLogic is definitely a lot easier than SOA 10g clustering on OC4J. I've tried to document the steps sequentially and will add screenshots later.

This is a 2-node cluster with the Admin Server running on one node and a managed server running on each node.

1. Download all the installables from OTN for the SOA version 11.1.1.4.
       jrockit-jdk1.6.0_24-R28.1.3-4.0.1-linux-x64.bin
       wls1034_generic.jar
       ofm_rcu_linux_11.1.1.4.0_disk1_1of1.zip
       ofm_soa_generic_11.1.1.4.0_disk1_1of2.zip
       ofm_soa_generic_11.1.1.4.0_disk1_2of2.zip

2. Run the RCU and create the schemas for the SOA Cluster. (xxx_SOAINFRA, xxx_MDS etc..)

3. Install Jrockit on both nodes.

4. Install weblogic 10.3.4 on both nodes
        java -Xms1024M -Dspace.detection=false -jar wls1034_generic.jar

5. Install SOA 11.1.1.4 on both nodes.
       a. specify oraInventory path
       b. skip software updates
       c. specify installation directory
       d. specify the DB details

6. Create the domain on Node1. Run config.sh in $ORACLE_HOME/common/bin
      a. Under "Select Optional Configuration" select 
               Managed Servers, Clusters and Machines & 
               Deployments and Services
     b. Add the 2 managed server names and their listen ports
     c. Configure cluster
     d. Add managed servers to the cluster
     e. Configure Machines (Add the 2 server names under Unix Machine tab)
     f. Assign servers to machines.
     g. Complete the domain creation....

7. Pack the SOA domain from Node1 using below command
cd $WL_HOME/common/bin
./pack.sh -managed=true -domain={path to SOA domain} -template=soadomaintemplate.jar -template_name=soa_domain_template

8. Copy the jar file to Node2 and run the unpack command there.
cd $WL_HOME/common/bin
./unpack.sh -domain={path to SOA domain} -template=soadomaintemplate.jar

9. On each host create the boot.properties file:
cd {path to SOA domain}
vi boot.properties
username=weblogic
password=welcome1
cp boot.properties servers/AdminServer/security

10. Start the Admin Server on host1 and disable hostname verification for the Admin and managed servers (SSL tab -> Advanced). Restart the Admin Server.

11. Start the Node Manager on both nodes to create the initial nodemanager.properties file. Then stop the Node Manager, edit $WL_HOME/common/nodemanager/nodemanager.properties to set the values below, and restart the Node Managers.
StartScriptEnabled=true
StopScriptEnabled=true

12. Now you can login to the Admin console and start the managed servers.

13. In case you have a load balancer, configure it to route requests to the two nodes. Composites deployed on the cluster can point to the LBR URL (make sure the endpoint URLs in composite.xml point to it). Also set the Server URL and Server Callback URL as shown in the screenshot below.

14. Coherence comes as part of the SOA Suite, and SOA clusters in 11g use Coherence for communication between nodes (similar to JGroups in 10g). Without the Coherence setup, deployments will not get distributed across all servers.

For unicast communication mode, additional Coherence properties need to be set. These are explained in the Oracle doc below:
http://download.oracle.com/docs/cd/E15523_01/core.1111/e12036/extend_soa.htm#CHDEAFJH

The startWeblogic.sh script on Node1 needs the properties below set for Coherence to work (replace "Node1 hostname" / "Node2 hostname" with the actual host names).

EXTRA_JAVA_PROPERTIES="-Dtangosol.coherence.wka1=Node1 hostname -Dtangosol.coherence.wka2=Node2 hostname -Dtangosol.coherence.localhost=Node1 hostname"

Similarly for Node2...
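For illustration, with hypothetical hostnames soahost1 and soahost2 (substitute your own), the two scripts would carry roughly the following:

# Node1 startWeblogic.sh
EXTRA_JAVA_PROPERTIES="-Dtangosol.coherence.wka1=soahost1 -Dtangosol.coherence.wka2=soahost2 -Dtangosol.coherence.localhost=soahost1"
# Node2 startWeblogic.sh
EXTRA_JAVA_PROPERTIES="-Dtangosol.coherence.wka1=soahost1 -Dtangosol.coherence.wka2=soahost2 -Dtangosol.coherence.localhost=soahost2"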

That's it! The WLS cluster should be ready to use now.

Tuesday, June 21, 2011

Nice book on Oracle Fusion Applications

Oracle Fusion Applications is the next-generation ERP offering from Oracle. It packs best-of-breed features from Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards, and offers a complete standards-based enterprise solution. It has been under development for a few years now and is built entirely on the Oracle Fusion Middleware stack (Oracle SOA Suite).

Check out the book on Oracle Fusion Applications on Amazon...
http://amzn.com/0071750339


An extract from the book detailing the technical overview of Fusion Applications:
http://www.oracle.com/technetwork/articles/managing-fusion-apps-418611.pdf

Installing SAP Adapter on SOA 11gR1 PS3 (11.1.1.4)

First and foremost, to clarify Oracle's naming conventions for SOA 11g versioning...
11g R1 PS1 - 11.1.1.2 
11g R1 PS2 - 11.1.1.3
11g R1 PS3 - 11.1.1.4

Now, the 11.1.1.4 version of SOA comes with a lot of bug fixes, performance enhancements (JRockit JVM instead of the Sun JDK) and additional features. "Seeing is believing!", so I decided to install it and check out the performance gains. The tricky part was the installation of the SAP adapter on top of it.

The Oracle documentation is again messed up with regard to installing this. I couldn't locate a direct URL on OTN to download the installer file. It seems the 11.1.1.4 version of the adapter installer isn't released yet, and the SAP adapter updates for 11gR1 PS3 have been released as an OPatch instead.

So, in case you are looking to install the iWay adapters (SAP, Siebel, PSFT, JDE), follow the approach below:

1. Install the 11g PS2 version of the adapter. For this you need to apply patch 10207507, which is nothing more than copying the ApplicationAdapter.zip into the thirdparty folder and unzipping it.


2. Download the latest OPatch version, i.e. p6880880_112000_Linux-x86-64.


3. Download patch 11880221 from My Oracle Support (Metalink) to upgrade the existing PS2 install to the PS3 version. The steps for applying this patch are as under (a consolidated sketch with example paths follows):
               a. Set MW_HOME
               b. Set ORACLE_HOME
               c. Add OPatch to the PATH
               d. Apply the patch by running the command below:
opatch apply -jre {path of JRockit jre} -invPtrLoc $ORACLE_HOME/oraInst.loc
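Putting steps a-d together, a rough sketch with assumed directory locations (none of these paths come from the original post; adjust them to your install):

export MW_HOME=/u01/app/oracle/middleware
export ORACLE_HOME=$MW_HOME/Oracle_SOA1
export PATH=$ORACLE_HOME/OPatch:$PATH
cd /tmp/11880221        # directory where the patch zip was extracted
opatch apply -jre $MW_HOME/jrockit-jdk1.6.0_24/jre -invPtrLoc $ORACLE_HOME/oraInst.loc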

Wednesday, May 18, 2011

Siebel to SAP R/3 Integration Approaches

I was recently evaluating various approaches to integrating Siebel and SAP R/3 systems, especially using Oracle SOA Suite. Siebel already provides the Siebel EAI Connector for SAP R/3, which is a tight coupling between Siebel and SAP R/3. If you are planning to use Oracle SOA Suite, it provides iWay adapters for both the Siebel and SAP R/3 end systems, which makes it possible to decouple the two systems and integrate them. Both approaches are explained below.

Siebel EAI Connector for SAP R/3

The Siebel EAI Connector for SAP R/3 provides connectivity using BAPI and IDOC transport adapters, along with predefined business processes. Using the connectors, you can exchange customer, order and product information between a Siebel application and SAP. This leverages Siebel Workflow and Business Service data maps, which are the transformations between data entities (e.g. Account in Siebel to Customer in SAP, or Product in Siebel to Material in SAP). There are some pre-built integrations for common business processes, and custom integrations can also be built.

More details can be found at the link below:
http://download.oracle.com/docs/cd/B31104_02/books/PDF/ConnSAP.pdf

Integrating using Oracle Siebel Adapter

We can also leverage Oracle Fusion Middleware and the available iWay adapters for the Siebel and SAP end systems. Ensure that the Siebel Business Services (WSDL) or Business Objects (XSD) are available in Application Explorer for WSDL/XSD generation. Similarly, ensure that the SAP BAPIs/IDocs are available in Application Explorer. After that you can create the Fusion processes to integrate the two end systems.

Below is a screenshot from Application Explorer once you create a target and connect to the Siebel system.

More details about this approach can be found at the link below:
http://download.oracle.com/docs/cd/E14571_01/doc.1111/e17056/intro.htm#i1013615

I have covered SAP integration with SOA Suite in my earlier posts:
Receiving idocs in BPEL
Invoking BAPIs from BPEL

Sunday, May 15, 2011

JMS Messages lost after server restart

Recently I came across an issue where unread JMS messages were lost from the queue when the WebLogic server was restarted. This was kind of strange, because WebLogic JMS provides a pretty stable message delivery solution. I did some more analysis and below are the findings.

WebLogic JMS supports message retention using either Oracle AQ or its out-of-the-box persistent store (file-based persistence is the default, with a DB/JDBC store as an alternative). In either case it should not lose messages. Messages can be removed from the queues ONLY if:
a. They are consumed by some process.
b. They expire, in which case the queue handler removes them.
c. The delivery mode of the messages coming from the JMS provider is Non-Persistent.

Which brings us to the question: what is JMSDeliveryMode? JMSDeliveryMode specifies PERSISTENT or NON_PERSISTENT messaging.
  • When a persistent message is sent, WebLogic JMS stores it in the JMS file or JDBC store (database).
  • WebLogic JMS does not store non-persistent messages in the JMS store (prefix_wlstore). These messages are guaranteed to be delivered at least once unless there is a system failure, in which case messages may be lost.
You can check the JMSDeliveryMode on incoming messages by looking at the JMS Header section.

<mes:WLJMSMessage xmlns:mes="http://www.bea.com/WLS/JMS/Message">
<mes:Header>
<mes:JMSMessageID>ID:xxxxxxx</mes:JMSMessageID>
<mes:JMSDeliveryMode>NON_PERSISTENT</mes:JMSDeliveryMode>
<mes:JMSExpiration>0</mes:JMSExpiration>
<mes:JMSPriority>4</mes:JMSPriority>
<mes:JMSRedelivered>false</mes:JMSRedelivered>
..
..
</mes:Header>
The JMS provider should set the delivery mode to PERSISTENT before sending messages to the consumer (the JMS queues in Fusion); by default, no additional configuration is needed on the Fusion side. In case the messages are still being sent as NON_PERSISTENT, WebLogic provides a queue property called "Delivery Mode Override" which can be used to override the delivery mode of incoming messages.


Friday, May 13, 2011

Controlling the Size and Number of OPMN Debug logs generated

In case you want to control the number and size of the files generated under the SOA_HOME/opmn/logs directory, you can do that by adding a few startup parameters in opmn.xml. If these are not added, the debug files keep growing over time until you run out of space (use du -csh * under the logs directory to see the space occupied by each sub-directory).

Usually the OC4J_SOA_xxxx directory, which holds the *.out and *.err files, consumes the most space. To control that, add the startup parameters below for automatic recycling of these files.

<ias-component id="SOA" status="enabled">
            <process-type id="OC4J_SOA" module-id="OC4J" status="enabled">
               <module-data>
                  <category id="start-parameters">
                     <data id="java-options" value="-server -XX:MaxPermSize=2048M -ms4096M -mx8192M -XX:AppendRatio=3 -Djava.security.policy=$ORACLE_HOME/j2ee/OC4J_SOA/config/java2.policy -Djava.awt.headless=true -Dhttp.webdir.enable=false -Doc4j.userThreads=true -Doracle.mdb.fastUndeploy=60 -Doc4j.formauth.redirect=true -Djava.net.preferIPv4Stack=true -Dorabpel.home=/soa/OracleAS_1/bpel -Xbootclasspath^/p:/soa/OracleAS_1/bpel/lib/orabpel-boot.jar -Dhttp.proxySet=false -Doraesb.home=/soa/OracleAS_1/integration/esb -DHTTPClient.disableKeepAlives=true -Dhttp.session.debug=false -Dfile.encoding=UTF-8 -Dstdstream.filesize=10 -Dstdstream.filenumber=10"/>
                     <data id="oc4j-options" value="-out /soa/OracleAS_1/opmn/logs/OC4J_SOA.out -err /soa/OracleAS_1/opmn/logs/OC4J_SOA.err "/>
                  </category>

The two parameters -Dstdstream.filesize=10 and -Dstdstream.filenumber=10 at the end of the java-options above limit each file to 10 MB and cap the number of files at 10; older files are overwritten. Restart the server for the changes to take effect.

Tuesday, May 10, 2011

Weblogic Admin Server Unable to Start After IP Change of Host

After changing the IP address of the app server host (say from xx.xx.xx.xx to yy.yy.yy.yy), the WebLogic Admin Server is unable to start. The following error message is seen in the log file:

<Error> <Server> <AdminServer> <DynamicListenThread[Default]> <<WLS Kernel>> <> <> <1305039521705> <BEA-002606> <Unable to create a server socket for listening on channel "Default". The address xx.xx.xx.xx might be incorrect or another process is using port 7001: java.net.BindException: Cannot assign requested address.>

After the IP change, make sure you have updated references to the old IP address in the following places (a quick grep check is sketched after this list):
  • If you used the IP address, instead of the hostname, as the listen address of the WebLogic Administration Server, make sure you change it in config.xml under the $MW_HOME/user_projects/domains/domain_name/config directory.
  • Also ensure that the /etc/hosts (or C:\Windows\system32\drivers\etc\hosts) file is modified to point to the new IP address.
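A minimal check for leftover references, assuming the domain path above and the old address xx.xx.xx.xx:

# Any hits below still point at the old IP and need updating.
grep -r "xx.xx.xx.xx" $MW_HOME/user_projects/domains/domain_name/config/
grep "xx.xx.xx.xx" /etc/hosts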
Restart the Admin Server and it should start up successfully now.

Tuesday, April 26, 2011

Using SFTP with Oracle SOA

A common requirement in integration projects is to transfer files in and out of a system in a secure manner. FTP is the usual protocol for transferring files, and if additional security is required then SFTP (Secure FTP) is the way to go. In this post I cover some of the ways the FTP adapter can be configured in SOA 10g and 11g to make use of SFTP.

For additional details on FTP adapter configuration, you can refer to the Oracle link below:
http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28994/adptr_file.htm

SFTP supports a couple of authentication mechanisms that provide additional security for the file transfer:
  • Password authentication
  • Public Key authentication
In password authentication, the external site/vendor hosting the FTP server shares a username/password combination, which has to be configured on the SOA server. At runtime, when an SFTP connection is attempted, that username/password is used to establish the connection.

Similarly, in public key authentication a private/public key pair is generated and the public key is shared with the external site/vendor hosting the FTP server. At runtime, when an SFTP connection is attempted, the Fusion process authenticates with the private key stored locally on the SOA server against the public key on the remote FTP server before sending/posting the files.

The configuration information in either case is stored in SOA_HOME/j2ee/OC4J_SOA/application-deployments/default/FtpAdapter/oc4j-ra.xml  (SOA 10g) or MW_HOME/Oracle_SOA1/soa/connectors/FtpAdapter.rar/weblogic-ra.xml (SOA 11g).

For password authentication, below are the properties you need to set (oc4j-ra.xml sample shown):
<config-property name="host" value="XXXXX"/>
<config-property name="port" value="22"/>
<config-property name="username" value="xxxxx"/>
<config-property name="password" value="xxxxx"/>
<config-property name="useSftp" value="true"/>
<config-property name="authenticationType" value="password"/>

For public key authentication, below are the properties you need to set (weblogic-ra.xml sample shown):
<wls:property>
<wls:name>host</wls:name>
<wls:value>XXXX</wls:value>
</wls:property>

<wls:property>
<wls:name>port</wls:name>
<wls:value>22</wls:value>
</wls:property>

<wls:property>
<wls:name>useSftp</wls:name>
<wls:value>true</wls:value>
</wls:property>

<wls:property>
<wls:name>authenticationType</wls:name>
<wls:value>publickey</wls:value>
</wls:property>

<wls:property>
<wls:name>privateKeyFile</wls:name>
<wls:value>path of private key file</wls:value>
</wls:property>

Apart from the above configuration, in the case of public key authentication follow these additional steps to generate the private/public key pair and do the corresponding setup (a consolidated sketch follows the list):

1. On the remote FTP server, ensure that /etc/ssh/sshd_config has the parameters below set:
              RSAAuthentication yes
              PubkeyAuthentication yes
2. On the SOA server, generate the public/private key pair using the command below:
            ssh-keygen -t rsa
3. Once the public and private keys are generated, make a note of the file path, file name, etc.
4. Then copy the public key content to the remote FTP server: log in as the account the FTP will be performed with and append the public key content to the ~/.ssh/authorized_keys file.
5. To modify weblogic-ra.xml in SOA 11g, extract it from the FtpAdapter.rar file and, after making the changes, repackage it using the command jar cvf FtpAdapter.rar .
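A rough consolidated sketch of steps 2-4, with a hypothetical remote account ftpuser@remoteftp and the default key location (~/.ssh/id_rsa):

# On the SOA server: generate the key pair (press Enter to accept the default path).
ssh-keygen -t rsa
# Append the public key to the FTP account's authorized_keys on the remote FTP server.
cat ~/.ssh/id_rsa.pub | ssh ftpuser@remoteftp 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'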

In case you run into errors like the ones below, work with your network administrator to unblock port 22 at the firewall.
sftp xxxxx
Connecting to xxxxx...
ssh: connect to host xxxx port 22: Connection refused
Couldn't read packet: Connection reset by peer


That's it! Now you should be able to transfer files securely.

Monday, April 18, 2011

Configuring FTP Adapter in SOA 10g Cluster for High Availability

If you have a 2-node SOA 10g cluster environment, it is essential that the FTP adapter is configured in an active/passive manner. Otherwise you may run into situations where both nodes try to read the same file from the remote FTP server, which leads to duplicate files entering the system.

The reason for the duplicates is that the FTP adapter on each node maintains its own (local) control file, where it stores the last read time of the file. So if a file has already been read by Node1, there is a chance that after a server restart the same file is picked up by Node2 as well, because the times maintained in the control files are out of sync between the nodes.

There are a couple of good articles published by Oracle on this specific configuration. The basic idea of the solution is to keep the control files in a shared folder accessible from both nodes.

Step1:
=====
Make sure the FTP adapters are configured in singleton mode, i.e. the BPEL clusterName value specified in $ORACLE_HOME/bpel/system/config/collaxa-config.xml should be different from the adapter clusterGroupId property set inside the bpel.xml of the BPEL project.

Also, the multicast host and port in the jgroups-properties.xml file should be the same on both nodes.

Step2:
=====
Once Step 1 is in place, create a folder on a shared file system. Either use external shared storage, or create a directory on one node and use an NFS share to mount it on the other node. Either way, make sure the folder is writable from both nodes. This folder will store the control files.

Next, back up and edit the $ORACLE_HOME/bpel/system/service/config/pc.properties file on each node and set the property oracle.tip.adapter.file.controldirpath to the shared folder path.
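A minimal sketch, assuming a hypothetical shared path /shared/ftp_control exported from Node1 over NFS (external shared storage works just as well):

# On Node2: mount the directory exported by Node1.
mkdir -p /shared/ftp_control
mount node1:/shared/ftp_control /shared/ftp_control
# In pc.properties on BOTH nodes:
oracle.tip.adapter.file.controldirpath=/shared/ftp_control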

Restart the servers for the change to take effect and test the scenario.

Tuesday, April 5, 2011

ORABPEL-05215 Error while deploying BPEL processes

I was working on some SOA 10g deployments recently and came across this error:

ORABPEL-05215
Error while loading process. The process domain encountered the following errors while loading the process "XXXXXX" (revision "1.0"): null.
If you have installed a patch to the server, please check that the bpelcClasspath domain property includes the patch classes.

I tried looking into the OPMN logs, but nothing additional was mentioned there. This was a simple BPEL process with no embedded Java code, so it definitely wasn't an issue with the classpath settings.

On further checking I found that the ORACLE_HOME and PATH used by my build scripts were pointing to a JDeveloper 10.1.3.5 directory, while the SOA Suite version was 10.1.3.4.

Downloading JDeveloper 10.1.3.4 and pointing ORACLE_HOME and PATH to that directory worked! So when you encounter this error, ensure that your SOA Suite version matches your JDeveloper version.

Friday, March 25, 2011

java.sql.SQLSyntaxErrorException: ORA-02089: COMMIT is not allowed in a subordinate session

While trying to call a stored procedure from an Oracle SOA 11g BPEL process, it throws the error below:
"java.sql.SQLSyntaxErrorException: ORA-02089: COMMIT is not allowed in a subordinate session".

If you are using global transactions (XA) in your DB Adapter, the commit happens only after the BPEL process completes. To avoid the above error:

1. Make sure you don't have explicit commits within the stored procedure, as BPEL manages the transaction commit and there is a conflict if the stored procedure has an explicit commit inside it.

2. Alternatively, you can use local transactions (non-XA) in your DB Adapter if you don't want to wait until the process is over for the commit to happen.

Monday, February 14, 2011

flowN for parallel processing in BPEL

Recently I came across a requirement where I had to implement parallel processing in my BPEL process. Basically I had to post some data to an SAP system and was asked to use multiple connections to increase the throughput. The FlowN activity is ideal for this scenario, as I had to perform similar processing on different messages/payloads. It also gave me the flexibility to increase the number of parallel flows later, if higher throughput was desired, without making any change to the code (set the desired number of flows as a BPEL preference and assign it to the FlowN N variable).

However, I made one mistake: not creating a scope inside the FlowN and defining the variables locally inside that scope. When the FlowN executes, this scope is processed in parallel, each branch working on a different message/payload based on the index variable. Since I had declared the variables globally, at runtime the parallel flows weren't executing correctly and always used the payload/message of the first flow.

<assign name="Set_Counter">
  <copy>
     <from expression="ora:getPreference('NumberOfFlows')"/>
        <to variable="NumOfFlowsToBeProcessed"/>
    </copy>
</assign>

<bpelx:flowN name="Parallel_Flow"
  N="bpws:getVariableData('NumOfFlowsToBeProcessed')"
  indexVariable="Parallel_Flow_Variable">
   <scope name="FlowN_Scope">
       <variables>
           <variable name="Invoke_WS_InputVariable"/>
              ...
       </variables>       
  <sequence name="Sequence_1">
           <assign name="Assign_Input">
               <copy>
                 <from variable="Fetch_Variable"
                   part="part1"
                   query="/ns1:ListOfData/ns1:Data[$Parallel_Flow_Variable]/ns1:name"/>
                   <to variable="Invoke_WS_InputVariable"
                    part="payload"
                    query="/ns2:DataList/ns2:Data/ns2:name"/>
                 </copy>
            </assign>
            <!-- invoke the web service here using Invoke_WS_InputVariable -->
      </sequence>
   </scope>
</bpelx:flowN>

The correct usage is shown in the code snippet above: declaring the variables locally inside the FlowN scope allows the parallel flows to execute correctly.

A nice read about True Parallelism in BPEL FlowN activity.
true-parallellism-of-the-oracle-bpel-pm-flow-activity