Tuesday, August 25, 2015

Sending Error Notifications in MFT 12c

Oracle SOA Suite 12c introduced a new product called Managed File Transfer (MFT). MFT is a centralized file transfer solution for enterprises that addresses several pain points around file transfers:
  • It provides secure file transfers, tracking each and every step of the end-to-end file transmission.
  • An easy-to-use UI for designing, monitoring, and administering the file transfer solution, which can be used by non-technical staff as well.
  • Extensive reporting capability with detailed status tracking and options to resubmit failed transfers.
  • Built-in support for many pre- and post-processing actions like compression/decompression and encryption/decryption (PGP, etc.), which means there is no need to write custom code for these actions.
  • Out-of-the-box support for many technologies, including SOA and OSB, which in turn opens up multiple integration patterns. For example: for large payload processing, MFT can simply consume the source file and pass it as a reference to a SOA target for chunked reading and transformation.
  • Integration with ESS (Enterprise Scheduler Service), another new product feature in 12c, allowing you to create flexible schedules for file transfers without having to write any custom code (Quartz, Control-M scripts, cron jobs, etc.).

Even though the centralized monitoring dashboard in the MFT console is robust and provides granular details of each phase of a file transfer (e.g., if the target system is SOA, it has embedded links to the EM console for tracing the status there as well), in a typical production environment there may still be a need to send email alerts reporting file transfer failures to a support distribution list (you cannot expect people to keep watching the dashboard all the time).

Below are the high-level steps to enable email notifications in MFT 12c.

Step 1:
Run the following WLST commands in sequence to enable the runtime error event and add a contact for email notification:
  • cd ${MW_HOME}/mft/common/bin
  • ./wlst.sh
  • connect("weblogic","xxxx","t3://hostname:port")
  • updateEvent('RUNTIME_ERROR_EVENT', true)
  • createContact('Email', 'abc@xyz.com')
  • addContactToNotification('RUNTIME_ERROR_EVENT', 'Email', 'abc@xyz.com')
  • disconnect()
  • exit()
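
The same calls can also be saved into a script file and passed to wlst.sh, which is convenient for repeating the setup across environments. A minimal sketch (the file name is hypothetical; the credentials and URL are the placeholders from the steps above):

  # enableErrorNotifications.py (hypothetical name) -- run as: ./wlst.sh enableErrorNotifications.py
  connect('weblogic', 'xxxx', 't3://hostname:port')
  updateEvent('RUNTIME_ERROR_EVENT', true)                  # enable the runtime error event
  createContact('Email', 'abc@xyz.com')                     # register the email contact
  addContactToNotification('RUNTIME_ERROR_EVENT', 'Email', 'abc@xyz.com')
  disconnect()
  exit()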

Step 2:
Configure the email driver for notifications (this is done from Enterprise Manager). At a minimum, set the Outgoing Mail Server and the port, which are the mandatory properties; an illustrative set of values is sketched below.
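The property names below are the standard UMS email driver properties; the values shown are purely illustrative:

  OutgoingMailServer         = smtp.example.com    # mandatory: your SMTP host
  OutgoingMailServerPort     = 25                  # mandatory: SMTP port
  OutgoingMailServerSecurity = None                # optional: None, TLS or SSL
  OutgoingUsername           = mft_alerts          # optional: only if the SMTP server requires authentication
  OutgoingPassword           = ********            # optional: paired with the username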

You have to restart the MFT managed server after changing these settings.

Step 3:
Replicate a file transfer error, verify that the error shows up in the Monitoring Dashboard, and check that the error notification email arrives in your mail client (Outlook, for example).

NOTE:

If the event listed above is not enabled, or no contacts are specified, notification messages are sent to the JMS queue MFTExceptionQueue instead.
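
If you want to confirm whether unnoticed notifications are accumulating there, one quick check is the queue depth via the WLST domain runtime MBeans. A rough sketch; the server, JMS server, and module names below are hypothetical, so substitute the ones from your MFT domain:

  connect('weblogic', 'xxxx', 't3://hostname:port')
  domainRuntime()
  # generic runtime path: ServerRuntimes/<server>/JMSRuntime/<server>.jms/JMSServers/<jmsServer>/Destinations/<module>!<queue>
  cd('ServerRuntimes/mft_server1/JMSRuntime/mft_server1.jms/JMSServers/MFTJMSServer/Destinations/MFTJMSModule!MFTExceptionQueue')
  print get('MessagesCurrentCount')                 # number of messages currently on the queue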


Friday, August 21, 2015

SOA Governance in 12c: OER vs. OAC

As important as it is for an enterprise to move towards Service Oriented Architecture to bring the necessary agility to its business, it is equally important to implement a SOA governance framework. Services left ungoverned lead to redundancy and duplication, non-standards-based development, and a slower time to market. SOA governance controls the service lifecycle right from inception, design, and build through to the decommissioning of the service. It helps increase service reuse and also provides tools and reports for tracking the usage of services across the enterprise, thereby providing an accurate measure of the ROI.

Oracle's 12c Fusion Middleware stack provides two SOA governance products:

  • Oracle Enterprise Repository (OER)
  • Oracle API Catalog (OAC)
OER provides robust design-time as well as run-time governance support for the service lifecycle, enabling storage and management of extensible metadata for composites, services, business processes, and other IT artifacts.

OAC, on the other hand, is a lightweight SOA governance tool, allowing you to build a catalog of your enterprise's APIs. It provides a simplified metadata model for API assets, along with easy search capabilities on the console.

I won't be covering in-depth details about each product, as there is some great content about that online, like the In-Depth look of API Catalog 12c. Having evaluated both products, what I intend to do with this blog post is a feature comparison between the two products in the important areas. I hope this gives a high-level overview of the features available and helps in choosing the right tool.

User-Role Mapping
OER :
There are several roles available in OER, and based on the organization's needs, all of these roles can be leveraged at various phases of the service lifecycle. The OER Admin screen provides the ability to control the access settings for each of these roles. The primary role for reviewing and approving assets is Registrar. Apart from that, there is the User role, which has read-only privileges; Project Architects and Advanced Submitters, who can edit assets; and several administrator roles that can perform different levels of admin tasks (granular segregation of duties).

OAC:
OAC has a simplified role structure with just three roles:
  • Developer – can search for APIs (SOAP/REST) and consume them
  • Curator – reviews an API, can edit it to enrich it with additional metadata, and publishes it
  • Admin – can harvest APIs from the command line and perform other admin tasks

Projects-Departments
OER :
Projects are the primary means of gathering metrics in OER. OER tracks assets produced by projects, as well as assets consumed by projects. In order for a user to access any files associated with assets in OER, the user must be assigned to a project. Both Users and Projects are assigned to Departments. This is convenient from a reporting standpoint, as organizations can then track the production and consumption of reusable assets to a specific department. OER allows creation of both projects and departments apart from the Default Department and Common Project.

OAC:
OAC doesn't allow creation of custom projects; all API assets are published and tracked under the default "Common Project". It does, however, allow creation of custom Departments.

Asset Data Model
OER :
OER provides a base set of asset types, plus an option to import the harvester solution pack, which includes asset types specific to SOA artifacts (BPEL/OSB). Most SOA code artifacts, when harvested, get mapped to one of these asset types underneath. For example: an SCA project, upon harvesting, will create several OER assets like Composite, Business Process:BPEL, ComponentType, XSD, XSLT, WSDL, JCA, FaultPolicies, Adapter, Interface, Endpoint, Service, etc.


There is also an option to create custom asset types if the need arises. For example, we can create document asset types to capture a Requirements Specification, Technical Design Document, etc., mapping to the various stages of the service lifecycle (SDLC).

OAC:
OAC follows a simplified metadata model for API assets. It classifies API assets as either REST or SOAP, and every code artifact harvested (BPEL or OSB) gets categorized into one of these two service types.

Also, OAC doesn't provide an option to create any custom asset types from the console or command line.

Asset Approval Workflow
OER :
Assets in each phase of the service lifecycle move through various states as they progress through the governance approval workflow.

An asset harvested from the command line or console first goes to the Submitted - Pending Review state. Once the Registrar accepts and approves it upon review, it moves to the Submitted - Under Review state. As further approvals happen and the Registrar finally registers it, the asset moves to the Registered state. Only Registered assets are available for consumption by Users/Developers.

OAC:
OAC follows a simplified approval workflow for API assets. Every API asset harvested from the command line starts in the Draft state in OAC. The Curator logs in to OAC, reviews the asset, edits it with additional metadata or documentation, and finally changes the API status to Published. Only Published APIs are available for consumption by Developers.

Harvesting Assets
OER :
OER allows asset submission and harvesting from both the OER console and the command-line tool.

From the command line, edit the HarvesterSettings.xml file to make sure the OER connection credentials and the SOA/OSB project details are specified correctly, and then execute either harvester.sh for SOA artifacts or osb-harvester.sh for OSB artifacts, as sketched below.
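
For illustration, a typical command-line run could look like the following sketch (the tools directory shown is a placeholder; locate the harvester under your own OER/OAC installation):

  cd $OER_HOME/tools/harvester          # placeholder path to the harvester tool
  vi HarvesterSettings.xml              # set the OER/OAC URL, credentials and the SOA/OSB artifact location
  ./harvester.sh                        # harvest SOA composites
  ./osb-harvester.sh                    # harvest OSB projects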


OAC :
OAC doesn't allow any harvesting from its console. You can, however, use the same OER harvester scripts to harvest API assets into OAC from the command line.

NOTE:  Both OAC and OER use the same installer and command line tools.

System Settings
OER :
The OER Admin screen provides a great set of administration tools covering many areas, such as Reporting, Notifications, SFID for usage tracking, LDAP/SSO integration, Policy Management, etc.

OAC:
The OAC Admin screen has a comparatively limited set of administration functions, with no support for notification emails or reporting. You do, however, get support for LDAP/Active Directory integration.

Usage Tracking & Reporting
OER :
One way is to Use/Download an asset from the OER console; its usage then gets tracked against the corresponding project (the console prompts you to select the project on screen).

The other way to consume an asset is from JDeveloper (or another IDE) with the OER plugin installed. Once you enable SFID, usage is detected and tracked automatically by OER.

OER provides out-of-the-box reports that can be used to measure the overall productivity of project teams and to show the quality, status, and value of the asset portfolio. This requires a separate installation of, and integration with, Oracle BI Publisher 11g, as the reports are rendered in BIP.

OAC:
OAC also provides similar options to track usage of API assets from the console as well as from JDeveloper. From the console, the Developer has to click the Add to MyAPI link for tracking to begin on any Published API asset.

However, there is no reporting feature available in OAC.


Monday, April 20, 2015

Service Monitoring in SOA 11g: Using Oracle BTM 12c

Oracle Business Transaction Management (OBTM) is a service monitoring tool for Oracle SOA environments. Oracle acquired it from AmberPoint and has rebranded it, adding several capabilities to make it compatible with the Oracle FMW stack. Oracle Enterprise Manager (OEM), which provides server monitoring capabilities, can be used alongside BTM to provide end-to-end monitoring at the enterprise level.

SOA developers usually rely on the EM console for monitoring and debugging SOA instances, the Oracle Service Bus console (if reporting is enabled) for Service Bus flows, or log files in the case of Java components. When a landscape contains several of these components tied together, debugging becomes a nightmare for the people supporting and maintaining the system.

BTM comes in handy in these scenarios:
  • It's a centralized service monitoring tool that gives a lot of insight into service transactions.
  • It allows automatic data collection through BTM Observers, which are non-intrusive in nature (unlike BAM, it doesn't require any modifications to existing service code).
  • It provides end-to-end visibility into a service transaction spanning different types of components (BPEL processes + OSB flows + Java web services + database calls, etc.).
At the time of writing this article, I used SOA Suite/OSB 11.1.1.7 with BTM 12.1.0.6.7 on an 11g R2 database. I won't cover the various installation and configuration steps for setting up the tool, as they are well documented in the Oracle documentation.


I would like to touch upon some of the features the tool provides and a typical topology that would be used in most SOA deployments.

Service Endpoint Monitoring
By default, the BTM Observers monitor most service endpoints: BPEL endpoints, OSB proxy endpoints and business services, database adapter calls (create connection, close connection, execute, etc.), POJOs, JMS queues, and so on. Even without creating any transaction, you can analyze the data related to each service endpoint, and this data is retained for a configured amount of time (based on the logger system policy). You can also create custom views of frequently viewed service endpoints related to a particular domain/container.

Transactions
Transactions are very useful when the service interface involves multiple components such as OSB, BPEL, Java, and DB adapter calls. Service interactions that are asynchronous in nature can also be correlated, using a Message Fingerprint (a unique key identified by BTM), the ECID, or some custom message property to link them up. Transactions provide end-to-end visibility into the service message flow through the various components and, if logging is enabled, give analysis data such as timings and fault details at the individual message level.

SLA Policies/Alerts
SLA policies can be defined on various service endpoints, with email alerts triggered to the concerned stakeholders if they are breached. Typical examples: the average response time of a service exceeding a threshold value for a given period of time, the maximum response time of a service breaching a threshold, or the number of faults on a critical business service exceeding the high-water mark.

Based on my experience with the tool:
  • Out of the five JVM components in a typical BTM installation, the most important ones are btmMonitor and btmMain. If the transaction load on the servers is high, it is recommended to horizontally scale btmMonitor across cluster nodes, as it is the component that interacts with the Observers installed on all the monitored domains. btmPerformance and btmTransaction are two of the other JVMs.
  • I would say this is a very nice tool for technical people and developers, not something that would be used by business users. The UI provides very detailed, granular information for each service transaction, capturing metrics like throughput, average/maximum response times, and faults, which can be very useful while performance testing services.
  • The SLA alert emails are very helpful to the support team while debugging production issues when end systems aren't responding in a timely fashion.
  • The product provides a BTM CLI (command-line interface) utility that can be used to extract service/transaction information for a given time range. This can be used for reporting purposes, for use cases like how many transactions ran for more than 30 seconds in a day for a specific service interface.
  • The underlying BTM database tables are highly denormalized (perhaps to optimize read performance and render data on the UI faster), which makes it quite difficult to query for specific data. It would be an understatement to say these tables are not straightforward to interpret.