Friday, 17 May 2013

Introduction to Oracle E-Business Suite (EBS)


Oracle E-Business Suite (EBS) version 12 is an internet-enabled product that can be managed from a single site. A company can operate a single data center with a single database, similar to other ERP products. Release 12 was launched in February 2007 and contains a number of product lines that users can implement for their business. Oracle EBS includes the company’s enterprise resource planning (ERP) product as well as supply chain management (SCM) and customer relationship management (CRM) applications. Each application is licensed separately, so companies can select the combination that suits their business processes.
The applications found in the Oracle EBS include:
  • Oracle CRM
  • Oracle Financials
  • Oracle Human Resource Management System (HRMS)
  • Oracle Logistics
  • Oracle Supply Chain Applications
  • Oracle Order Management
  • Oracle Transportation Management
  • Oracle Warehouse Management System

Oracle CRM

The Oracle CRM application provides the "front office" functions that help a business attract customers and increase customer loyalty and satisfaction. The basic functionality includes marketing, order capture, contracts, field service, spares management and call center functionality. The CRM application also includes internet-focused products such as catalogs, content management, and quote and order management.

Oracle Financials

The Financials applications include General Ledger, Cash Management, Payables, Receivables, Fixed Assets, Treasury, Property Management, Financial Analyzer and a self-service expenses function.

Oracle Human Resource Management System (HRMS)

The HRMS application helps companies manage the recruit-to-retire process. The application gives users a real-time view of all the HR activities, including recruiting, time management, training, compensation, benefits and payroll. The HRMS suite integrates fully with the other EBS applications and supplies the users with an analytics package that allows the extraction of HR data with ease.

Oracle Logistics

The logistics module allows users to plan, manage, and control the flow and storage of products and services within a business. It provides information to plan future demand and safety stock within the warehouse. The application can create detailed, constraint-based production schedules and material plans.

Oracle Supply Chain Applications

The supply chain applications power a business's information-driven supply chain. Companies can predict market requirements, innovate in response to volatile market conditions, and align operations across global networks. Oracle offers industry-specific solutions that include product development, demand management, sales and operations planning, transportation management, and supply management.

Oracle Order Management

Order management applications can streamline and automate a business’s entire sales order management process, from order promising and order capture to transportation and shipment. Order management also includes EDI, XML, telesales and web storefronts. Some of the business benefits that can be achieved include reduced fulfillment costs, reduced order fulfillment cycle time, increased order accuracy and greater on-time delivery.

Oracle Transportation Management

Transportation management (TMS) provides transportation planning and execution capabilities to shippers and third party logistics providers. It integrates and streamlines transportation planning, execution and freight payment. The TMS function delivers functionality for all modes of transportation, from full truckload to complex air, ocean, and rail shipments. The benefits of the TMS function include reduced transportation costs, improved customer service and greater asset utilization.

Oracle Warehouse Management System

Oracle’s Warehouse Management System allows the coordinated movement of goods and information throughout the extended distribution process. The module provides business processes that can deliver efficient utilization of employees, equipment, and space in the distribution process. Benefits include an acceleration of the flow of products through the supply chain while reducing lead times and releasing working capital, real time inventory management, cross-docking, pick-by-line, advanced ship notices (ASN), inbound planning and yard management.


Oracle’s ERP product is second only to SAP in sales, and its best-of-breed solution can be found in thousands of companies across the world. The applications included in the E-Business Suite cover the wide range of business processes found in any company. The industry-specific solutions supplied by Oracle can greatly reduce the time and resources required to implement the suite and provide businesses with preconfigured business processes.

Wednesday, 8 May 2013

Domain-Value Maps and Cross References

Requirement -

When an object flows from one system to another, each system using its own entities to represent the same type of object, a transformation is required to map the entity from one system to the other. For example, when a new customer is created in a SAP application, you might want to create a new entry for the same customer in your Oracle E-Business Suite (EBS) application.

So functionality is required to map entities between different domains. Even if another domain is added in the future, its entities must again be represented in some common format. For example, when an object is created in system A with unique identifier A001, the same object is propagated to system B with identifier B001. Decode functionality is required to map B001 from A001.

Solution -
Oracle ESB provides two solutions to this problem -

1 Domain-value map –
A domain-value map can be created and populated using Oracle ESB Control. It can then be used with the Oracle JDeveloper Mapper tool while developing XSLT data transformations at design time. Then, at runtime, the lookups for application-specific values occur. It uses an XML file to store the mapping values.

For example, suppose we want to use a domain-value map to perform a runtime lookup that converts a source ID to the target ID. Using Oracle ESB, the source ID is passed in and the target ID is returned by the lookup function in a transformation.

1.1 Architecture -
Oracle ESB DVM uses XML to store the mapping values. The response time of the first call to lookup-dvm is high, as the internal cache manager loads the XML document into memory. If the data changes frequently, the response time increases, because the internal cache manager has to reload it again and again. The search algorithm used by lookup-dvm is a sequential search; if the value to be found sits in the last row of the DVM file, the response time increases. The DVM functionality of Oracle ESB is memory intensive if it has a large set of mapping values.

DVM works best with a small set of mapping values. The internal cache manager improves performance tremendously in this case, as the XML document always resides in runtime memory.
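The sequential search described above can be illustrated with a toy shell sketch. This is an illustration only, not Oracle's actual DVM implementation: the file layout and the `lookup_dvm` function are simplified stand-ins.

```shell
# Toy DVM file: each row maps a source id to a target id
# (simplified layout, not the exact Oracle ESB schema).
cat > /tmp/toy-dvm.xml <<'EOF'
<dvm name="IdMap">
  <row><cell>A001</cell><cell>B001</cell></row>
  <row><cell>A002</cell><cell>B002</cell></row>
</dvm>
EOF

# Sequential search: scan rows top to bottom, return the first match,
# fall back to a default value when nothing matches (mirrors lookup-dvm).
lookup_dvm() {
  src="$1"; default="$2"
  result=$(awk -v src="$src" -F'[<>]' '
    /<row>/ { if ($5 == src) { print $9; exit } }' /tmp/toy-dvm.xml)
  echo "${result:-$default}"
}

lookup_dvm A002 UNKNOWN   # -> B002
lookup_dvm A999 UNKNOWN   # -> UNKNOWN (the passed-in default)
```

Note how the miss case only surfaces after the whole file has been scanned, which is why response time grows with the size of the map.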

1.2 How to use –
Creating and populating a DVM – There are two ways of creating and populating domain-value maps in Oracle ESB: manually edit the DVM using the Oracle ESB Console, or import values using the import command provided in the Oracle ESB Console. A DVM does not allow two rows with the same set of values. Manual editing should be used when we have a small set of mapping values that change frequently. Importing DVM files is the other option, provided the files follow the required format.

Looking up – To look up values we have the lookup-dvm function. This function can be used in both Oracle ESB and Oracle BPEL through a transformation. If the target domain has multiple values for a specific source domain value, the function returns the first value, as the search is sequential. If the lookup fails to find the target domain value, it returns the default value passed to the lookup-dvm function.

2 Cross References –
A cross reference table consists of the following two parts: the metadata and the actual data. The metadata is created using the cross reference command line utilities and is stored in the repository as an XML file. The actual data is stored in the database.

You can use a cross reference table to look up column values at run time. However, before using a cross reference to look up a particular value, you need to populate it at run time. This can be done by using the cross reference XPath functions. The XPath functions enable you to populate a cross reference, perform lookups, and delete a column value. These XPath functions can be used in the Expression Builder dialog box to create an expression or in the XSLT Mapper dialog box to create transformations.

2.1 How it works -
This functionality for finding cross-domain mappings uses both XML and a database. XML is used to store the metadata of the xref table, and the actual data is stored in the database. Creation of xref tables is done using the xreftool command line utility. The default datasource used by cross references has the JNDI name jdbc/xref. Other datasources that can be used are jdbc/esb and jdbc/BPELServerDataSource. To start working with cross references, it is required to create a table in one of the above datasources. The following SQL query is used to create the XREF_DATA table -


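The query itself did not survive in this post; the following is a sketch of the XREF_DATA layout as commonly documented for Oracle ESB. Column names and sizes are assumptions and should be verified against the xref scripts shipped with your installation.

```sql
CREATE TABLE XREF_DATA (
  XREF_TABLE_NAME   VARCHAR2(2000) NOT NULL,   -- which xref table the row belongs to
  XREF_COLUMN_NAME  VARCHAR2(2000) NOT NULL,   -- the domain (column) within that table
  ROW_NUMBER        VARCHAR2(48)   NOT NULL,   -- correlates the cells of one logical row
  VALUE             VARCHAR2(2000) NOT NULL,   -- the domain-specific identifier
  IS_DELETED        VARCHAR2(1)    NOT NULL,   -- soft-delete flag
  LAST_MODIFIED     TIMESTAMP      NOT NULL
);
```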
So here, for each lookup, a JDBC call is made to select the target domain value, which means this type of referencing is not memory intensive. It proves to be best when we have a large, dynamic set of mapping values. The performance depends entirely on the type and the location of the datasource. It also supports 1:M mapping of domain values.

2.2 How to use – The XPath functions provided can be used in the transformation and assign activities of Oracle BPEL. These functions have better exception handling capability. To import and export the cross reference tables, command line utilities such as xrefexport and xrefimport are provided.

Tuesday, 7 May 2013

Basic Unix Commands

1) login to the unix/solaris server
 username: cts
 passwd : *******

You will log in to the server. It will take you to default home_directory.

2) pwd
it shows/prints the present working directory

3)ls -l
gives listing of the files in present directory

4) cd ..
takes you to the parent directory

5)mkdir <directory>
will create directory

6) mkdir -p /home/jb/j1/j2/j3
will create all the non-existing directories at a stretch
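A quick check (in a scratch directory, so nothing real is touched) that -p creates the whole chain in one go:

```shell
tmp=$(mktemp -d)                  # scratch area for the demo
mkdir -p "$tmp/j1/j2/j3"          # creates j1, j2 and j3 in one command
[ -d "$tmp/j1/j2/j3" ] && echo "created"
mkdir -p "$tmp/j1/j2/j3"          # -p is also idempotent: no error if it already exists
```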

7)vi <file_name>
opens file for reading/editing
8)cat <file_name>
display contents of file
9)more <file_name>
displays page by page contents of file
10) tail <file_name>
shows last 10 lines of file

use tail -f for continuous update of file_name
head <file_name>
shows first 10 lines of file_name
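For example, with a 20-line file:

```shell
tmp=$(mktemp -d)
seq 1 20 > "$tmp/nums.txt"        # a 20-line sample file
tail "$tmp/nums.txt" | head -1    # tail keeps lines 11-20, so this prints 11
head "$tmp/nums.txt" | tail -1    # head keeps lines 1-10, so this prints 10
tail -3 "$tmp/nums.txt"           # -<n> selects how many lines: prints 18 19 20
```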

11) touch <file_name>
creates a zero/dummy file

12)ln file1 file2
creates link of file1 to file2
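Hard links made with ln share one inode, which you can verify with test's -ef operator:

```shell
tmp=$(mktemp -d)
echo "hello" > "$tmp/file1"
ln "$tmp/file1" "$tmp/file2"      # hard link: both names point at one inode
[ "$tmp/file1" -ef "$tmp/file2" ] && echo "same file"
echo "more" >> "$tmp/file2"       # writing through either name updates the data
cat "$tmp/file1"                  # shows both lines
```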

13) file <file_name>
shows what type of file it is, for example:
$ file *
acrawley.html:  ascii text
admin:          directory
afiedt.buf:     ascii text
autosys_env_IBKNYR1:    commands text

14) cd /home/<directory_name>
takes you to /home/<directory_name> directory
likewise you can give any directory
Note: remembering to give the path from the beginning, i.e. '/', is very important. It's called the root directory. In Unix/Solaris/<any_flavor_of_unix> a full path is specified from the root, i.e. '/'.
If you have root privileges you can go to any directory.
For normal users its not possible.

15) clear
clears the screen

16) cd /usr/bin
this directory has all the commands of unix. you can say ls -l or ls for listing all the commands in that directory and can try executing each of the commands.
if you have any doubt on any command just say "man <command_name>". For example, if you want to know what the ls command will do, give "man ls" and it displays the man page.

Similarly /usr/sbin has administrative related commands
/usr/lib has libraries

/etc consists of system administrative and tuning files
17) who
will display which users are logged into the system.
18) w
will display more info about the users logged in
19) once you login to the system your home directory will be set. If in between you navigate to other directories and after that give "cd", it will take you back to your home directory.
20) ps -ef
shows process status of various active processes.(use more/other options to get more info)
21) rm <file_name>
will delete file specified

22)rm *
will delete all the files in the present directory (BE CAREFUL WHILE GIVING THIS COMMAND)
23) grep <pattern> file_name
checks pattern/word in file name specified
24) chmod 777 <file_name>
changes file_name/directory permissions
chmod -R 777 /<directory_name>
changes permissions recursively for all the files and directories under the parent directory
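A small demonstration of how the mode bits change what the owner can do (777 from the example grants everything to everyone; tighter modes like these are usually safer):

```shell
tmp=$(mktemp -d)
touch "$tmp/script.sh"
chmod 600 "$tmp/script.sh"        # rw for owner only, no execute bit
[ -r "$tmp/script.sh" ] && [ ! -x "$tmp/script.sh" ] && echo "readable, not executable"
chmod 700 "$tmp/script.sh"        # add the execute bit for the owner
[ -x "$tmp/script.sh" ] && echo "executable now"
mkdir -p "$tmp/dir/sub"
chmod -R 755 "$tmp/dir"           # -R applies the mode to dir and everything under it
```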

25) chown owner:group <file_name>
changes owner and group for the file_name

Similarly  chown -R owner:group /<directory>
changes ownership/group recursively for all the files and directories under the parent directory

26)rsh <server_name>
rsh -l <login_name> <server_name>
rcp file1 file2

accessing remote servers (This requires pre-configuration on remote servers like .rhosts and hosts.equiv)
27) gunzip <file_name>
unzips file name

gzip <file_name>
zips file_name

compress <file_name>
compresses file_name (gzip and compress use different compression algorithms)

uncompress <file_name>
uncompresses file_name

pack <file_name>
unpack <file_name>

packs/unpacks file_name
To compress a whole directory, archive it with tar first and then compress the archive.
28) which <file_name>
shows if the file_name/command exists and if exists where its path is

29)bc -l
arbitrary-precision calculator (-l loads the standard math library)

30) ulimit -a
shows the limits on file size, time, memory etc. available for the current shell

31)man <command_name>
gives help/man pages of command given

32)write <user_name>
you can write messages to the logged in users on the server

33) wall
this command writes/sends messages to all users logged in (useful while shutting down the m/c)

34)fuser -k /dev/pts/2
kills terminal pts/2 and closes its connection
35)nohup <command_name> &
nohup is a very useful command. It keeps the command running even if the telnet connection is closed/broken.
& is used for running command in background.

36)crontab -l
shows the cron jobs running/scheduled for the current user.
you can copy/redirect the jobs to an ascii file, edit/add jobs and resubmit to cron as

-->$crontab -l > present_cronjobs
-->edit/add entries to present_cronjobs
-->$crontab present_cronjobs  (This will submit/resubmit the jobs in file present_cronjobs to CRON)

37) at <time> command/script
at is a very useful command for running jobs at a later time (will run the script at the specified time)

at -l will show the at jobs scheduled
38)killing an unwanted process
$ps -ef|grep <process_name> (will show the PID of the process in the 2nd field)
$kill -9 <PID>
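The same two steps in a self-contained form (using $! instead of grepping ps, so we are certain to kill our own process):

```shell
sleep 100 &                       # stand-in for an unwanted process
pid=$!                            # $! holds the PID of the last background job
kill -9 "$pid"                    # SIGKILL cannot be caught or ignored
wait "$pid" 2>/dev/null || true   # reap it; exit status 137 = 128 + signal 9
echo "killed $pid"
```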

39)who -b
shows when the system has booted
40) uptime
will show how long the system has been up and also shows cpu load, number of users logged in etc.
41) last
will show the users' login/logout information
last <user_name>  shows a particular user's logins/logouts
last reboot  shows all the system boots

42) id
shows the current user's UID, username, GID and group name

43) hostid
shows the unique identifier of the host
44) more /etc/passwd
it will show all the logins, home directories of the users.
45) more /etc/shadow
shows password encryption info and other user related info (only root has access to this file)
46) more /etc/system
this file has all n/w, h/w, memory etc tunable parameters/values

47) more /etc/inittab
at bootup the system checks this file to determine which runlevel to enter

48) find / -name <file_name> -print
for finding any file name. ( giving '/' will find files from root directory)
49) hostname
will give your system name.
50)uname -a
will show system name, solaris version, platform and some more information
51) useradd <user_name>
will add a user (you have to be root to do this)
it has more options for specifying home directory, shell, group etc.

Similarly, userdel <user_name> deletes the user
52)df -k
will show all the mounted filesystems.
will show all mounted file systems with additional info like large filesystem support etc

54) pkginfo
gives/shows info about installed packages/software on the system
55)showrev -p
shows all patches installed on system
56)init 0
will shutdown the system
57) init 6
will reboot the system (other init options are 1, 2, 3, 5 and S)
58)cd /var/adm
this directory has system/application logs. Please check the files and their contents for more information.
59)cd /etc/rc.d
this directory has all startup scripts.
there will be more directories of this kind:
rc2.d, rc3.d, rc0.d, rc5.d, rc6.d etc...
each directory has scripts which run in its own run level.
a run level is nothing but the init option you give while starting or stopping the system.
suppose you give init 0; the system will check /etc/rc0.d for all the files to be executed.

60) /usr/sbin/ifconfig -a
will show the ip-address of the system.
lo0  : loopback interface
hme0 : hundred-Mbps network interface
qfe0 : quad ethernet interface

61)ping <hostname>
will ping and test connectivity between your system and the hostname you give in the ping.
you can also give ping <ip-address>

62) rm -r <directory>
will delete all the contents in the directory specified recursively (BE CAREFUL WHILE GIVING THIS COMMAND)

63)alias l='ls -l'
alias dir='ls -l|grep "^d"'
alias p='pwd'
alias c='clear'

Short cuts for commonly used commands

64) tar -cvf allfile.tar /<directory_name>   archives all files under the directory into allfile.tar
tar -xvf allfile.tar   extracts the archive into the current directory (cd to /home first to restore there)
tar -tvf allfile.tar   lists the contents of allfile.tar
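A round trip showing create, list and extract (using -C, which GNU and most modern tars accept for choosing the working directory):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
echo "alpha" > "$tmp/src/a.txt"
echo "beta"  > "$tmp/src/b.txt"
tar -cf "$tmp/all.tar" -C "$tmp" src      # -C keeps the stored paths relative
tar -tf "$tmp/all.tar"                    # lists src/, src/a.txt, src/b.txt
mkdir "$tmp/restore"
tar -xf "$tmp/all.tar" -C "$tmp/restore"  # extract somewhere else
cat "$tmp/restore/src/a.txt"              # -> alpha
```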

65) find . -type f -print -exec grep -i <type_your_text_here> {} \;
this is a recursive grep
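Demonstrated on a small tree (on systems with GNU grep, `grep -ri <text> .` does the same job in one command):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
echo "needle here" > "$tmp/a/b/deep.txt"
echo "nothing"     > "$tmp/a/top.txt"
cd "$tmp"
# -type f limits the search to regular files; -print shows each name
# visited; grep -i matches the text case-insensitively in each file.
find . -type f -print -exec grep -i NEEDLE {} \;
```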

66) rm -- <-filename>
for deleting file names that begin with '-' (older versions of rm also accept a single '-', as below)
# ls -l
total 16
-rw-r--r--   1 root     other         13 Dec 24 14:57 -k
# rm - -k
# ls -l
total 0

67) rm "<file name>"
delete file names with spaces in between

For checking Sun-related hardware configuration:

68) top --- shows all processes and memory, cpu etc. utilisation

69) prtconf  -- shows h/w, cpu, memory conf

70) mount   -- will show the disks mounted and all partitions

71) /usr/platform/sun4u/sbin/prtdiag -v  --- shows additional configuration of memory, cpu speed etc..

72) sysdef -- shows system h/w, memory, and other internal configurable/tunable parameters

73) ifconfig unplumb hme0   --- will disable ethernet interface hme0

74) ifconfig plumb hme0    --- will enable hme0

for performance monitoring and diagnosing bottlenecks

75) iostat  -- disk utilisation, cpu, io wait etc (iostat -xcM gives extented statistics of disk activity, cpu etc)
76) vmstat  -- memory and virtual memory utilisation

77) sar      -- system activity reporter, gives a total system report for cpu, memory, disk etc.
78) netstat   --- shows network statistics, like how many connected on which services/ports
79) mpstat  -- shows multi cpu statistics like load on each cpu.

80) psrinfo   -- gives processor/s information (online/offline)

81) nfsstat --- nfs mounted filesystems statistics

82) prstat --- shows process related statistics (present from solaris 2.7 and above)

For disk configurations you need ---

83) format -- will show all the disks configuration and partitions


84) prtvtoc -- shows disk partition/geometry info

85) uadmin 2 0
stops the system immediately, within 5 seconds (BE CAREFUL -- has to be root)

86) halt
halts the processor and stops the machine (BE CAREFUL -- has to be root)

debugging tool (for reading/debugging corefiles)

88) mkfile 60m jithendra
creates a file of size 60MB which can be used for adding to swap space

89) swap -a jithendra
attaches the 60mb file to swap space (Very useful when swap space is running out)

90)swap -l
lists the swap contents

91) sleep 5
waits for 5 seconds (useful in shell scripts)

92)cat <file_name> |awk '{print $1}'
Prints the first field of the file ($1, $2, ... can be used to display more fields)
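For example (awk can also read the file directly, without cat):

```shell
printf 'ana 100 x\nbob 200 y\n' > /tmp/fields.txt
cat /tmp/fields.txt | awk '{print $1}'   # first field:  ana, bob
awk '{print $2}' /tmp/fields.txt         # second field: 100, 200
awk '{print $1, $3}' /tmp/fields.txt     # several fields at once
```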

93) :1,$s/<old>/<new>/g

use the above for global replacement of text in ascii files using vi editor
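The same global substitution can be done non-interactively with sed (written portably here, to a new file, since in-place editing flags vary between sed versions):

```shell
printf 'old text, old habits\n' > /tmp/in.txt
# s/old/new/g = substitute every occurrence on every line
sed 's/old/new/g' /tmp/in.txt > /tmp/out.txt
cat /tmp/out.txt                 # -> new text, new habits
```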

94) :1,$s/^M//g

remove Ctrl M character in text files using vi editor
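Outside vi, the same carriage returns (the ^M, ASCII 13, left behind by DOS line endings) can be stripped with tr:

```shell
printf 'line1\r\nline2\r\n' > /tmp/dos.txt   # \r\n = DOS line endings
tr -d '\r' < /tmp/dos.txt > /tmp/unix.txt    # delete every carriage return
od -c /tmp/unix.txt | head -1                # only \n remains, no \r
```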

95) isainfo -v
shows supported platforms (32-bit, 64-bit)

96)strings <file_name>
shows printable strings in any type of file (binary, object, text etc)

97)truss -p <PID>
shows system calls and signals (useful when debugging process)

98) stty erase ^H
sets backspace for deleting typed character

99) echo $TERM
shows terminal type like vt100, vt220 etc.
($PATH, $ORACLE_HOME etc can be used with echo)

100) set -o vi
While your shell is set to ksh, use this command to enable vi-style editing of your command history
Press ESCAPE and then k to show previous commands

101) env
shows all the environment variables set in your current session

Friday, 3 May 2013

Messaging Patterns in Service-Oriented Architecture

Introduction: Process Configuration and Flexibility Trends

The need for process flexibility is not a new trend; it has been evident for the last two decades. The Internet, the Web, and mobile computing came along and enabled a global productivity boom, resulting in technology innovations that are constantly laying the foundation for renovating industrial-age processes.
Such integration solutions are built primarily on proprietary or system-based messaging platforms aimed at providing a platform for integration and communication between various business components. The typical method for accessing these systems is through a wide assortment of pre-built adapters that provide bi-directional connectivity to many types of application processes. Rather than explicitly declaring how systems will interact through low-level protocols and object-oriented architectures, Service-Oriented Architecture (SOA) makes it possible to provide an abstract interface through which processes or services can interact. The result can be imagined as an interconnected, process-based enterprise that exposes a set of loosely coupled, coarse-grained services.

What Is Service-Oriented Architecture?

SOA is the aggregation of components that satisfy a business need. It comprises components, services, and processes. Components are binaries that have a defined interface (usually only one), and a service is a grouping of components (executable programs) to get the job done. This higher level of application development provides a strategic advantage, facilitating more focus on the business requirement.
SOA isn't a new approach to software design; some of the notions behind SOA have been around for years. Jess Thompson, a research director at Gartner, argues that the underlying concepts date back to the early 1970s, when researchers started drawing boundaries around software and providing access to that software only through well-defined interfaces.
A service is generally implemented as a coarse-grained, discoverable software entity that exists as a single instance and interacts with applications and other services through a loosely coupled (often asynchronous), message-based communication model.
The most important aspect of SOA is that it separates the service's implementation from its interface. Service consumers view a service simply as a communication endpoint supporting a particular request format or contract. How the service executes the request is irrelevant; the only mandatory requirement is that the service send the response back to the consumer in the agreed format specified in the contract.

SOA Entities

SOA consists of various entities configured together to support the find, bind, and execute paradigm as shown in Figure 1.
Figure 1. SOA explained

Service Consumer

The service consumer is an application, service, or some other type of software module that requires a service. It is the entity that initiates the locating of the service in the service registry, binding to the service over a transport, and executing the service function. The service consumer executes the service by sending it a request formatted according to the contract.

Service Provider

The service provider is the network-addressable entity that accepts and executes requests from consumers. It can be a mainframe system, a component, or some other type of software system that executes the service request. The service provider publishes its contract in the service registry for access by service consumers.

Service Registry

A service registry is a network-based directory that contains available services. It is an entity that accepts and stores contracts from service providers and provides those contracts to interested service consumers.

Service Contract

A contract is a specification of the way a consumer of a service will interact with the service provider. It specifies the format of the request and the response from the service. A service contract may require a set of preconditions and postconditions, which specify the state that the service must be in to execute a particular function. The contract may also specify quality of service (QoS) levels: specifications for the nonfunctional aspects of the service.

Service Lease

The lease (the time for which the state may be maintained), which the service registry grants the service consumer, is necessary for services to maintain state information about the binding between the consumer and provider. It enforces loose coupling between the service consumer and the service provider by limiting the amount of time consumers and providers may be bound. Without a lease, a consumer could bind to a service forever and never rebind to its contract again.

Discoverability and Dynamic Binding: Messaging in SOA

SOA supports the concept of dynamic service discovery. The service consumer queries the service registry for a service, and the service registry returns a list of all service providers that support the requested service. The consumer selects the most cost-effective service provider from the list and binds to the provider using a pointer from the service registry entry.
The consumer formats a request message based on the contract specifications and binds the message to a communications channel that the service supports. The service provider executes the service and returns a message that conforms to the message definition in the service contract.
The only dependency between provider and consumer is the contract, which the third-party service registry provides. The dependency is a runtime dependency and not a compile-time dependency. All the information the consumer needs about the service is obtained and used at runtime. The service interfaces are discovered dynamically, and messages are constructed dynamically. The service consumer does not know the format of the request message or response message or the location of the service until the service is actually needed.
The ability to transform messages has the benefit of allowing applications to be much more decoupled from each other. Messaging underpins SOA; we don't have SOA without messaging.

Messaging Patterns Catalogue Within SOA Context

Messaging patterns exist at different levels of abstraction within SOA. Some patterns represent the message itself or attributes of a messaging transport system. Others represent the creation of message content or change the information content of a message. Patterns are also used to describe complex mechanisms for directing messages. SOA messaging patterns can be divided into the following categories:
  • Message type patterns. Describe different varieties of messages that can be used in SOA.
  • Message channel patterns. Describe the fundamental attributes of a messaging transport system.
  • Routing patterns. Describe mechanisms to direct messages between Service Provider and Service Consumer.
  • Service consumer patterns. Describe the behavior of messaging system clients.
  • Contract patterns. Illustrate the behavioral specification that maintains smooth communication between Service Provider and Consumer.
  • Message construction patterns. Describe the creation of the message content that travels across the messaging system.
  • Transformation patterns. Change the information content of a message within enterprise-level messaging.
These patterns are shown in Figure 2.
Figure 2. Messaging patterns catalogue within SOA context

Message Type Patterns

The message itself is simply some sort of data structure, such as a string, a byte array, a record, or an object. It can be interpreted simply as data, as the description of a command to be invoked on the receiver, or as the description of an event that occurred in the sender. A sender can send a Command Message, specifying a function or method on the receiver that the sender wishes to invoke. It can send a Document Message, enabling the sender to transmit one of its data structures to the receiver. Or it can send an Event Message, notifying the receiver of a change in the sender.
The following message type patterns can commonly be used in SOA.

Command Message

How can you invoke a procedure in another application?
Solution: Use a command message to reliably invoke a procedure in another application as shown in Figure 3.
Figure 3. Command message
A command message controls another application, or a series of other applications, by sending a specially formatted message to that system. A command message includes intelligent instructions to perform a specific action, either through headers and attributes, or as part of the message payload. The recipient performs the appropriate action when the message is received. Command messages are closely related to the Command pattern [9].
A command message is simply a regular message that happens to contain a command. A Simple Object Access Protocol (SOAP) request is a command message.
Command messages are usually sent on a point-to-point channel so that each command will only be consumed and invoked once.

Document Message

How can you transfer data between services?
Use a document message to reliably transfer a data structure between applications. See Figure 4.
Figure 4. Document message
A document message is just a single unit of information: a single object or data structure that may decompose into smaller units. The important part of a document message is its content, the document. This content is retrieved by unmarshalling or deserializing the data.
Document messages are usually sent using a point-to-point channel. In request-reply scenarios, the reply is usually a document message where the result value is the document.
A document message can be any kind of message in the messaging system. A Simple Object Access Protocol (SOAP) reply message is a document message.

Event Message

Several applications would like to use event notification to coordinate their actions, and would like to use messaging to communicate those events. How can messaging be used to transmit events from one service to another?
Use an event message for reliable, asynchronous event notification between applications. See Figure 5.
Figure 5. Event message
An event message extends the Observer model to a set of distributed applications. Event messages can be sent from one service to another to provide notification of lifecycle events within a service-oriented enterprise, or to announce the status of particular activities. Applications for this pattern include enterprise monitoring and centralized logging.
An important characteristic of event messages is that they do not require a reply.
An event message can be any kind of message in the messaging system. An event can be an object or data such as an XML document.
"If a message says that the Stock price for certain symbol has changed, that's an event. If the message provided information about the symbol, including its new price, that's a document."

Request-Reply Message

Messages travel into a message channel in one direction, from the sender to the receiver. This asynchronous transmission makes the delivery more reliable and decouples the sender from the receiver. The problem is that communication between components often needs to be two-way. When one component notifies another of a change, it may want to receive an acknowledgement.
How can messaging be two-way?
Send a pair of request-reply messages, each on its own channel. See Figure 6.
Figure 6. Request reply message
Request-Reply has two participants:
  • Requester (Service Consumer). Sends a request message and waits for a reply message.
  • Replier (Service Provider). Receives the request message and responds with a reply message.
The request channel can be a point-to-point channel or a publish-subscribe channel. The difference is whether the request should be broadcast to all interested parties or should only be processed by a single consumer. The reply channel, on the other hand, is almost always point-to-point, because it usually makes no sense to broadcast replies.
The request is like a method call. As such, the reply is one of three possibilities:
  • Void
  • Result value
  • Exception
The request should contain a return address to tell the replier where to send the reply. The reply should contain a correlation identifier that specifies which request this reply is for.
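The return-address and correlation-identifier mechanics can be sketched in a few lines of Python, using `queue.Queue` objects as stand-ins for real message channels (the field names `reply_to` and `correlation_id` are illustrative, not part of any specific messaging API):

```python
import queue

request_channel = queue.Queue()   # point-to-point request channel
reply_channel = queue.Queue()     # the channel named in the return address

def replier():
    # Service provider: read one request, send the reply to the return
    # address, echoing the correlation identifier back to the requester.
    msg = request_channel.get()
    msg["reply_to"].put({"correlation_id": msg["correlation_id"],
                         "body": msg["body"].upper()})

# Requester sends a request carrying a return address and a correlation id.
request_channel.put({"correlation_id": 42, "reply_to": reply_channel,
                     "body": "ping"})
replier()
reply = reply_channel.get()       # reply identifies which request it answers
```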

Messaging Channel Patterns

Channels, also known as queues, are logical pathways to transport messages. A channel behaves like a collection or array of messages, but one that is magically shared across multiple computers and can be used concurrently by multiple applications.
A service provider is a program that sends a message by writing the message to a channel. A consumer receives a message from a channel. There are different kinds of messaging channels available.

Point-to-Point Channel

The sender dispatches a message to a messaging system, which is responsible for relaying the message to a particular recipient. The messaging system might proactively deliver the message (by contacting the recipient directly), or hold the message until the recipient connects to retrieve it.
How can you ensure that exactly one consumer will receive the message?
Send the message on a point-to-point channel, which ensures that only one receiver will receive a particular message. See Figure 7.
Figure 7. Point-to-point channel
A point-to-point channel ensures that only one consumer consumes any given message. If the channel has multiple receivers, only one of them can successfully consume a particular message. If multiple receivers try to consume a single message, the channel ensures that only one of them succeeds, so the receivers do not have to coordinate with each other. The channel can still have multiple consumers to consume multiple messages concurrently, but only a single receiver consumes any one message.
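A minimal sketch of this guarantee, using Python's thread-safe `queue.Queue` as the point-to-point channel: two competing consumers drain the channel concurrently, and every message is consumed exactly once.

```python
import queue
import threading

channel = queue.Queue()
consumed = {"a": [], "b": []}   # what each competing consumer received

def consumer(name):
    # Keep taking messages until the channel is empty; the channel itself
    # guarantees each message goes to only one of the consumers.
    while True:
        try:
            msg = channel.get_nowait()
        except queue.Empty:
            return
        consumed[name].append(msg)

for i in range(100):
    channel.put(i)

threads = [threading.Thread(target=consumer, args=(n,)) for n in consumed]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Neither consumer had to coordinate with the other, yet no message is lost or duplicated.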

Publish-Subscribe Channel

The service provider broadcasts an event once, to all interested consumers.
Send the event on a publish-subscribe channel, which delivers a copy of a particular event to each receiver. See Figure 8.
Figure 8. Publish-subscribe channel
A publish-subscribe channel is based on the Observer pattern [9]: it has a single input channel that splits into multiple output channels, one for each subscriber. When an event is published into the publish-subscribe channel, a copy of the message is delivered to each of the output channels. Each output channel has a one-to-one topology, allowing only one consumer to consume each copy. The event is considered consumed only when all of the subscribers have been notified.
A publish-subscribe channel can be useful for systems management, error debugging, and different levels of testing.
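The splitting of one input channel into one output channel per subscriber can be sketched as follows (a toy in-memory model, not a real messaging product):

```python
import queue

class PublishSubscribeChannel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()            # one point-to-point output channel
        self.subscribers.append(q)   # per subscriber
        return q

    def publish(self, event):
        for q in self.subscribers:   # deliver a copy to every output channel
            q.put(event)

channel = PublishSubscribeChannel()
sub1, sub2 = channel.subscribe(), channel.subscribe()
channel.publish({"symbol": "ORCL", "event": "price_changed"})
```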

Datatype Channel

The receiver must know what type of messages it is receiving, or it won't know how to process them. For example, a sender might send different objects such as purchase orders, price quotes, and queries, but a receiver will probably take different steps to process each of these, so it has to know which is which.
Use a separate datatype channel for each data type, so that all data on a particular channel is of the same type. See Figure 9.
Figure 9. Datatype channel
In any messaging system there is a separate datatype channel for each type of data, and all of the messages on a given channel contain the same type of data. The service provider selects the channel based on the data type, and the consumer receives data from the appropriate datatype channel.
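A datatype channel amounts to a simple convention both ends agree on; a hypothetical sketch using one in-memory queue per message type:

```python
import queue

# One channel per message type; provider and consumer agree on the mapping.
channels = {"purchase_order": queue.Queue(),
            "price_quote": queue.Queue(),
            "query": queue.Queue()}

def send(msg_type, payload):
    channels[msg_type].put(payload)    # provider picks the channel by type

send("price_quote", {"symbol": "ORCL", "price": 13.37})

# The consumer knows every message on this channel is a price quote,
# so no type inspection is needed before processing.
quote = channels["price_quote"].get()
```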

Dead Letter Channel

Message delivery can fail for a number of reasons: a message channel configuration problem, a problem with the consumers, or message expiration.
When there is any delivery issue with the message, it can be moved to a different messaging channel called a dead letter channel. See Figure 10.
Figure 10. Dead letter channel
A dead letter channel is a separate channel dedicated to bad or invalid messages. From this channel, messages can be rerouted to the mainstream channel, or to yet another channel for special processing.
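A toy sketch of rerouting undeliverable messages (the validation rule is invented for illustration):

```python
import queue

inbound = queue.Queue()
dead_letter = queue.Queue()

def deliver(msg):
    try:
        if "order_id" not in msg:      # simulate an invalid message
            raise ValueError("missing order_id")
        return msg["order_id"]         # normal processing
    except ValueError:
        dead_letter.put(msg)           # reroute for special processing

inbound.put({"order_id": 7})
inbound.put({"oops": True})
results = [deliver(inbound.get()) for _ in range(2)]
```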

Guaranteed Delivery

One of the main advantages of asynchronous messaging over RPC is that the participants don't need to be online at the same time. While the network is unavailable, the messaging system uses a store-and-forward mechanism to hold messages. By default, messages are stored in memory until they can be successfully forwarded to the next node. This works well while the messaging system is running reliably, but if the messaging system crashes, all of the stored messages are lost. As a preventative measure, applications use persistent media such as files and databases to ensure recovery from system crashes.
Use a guaranteed delivery mechanism to make messages persistent. See Figure 11.
Figure 11. Guaranteed delivery
With guaranteed delivery, the messaging system uses a built-in data store (local storage disk space in a participant computer) to persist messages in each participant computer on which the messaging system is installed. The message is safely stored until it is successfully delivered. In this way, it ensures guaranteed delivery.
This guaranteed delivery mechanism increases system reliability, but at the expense of performance: it involves a considerable amount of I/O and consumes a large amount of disk space. Therefore, if performance is the priority, or during debugging and testing, consider avoiding guaranteed delivery.
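The store-before-forward idea can be sketched with plain files as the persistent store (a simplification of what real messaging systems do):

```python
import json
import os
import tempfile

class PersistentChannel:
    """Writes each message to local disk before the send completes, and
    deletes it only after the consumer has successfully received it."""
    def __init__(self, store_dir):
        self.store_dir = store_dir
        self.counter = 0

    def send(self, msg):
        path = os.path.join(self.store_dir, f"{self.counter}.json")
        with open(path, "w") as f:
            json.dump(msg, f)        # message survives a crash from here on
        self.counter += 1

    def receive(self):
        name = sorted(os.listdir(self.store_dir))[0]
        path = os.path.join(self.store_dir, name)
        with open(path) as f:
            msg = json.load(f)
        os.remove(path)              # delete only after successful delivery
        return msg

store = tempfile.mkdtemp()
ch = PersistentChannel(store)
ch.send({"id": 1})
msg = ch.receive()
```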

Message Bus

An enterprise consists of various independent applications communicating with each other in a unified manner. We need an integration/service architecture that enables those applications to coordinate in a loosely coupled fashion.
Structure the connecting middleware between these applications as a message bus that enables them to work together using messaging as shown in Figure 12.
Figure 12. Message bus
A message bus is a combination of a common data model, a common command set, and a messaging infrastructure to allow different heterogeneous systems to communicate through a shared set of interfaces.
A message bus can be considered as a universal connector between the various enterprise systems, and as a universal interface for client applications that wish to communicate with each other. A message bus requires that all of the applications should use the same canonical data model. Applications adding messages to the bus may need to depend on message routers to route the messages to the appropriate final destinations.

Message Routing Patterns

Almost all messaging systems provide a built-in router as well as customized routing. Message routers are important building blocks for any good integration architecture. In contrast to the individual message routing design patterns below, this pattern family describes a hub-and-spoke architectural style with routing logic embedded at a few special points.

In Search of the Right Router

An important decision for an architect is to choose the appropriate routing mechanism. Patterns that will help you make the right decision are:
  • Pipes and filters
  • Content-based router
  • Content aggregator

Pipes and Filter

How can you divide a larger processing task into a sequence of smaller, independent processing steps?
Use the pipes and filters pattern to divide a larger processing task into a sequence of smaller, independent processing steps (filters) that are connected by channels (pipes). See Figure 13.
Figure 13. Pipes and filter
Each filter exposes a very simple interface: it receives messages on the inbound pipe, processes the message, performs business transformations, and publishes the results to the outbound pipe. The pipe connects one filter to the next, sending output messages from one filter on to the next. This is similar to executing a method call by passing parameters and receiving a return value, and it follows the Chain of Responsibility pattern [11]. Because all components use the same external interface, they can be composed into different solutions by connecting the components to different pipes. The connection between a filter and a pipe is sometimes called a port. In the basic form, each filter component has one input port and one output port.
The pipes and filters pattern uses abstract pipes to decouple components from each other. The pipe allows one component to send a message into the pipe so that it can be consumed later by another process that is unknown to the component. One of the potential downsides of pipes and filters architecture is the larger number of required channels that consume memory and CPU cycles. Also, publishing a message to a channel involves a certain amount of overhead because the data has to be translated from the application-internal format into the messaging infrastructure's own format.
Using pipes and filters also improves unit testing at the module level and can help in preparing a testing framework. It is more efficient to test and debug each core function in isolation, because we can tailor the test mechanism to the specific function.
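A minimal sketch of composing filters through pipes; here the "pipes" are simple function hand-offs, and the three filters (decrypt, authenticate, strip header) are invented stand-ins for real processing steps:

```python
def make_pipeline(*filters):
    # Each filter is an independent step with the same simple interface;
    # the "pipes" are the hand-offs between them. Because the interface is
    # uniform, the same filters can be recombined into other pipelines.
    def pipeline(message):
        for f in filters:
            message = f(message)
        return message
    return pipeline

decrypt = lambda m: m.replace("***", "")                    # toy "decryption"
authenticate = lambda m: m if m.startswith("user:") else None
strip_header = lambda m: m.split(":", 1)[1] if m else m

process = make_pipeline(decrypt, authenticate, strip_header)
result = process("user:***hello")
```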

Content-Based Router

A message often needs to reach one of several consumers, and the routing can be based on a number of criteria such as the existence of fields, specific field values, and so on.
Use a content-based router to route each message to the correct consumer based on the message content. See Figure 14.
Figure 14. Content-based router
The content-based router examines the message content and routes the message onto a different channel based on the message's data content. When implementing a content-based router, special care should be taken to keep the routing logic easily maintainable. In more sophisticated integration scenarios, the content-based router can be implemented as a configurable rules engine that computes the destination channel based on a set of configurable rules.
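A hypothetical content-based router in a few lines, routing orders by an invented item-code convention:

```python
import queue

widget_channel = queue.Queue()
gadget_channel = queue.Queue()
invalid_channel = queue.Queue()

def route(order):
    # Examine the message content and pick the destination channel.
    item = order.get("item", "")
    if item.startswith("W"):
        widget_channel.put(order)
    elif item.startswith("G"):
        gadget_channel.put(order)
    else:
        invalid_channel.put(order)   # could also be a dead letter channel

route({"item": "W123"})
route({"item": "G456"})
```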

Content Aggregator

The messaging system exchanges messages between a variety of sources. The messages have similar content but different formats, which can complicate the processing of combined messages. Processing is easier if we assign different responsibilities to different components [11]. For example, we may want to collect all of the transactions of a particular customer, from different business zones, for a particular quarter. This technique is called event linking and sequencing.
Use a stateful content aggregator to collect and store the individual messages, and to combine the related messages into a single aggregated message that it publishes. See Figure 15.
Figure 15. Content aggregator
A content aggregator is a special filter that receives a stream of messages and correlates related messages. When a complete set of messages has been received, the aggregator collects the information from each correlated message and publishes a single, aggregated message to the output channel for further processing. The aggregator therefore has to be stateful: it needs to keep the messages, along with their processing state, until the aggregate is complete.
When designing an aggregator, we need to specify the following items:
  • Correlation ID. An identifier that indicates which messages belong together
  • End condition. The condition that determines when to stop processing
  • Aggregation algorithm. The algorithm used to combine the received messages into a single output message
Every time the content aggregator receives a new message, it checks whether the message is part of an already existing aggregate or starts a new one. After adding the message, the content aggregator evaluates the end condition for the aggregate. If the condition evaluates to true, a new aggregated message is formed from the aggregate and published to the output channel. If the end condition evaluates to false, no message is published and the content aggregator continues processing.
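The three design items above (correlation ID, end condition, aggregation algorithm) can be sketched as follows; the fixed expected-count end condition and the summing algorithm are illustrative choices:

```python
class Aggregator:
    """Collects related messages by correlation id and publishes one
    combined message when the end condition (expected count) is met."""
    def __init__(self, expected):
        self.expected = expected
        self.pending = {}            # correlation id -> messages so far
        self.published = []          # stand-in for the output channel

    def receive(self, msg):
        batch = self.pending.setdefault(msg["correlation_id"], [])
        batch.append(msg["amount"])
        if len(batch) == self.expected:              # end condition
            total = sum(self.pending.pop(msg["correlation_id"]))
            self.published.append({"correlation_id": msg["correlation_id"],
                                   "total": total})  # aggregation algorithm

agg = Aggregator(expected=3)
for amount in (10, 20, 30):
    agg.receive({"correlation_id": "C1", "amount": amount})
```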

Service Consumer Patterns

There are several possible types of Service Consumer. In this pattern catalogue we will present a set of consumer patterns.

Transactional Client

Transactions are an important part of any messaging system, and a single transaction is sufficient for a participant to send or receive a single message. However, a few specific scenarios need a broader transactional approach, which in turn may need special transactional coordination. These cases include (but are not limited to):
  • Send-receive message pairs. Receive one message and send another.
  • Batch message. Send or receive a group of related messages in a batch mode.
  • Message/database coordination. Send or receive a message combined with database update. For example, when an application receives and processes a message for ordering a product, the application will also need to update the product inventory database.
Scenarios like these require transactional boundaries to be specified differently, with much more complexity: they involve more than just a single message and may involve other transactional stores besides the messaging system.
How can you solve these kinds of transactional problems?
Use a transactional client—make the client's session with the messaging system transactional and ensure that the client can specify complex transaction boundaries. See Figure 16.
Figure 16. Transactional client sequence diagram
Both participants can be transactional. From the sender's point of view, the message isn't considered added to the channel until the sender commits the transaction. Likewise, the message isn't removed from the channel until the receiver commits its transaction.
With a transactional receiver, messages can be received without actual removal of the message from the channel. The advantage of this approach is that if the application crashed at this point, the message would still be on the queue after message recovery has been performed; the message would not be lost. After the message processing is finished, and on successful transaction commit, the message is removed from the channel.
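A toy model of receive-under-transaction semantics: rollback returns the message to the channel, so a crash or processing failure never loses it (a real transactional client involves the provider's transaction manager, which this sketch omits):

```python
import queue

class TransactionalChannel:
    """Receive under a transaction: the message is only removed from the
    channel when the receiver commits; a rollback puts it back."""
    def __init__(self):
        self.channel = queue.Queue()
        self.in_flight = None

    def receive(self):
        self.in_flight = self.channel.get()
        return self.in_flight

    def commit(self):
        self.in_flight = None              # now the message is really gone

    def rollback(self):
        self.channel.put(self.in_flight)   # message survives the failure
        self.in_flight = None

ch = TransactionalChannel()
ch.channel.put("order-1")
msg = ch.receive()
ch.rollback()                              # simulated processing failure
msg_again = ch.receive()                   # same message, redelivered
ch.commit()
```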

Polling Consumer

In any messaging system, the consumer needs some indication that a message is available so that it can consume it. One approach is for the consumer to repeatedly check the channel for message availability; if a message is available, the consumer consumes it. This continuous checking is known as polling.
The application should use a polling consumer, one that explicitly makes a call when it wants to receive a message. See Figure 17.
Figure 17. Polling consumer
A polling consumer is a message receiver. A polling consumer restricts the number of concurrent messages to be consumed by limiting the number of polling threads. In this way, it prevents the application from being blocked by having to process too many requests, and keeps any extra messages queued up until the receiver can process them.
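A minimal polling loop against an in-memory channel; the timeout bounds how long each explicit poll blocks:

```python
import queue

channel = queue.Queue()
for i in range(3):
    channel.put(i)

received = []
while True:
    try:
        # Explicitly ask the channel for a message; the short timeout keeps
        # the loop from blocking forever once the channel is empty.
        received.append(channel.get(timeout=0.01))
    except queue.Empty:
        break   # nothing left to poll
```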

Event-Driven Consumer

The problem with polling consumers is that polling is a continuous process: it ties up dedicated threads and consumes processor time while checking for messages.
Instead of making the consumer poll for the message, a better idea is to use event-driven message notifications to indicate message availability. See Figure 18 and Figure 19. The application should use an event-driven consumer, which automatically consumes messages as they become available.
Figure 18. Event-driven consumer
Figure 19. Event-driven consumer sequence diagram
An event-driven consumer is invoked by the messaging system when a message arrives on the consumer's channel. The consumer uses an application-specific callback mechanism to pass the message to the application.
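The callback registration can be sketched like this (a stand-in for the delivery machinery a real messaging system provides):

```python
class EventDrivenChannel:
    """The messaging system invokes the registered callback as soon as a
    message arrives, instead of the consumer polling for it."""
    def __init__(self):
        self.callback = None

    def register(self, callback):
        self.callback = callback

    def deliver(self, msg):
        if self.callback:
            self.callback(msg)     # application-specific callback

handled = []
ch = EventDrivenChannel()
ch.register(handled.append)        # the application registers its handler
ch.deliver("order created")        # arrival triggers the callback
```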

Durable Subscriber

In some cases you might require guaranteed message delivery even when a message consumer is not connected to the publish-subscribe channel, or has crashed before receiving a message. In these cases the messaging system needs to ensure message delivery when the consumer reconnects.
Use a durable subscriber. See Figure 20 and 21.
Figure 20. Durable subscriber
Figure 21. Durable subscriber sequence diagram
A durable subscription saves messages for an off-line subscriber and ensures message delivery when the subscriber reconnects. Thus it prevents published messages from getting lost and ensures guaranteed delivery. A durable subscription has no effect on the normal behavior of the online/active subscription mechanism.
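A toy model of a durable subscription: the topic saves messages for each registered durable subscriber while it is offline and hands them over on reconnect:

```python
class DurableTopic:
    """Saves messages for offline durable subscribers and delivers them
    when the subscriber reconnects."""
    def __init__(self):
        self.durable = {}              # subscriber name -> saved messages

    def subscribe(self, name):
        self.durable.setdefault(name, [])

    def publish(self, msg):
        for saved in self.durable.values():
            saved.append(msg)          # saved even while the subscriber is offline

    def reconnect(self, name):
        saved, self.durable[name] = self.durable[name], []
        return saved                   # deliver everything missed

topic = DurableTopic()
topic.subscribe("auditor")
topic.publish("price changed")         # the auditor is offline at this point
missed = topic.reconnect("auditor")
```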

Idempotent Receiver

For certain scenarios, instead of using the durable subscription mechanism, some reliable messaging implementations produce duplicate messages to ensure guaranteed, at-least-once delivery. In these cases, message delivery can generally only be guaranteed by resending the message until an acknowledgment is returned from the recipient. However, if the acknowledgment is lost due to an unreliable connection, the sender may resend a message that the receiver has already received.
We need to ensure that the messaging system is able to safely handle any messages that are received multiple times.
Design a receiver to be an idempotent receiver. See Figure 22.
Figure 22. Duplicate message problem
The term idempotent originates in mathematics, where it describes a function that produces the same result when applied to its own output, i.e. f(x) = f(f(x)). In a messaging environment, the concept means that a message can safely be resent: receiving the same message multiple times has the same effect as receiving it once.
In order to detect and eliminate duplicate messages based on the message identifier, the message consumer has to maintain a buffer of already-received message identifiers. One of the key design issues is deciding how long to keep these identifiers. In the simplest case, the service provider sends one message at a time, awaiting the receiver's acknowledgment after every message; the consumer then only needs to compare each incoming identifier against the last one received. In practice, this style of communication is very inefficient, especially when significant throughput is required. In these situations, the sender might want to send a whole batch of messages without awaiting acknowledgment for each one. This necessitates keeping a longer history of identifiers for already received messages, and the size of the subscriber's buffer grows with the number of messages the sender can send without an acknowledgment.
An alternative approach to achieving idempotency is to define the semantics of a message such that resending it does not impact the system. For example, rather than phrasing a message as a relative change, such as 'Add a 0.3% commission for employee code A1000, who has a base salary of $10,000', we could change the message to 'Set the commission amount to $300.00 for employee code A1000, who has a base salary of $10,000'. Both messages achieve the same result, but the second message is idempotent because receiving it twice has no additional effect, even if the current commission is already $300. So whenever possible, send absolute values as messages and avoid relative instructions. In this way we can efficiently achieve idempotency.
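Duplicate detection via an identifier buffer can be sketched as follows (an unbounded `set` here; a real receiver would expire old identifiers as discussed above):

```python
class IdempotentReceiver:
    """Discards duplicates by remembering already-seen message ids."""
    def __init__(self):
        self.seen = set()        # buffer of received message identifiers
        self.processed = []

    def receive(self, msg):
        if msg["id"] in self.seen:
            return               # duplicate resend: safely ignore it
        self.seen.add(msg["id"])
        self.processed.append(msg["body"])

rx = IdempotentReceiver()
for m in ({"id": 1, "body": "set commission to $300"},
          {"id": 1, "body": "set commission to $300"},   # resent duplicate
          {"id": 2, "body": "set base salary to $10000"}):
    rx.receive(m)
```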

Service Factory

When designing a service consumer for multiple styles of communication, it might seem necessary to define a separate service for each style; this concept can be linked to the Factory design pattern [9]. In SOA, the challenge is to invoke the right service based on the style of communication.
Design a service factory that connects the messages on the channel to the service being accessed. See Figure 23.
Figure 23. Service factory
A service factory may return a simple method call or even a complex remote process invocation. The service factory invokes the service just like any other method invocation and can optionally create a customized reply message.
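A hypothetical service factory that maps the service named in a message to an invocation (here a local call; the same dispatch point could return a remote invocation instead):

```python
def local_echo(payload):
    return payload.upper()     # stand-in for a real service implementation

class ServiceFactory:
    """Maps the service name carried in a message to the right invocation;
    the registry could equally hold remote process invocations."""
    def __init__(self):
        self.services = {"echo": local_echo}

    def dispatch(self, msg):
        service = self.services[msg["service"]]
        # Invoke the service like any other method call and optionally
        # build a customized reply message.
        return {"correlation_id": msg.get("correlation_id"),
                "result": service(msg["payload"])}

factory = ServiceFactory()
reply = factory.dispatch({"service": "echo", "payload": "hi",
                          "correlation_id": 7})
```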

Message Facade Pattern

Depending on business requirements, you might need to encapsulate business logic flow and complexity behind a standard facade.
A message facade can be used asynchronously and maintained independently; it acts as an intermediary between the service consumer and the service provider. See Figure 24.
Figure 24. Message facade
The client creates a command message and sends it to the message facade through messaging channel. The facade receives the message (using a polling consumer or an event-driven consumer) and uses the information it contains to access business tier code to fulfill a use case. Optionally, a return message is sent to the client confirming successful completion of the use case and returning data.


So far we have understood how messaging patterns exist at different levels of abstraction in SOA. In this paper, which is the first of a two-part series on messaging patterns in Service-Oriented Architecture, Message Type patterns were used to describe different varieties of messages in SOA, Message Channel patterns explained messaging transport systems, Routing patterns explained mechanisms to route messages between the Service Provider and Service Consumer, and finally Service Consumer patterns illustrated the behavior of messaging system clients. In the next issue of JOURNAL, the final part of this paper will cover Contract patterns that illustrate the behavioral specifications required to maintain smooth communications between Service Provider and Service Consumer and Message Construction patterns that describe creation of message content that travels across the messaging system.

Copyright Declaration

G Hohpe & B Woolf, Enterprise Integration Patterns, (adapted material from pages 59-83), (c) 2004 Pearson Education, Inc. Reproduced by permission of Pearson Education, Inc. Publishing as Pearson Addison Wesley. All rights reserved.


References

  1. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Gregor Hohpe and Bobby Woolf, Addison-Wesley, 2004
  2. Service-Oriented Architecture: A Primer, Michael S. Pallos, EAI Journal, December 2001
  3. Solving Information Integration Challenges in a Service-Oriented Enterprise, ZapThink Whitepaper
  4. SOA and EAI, De Gamma Website
  5. Introduction to Service-Oriented Programming, Guy Bieber and Jeff Carpenter, Project Openwings, Motorola ISD, 2002
  6. Java Web Services Architecture, James McGovern, Sameer Tyagi, Michael Stevens, and Sunil Mathew, Morgan Kaufmann Press, 2003
  7. Using Service-Oriented Architecture and Component-Based Development to Build Web Service Applications, Alan Brown, Simon Johnston, and Kevin Kelly, IBM, June 2003
  8. The Modular Structure of Complex Systems, D. Parnas and P. Clements, IEEE Journal, 1984
  9. Design Patterns: Elements of Reusable Object-Oriented Software, E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Addison-Wesley, 1994
  10. Computerworld Year-End Special: 2004 Unplugged, Vol. 10, Issue No. 10, 15 December 2003—6 January 2004
  11. Applying UML and Patterns: An Introduction to OOA/D and the Unified Process, Craig Larman, 2001

BPEL Persistence Properties – 11g

BPEL persistence properties are used to control when a process needs to dehydrate. Below are the properties we can use to control this for a BPEL component in a composite.
inMemoryOptimization: This property indicates to Oracle BPEL Server that this process is a transient process and that dehydration of the instance is not required. When set to true, Oracle BPEL Server keeps the instances of this process in memory only during the course of execution. This property can only be set to true for transient processes (processes that do not incur any intermediate dehydration points during execution).
  • false (default): instances are persisted completely and recorded in the dehydration store database for a synchronous BPEL process.
  • true: Oracle BPEL Process Manager keeps instances in memory only.
completionPersistPolicy: This property controls if and when to persist instances. If an instance is not saved, it does not appear in Oracle BPEL Console. This property is applicable to transient BPEL processes (processes that do not incur any intermediate dehydration points during execution).
This property is only used when inMemoryOptimization is set to true.
This parameter strongly impacts the amount of data stored in the database (in particular, the cube_instance, cube_scope, and work_item tables). It can also impact throughput.
  • on (default): The completed instance is saved normally.
  • deferred: The completed instance is saved, but by a different thread and in another transaction. If the server fails, some instances may not be saved.
  • faulted: Only the faulted instances are saved.
  • off: No instances of this process are saved.
<component name="mybpelproc">
  <property name="bpel.config.completionPersistPolicy">faulted</property>
  <property name="bpel.config.inMemoryOptimization">true</property>
</component>

oneWayDeliveryPolicy: This property controls database persistence of messages entering Oracle BPEL Server. It's used when we need a sync-type call based on a one-way operation, mainly when we need to make an adapter synchronous to the BPEL process.
By default, incoming requests are saved in the delivery service database table dlv_message. The possible values are:
  • async.persist: Messages are persisted in the database.
  • async.cache: Messages are stored in memory.
  • sync: Direct invocation occurs on the same thread.
<component name="UnitOfOrderConsumerBPELProcess">
  <property name="bpel.config.transaction">required</property>
  <property name="bpel.config.oneWayDeliveryPolicy">sync</property>
</component>

General Recommendations:
1. If your synchronous process exceeds, say, 1000 instances per hour, it is better to set inMemoryOptimization to true and completionPersistPolicy to faulted. This gives better throughput, since only faulted instances are dehydrated to the database, and it also makes purging historical instance data from the database easier.
2. Do not include any activities that force persistence in your process, such as a dehydrate activity, a mid-process receive, a wait, or an onMessage.
3. Add good logging to your BPEL process so that you can see log messages in the diagnostic log files for troubleshooting.

SOA Suite 11g - Transaction(s) & boundaries

Usually a one-way invocation (with a possible callback) is exposed in a WSDL as below:

    <wsdl:operation name="process">
        <wsdl:input message="client:OrderProcessorRequestMessage"/>
    </wsdl:operation>

This causes the BPEL engine to split the execution into two parts: first, always inside the caller's transaction, the insert into the dlv_message table (in 10.1.3.x, the inv_message table); and second, the transaction and new thread that execute the work items and create a new instance.
This has several advantages in terms of scalability, because the engine's thread pool (invoker threads) will execute the work when a thread is available. The disadvantage, however, is that there is no guarantee that it will execute immediately.
If one needs a sync-type call based on a one-way operation, one can use oneWayDeliveryPolicy, which is a forward port of deliveryPersistPolicy from 10.1.3.
This property can be set by specifying bpel.config.oneWayDeliveryPolicy in a BPEL component of composite.xml. Possible values are "async.persist", "async.cache" and "sync". If this value is not set in composite.xml, the engine uses the oneWayDeliveryPolicy setting from bpel-config.xml:
async.persist => persist in the db
async.cache => store in an in-memory hashmap
sync => direct invocation on the same thread
Below is the matrix based on the use case described in my last post.
onewayDeliveryPolicy != sync (default; the callee runs in a separate thread and transaction)
  • Throw any fault: the caller doesn't get a response because the message is saved in the delivery service. The callee's transaction rolls back if the fault is not handled.
  • Throw bpelx:rollback: the caller doesn't get a response because the message is saved in the delivery service. The transaction rolls back on the unhandled fault.
onewayDeliveryPolicy = sync, txn = requiresNew (the callee runs in the same thread, but a different transaction)
  • Throw any fault: the caller gets a FabricInvocationException. The callee's transaction rolls back if the fault is not handled.
  • Throw bpelx:rollback: the caller gets a FabricInvocationException. The callee's transaction rolls back.
onewayDeliveryPolicy = sync, txn = required (the callee runs in the same thread and the same transaction)
  • Throw any fault: the callee is faulted. The caller gets a FabricInvocationException and has a chance to handle the fault.
  • Throw bpelx:rollback: the whole transaction rolls back.

How to Enrich Existing XML Using XSL in SOA 11g

Often you have a requirement to enrich existing XML objects in a BPEL process. I have seen developers use the Assign activity for that. Assign is appropriate if you are only adding a few mappings. It becomes difficult to use Assign if:
  • You need to map optional elements. Assign will fail at runtime if the element is not present in the payload; to prevent a selection failure you need to add a Switch activity to check for the element's existence.
  • You need to add/append/update a collection of XML elements. This requires a While activity to iterate over the collection.
  • You need to add mappings for multiple elements.
If you try to do this using Assign, your BPEL process will be flooded with Switch, Assign, and While activities. The process becomes unmanageable, and this increases development time as well. Generally, people believe the following about XSL transformations:
  • XSL transformations are very complex to write.
  • They may lose some elements if they use XSL.
So they go with the Assign activity and bear all the pains mentioned earlier. In SOA 11g you can pass multiple source XML objects to an XSL map; see this post to learn how. Enriching existing XML using XSL maps is easy in SOA 11g; you just need to follow the steps given below:

1. Add a Transform activity and select the same XML variable as both source and target.

2. Add additional variables as sources. These variables will be passed as parameters to the XSL transformation. See the image given below:

3. Click Apply and the XSLT map will open.

4. Map at least one element from each of the additional variables to the target variable. When you add mappings for additional variables, the XSL designer defines parameters for them in the XSL. If you want to define the parameters manually, you can skip this step.

5. Go to the XSL source view and remove the default template and mappings added by the map designer. You can keep the XPath expressions in a separate text editor window for reference.
6. Copy the following identity template into the XSL file:
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
</xsl:template>

This template copies all of the elements from the source to the target XML. As you are enriching existing XML, it makes sure that you don't lose any XML element.

7. Now it's time for customization. You need to add one template per XML element you want to customize. See the following sample template:

<xsl:template match="sample:Line" xmlns:sample="">
  <sample:Line>
    <xsl:apply-templates select="@* | *"/>
    <!-- Add your custom mappings here. You can also add new elements here. -->
  </sample:Line>
</xsl:template>

Please note that if you add a template for an XML element, it will be applied to that element and its children. You can add new elements and change element values in the XSL templates. Make changes to the above sample template as per your requirement, and add the proper namespace prefixes and declarations.

8. Once done with the XSL, test it in JDeveloper to make sure it works correctly.

9. Deploy your composite on SOA 11g server and test it.