I was working on an integration framework for a mobile application using Apache Camel and Spring. The application had to connect to a legacy back-end RDBMS system, so to make life easier for my development team I opted to leverage Spring Data JPA for the RDBMS (with Hibernate as the JPA provider) and Spring Data MongoDB for connecting to MongoDB.
The problem was managing the dependencies for Apache Camel, Spring, Spring Data JPA, Spring Data MongoDB and Hibernate - how to get all this stuff working together?!
I managed to figure it out, starting with the Apache Camel BOM and building things up from there - you can see the Maven dependencies below;
<dependencyManagement>
  <dependencies>
    <!-- Camel BOM -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-parent</artifactId>
      <version>2.20.2</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring-javaconfig</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
  </dependency>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>4.3.11.Final</version>
  </dependency>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>4.3.11.Final</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jpa</artifactId>
    <version>1.11.10.RELEASE</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>1.10.10.RELEASE</version>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-test</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-test-spring</artifactId>
  </dependency>
  <!-- logging -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
  </dependency>
  <!-- MongoDB driver -->
  <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.6.3</version>
  </dependency>
</dependencies>
I have omitted the Oracle RDBMS dependency, but that shouldn't cause any problems if you're trying to get this stuff working together!
Bicster's Blog
Sunday, 1 April 2018
Wednesday, 28 March 2018
IBM Curam Batch Framework 101
The IBM Cúram Batch Framework enables batch processing functionality to be written and executed from within IBM Cúram.
Batch processing functionality is implemented in individual Batch Processes. A Batch Process represents a single job such as DetermineProductDeliveryEligibility or GenerateInstructionLineItems.
There are two types of Batch Process – Single Threaded and Streamed.
- Single Threaded Batch Processes typically process simple tasks or small workloads. All processing is done in a single transaction by a single process and a failure typically causes the whole batch to fail and the transaction to roll back.
- Streamed Batch Processes use the Chunking and Streaming features of the IBM Cúram Batch framework to enable parallel processing. Streamed Batch Processes have two components;
- Chunker: The Chunker has business logic to determine what work needs to be performed and divides said work up into Chunks. Once all Chunks are processed the Chunker will typically output a report to indicate what was processed and whether any failures occurred.
- Streams[1]: Streams are specific to a particular Chunker. The Streams have business logic to process the Chunks created by the Chunker. Streams process each Chunk in a single transaction. The number of Streams to use per Batch Process depends on many factors including the amount of work to process (aka the number of Chunks), the type of work being processed and available system resources.
A Chunk is a single unit of work to be processed. Chunks contain the record or records to be processed in a single transaction. The Chunker creates the Chunks based upon a Chunk Size setting (configurable per batch). The Chunk Size governs how many items are contained within each Chunk.
Example: A Batch Process which reassesses cases has a Chunk Size of 5. This means that each Chunk contains 5 case IDs representing the 5 cases to be reassessed as part of the Chunk. Those 5 cases will be reassessed in the same transaction when the Stream processes that Chunk.
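The chunking step above can be sketched in plain Java. This is an illustrative sketch only, not the Cúram API; the partition method and case-ID list are hypothetical names invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkerExample {
    // Split the IDs to be processed into Chunks of at most chunkSize items.
    static List<List<Long>> partition(List<Long> ids, int chunkSize) {
        List<List<Long>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(ids.subList(i, Math.min(i + chunkSize, ids.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Long> caseIds = new ArrayList<>();
        for (long id = 1; id <= 12; id++) caseIds.add(id);
        List<List<Long>> chunks = partition(caseIds, 5);
        System.out.println(chunks.size());  // 3 (two full Chunks of 5, plus one of 2)
        System.out.println(chunks.get(2));  // [11, 12]
    }
}
```

The last Chunk simply holds whatever is left over, which is why Chunk counts don't have to divide evenly into the workload.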
If an exception occurs while processing a Single Threaded Batch Process the batch fails and the transaction rolls back[2].
If an exception occurs while processing a Chunk the Stream is smart enough to;
1. Mark the record as throwing an error
2. Roll the transaction back
3. Restart processing the Chunk, this time skipping the record which threw the error
The above flow repeats itself until the entire Chunk is processed.
Example: A Stream is processing a Chunk with a size of 5. It starts a transaction, processes records 1 and 2 and then record 3 throws an exception. At this point the Stream rolls the transaction back and starts processing again. This time it processes records 1 and 2, skips 3 and attempts 4. If 4 were to throw an exception the Stream would roll the transaction back for the second time and restart processing records 1 and 2, skipping 3 and 4 and then processing 5. When all records are processed the Stream moves onto the next Chunk. It continues doing this until all Chunks are processed.
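The retry-and-skip flow described above can be sketched generically in Java. Again, this is an illustration of the algorithm only, not Cúram code; processChunk and the failing records are made up for the example, and a Set stands in for the transaction's uncommitted work:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

public class StreamRetryExample {
    // Process one Chunk. On a failure: mark the record as bad, discard this
    // attempt's work (the "rollback") and restart the Chunk, skipping
    // known-bad records. Returns the records committed on the final pass.
    static Set<Long> processChunk(List<Long> chunk, Consumer<Long> processor, Set<Long> skipped) {
        restart:
        while (true) {
            Set<Long> done = new HashSet<>();        // uncommitted work for this attempt
            for (Long id : chunk) {
                if (skipped.contains(id)) continue;  // skip records that failed earlier
                try {
                    processor.accept(id);            // business logic; may throw
                    done.add(id);
                } catch (RuntimeException e) {
                    skipped.add(id);                 // 1. mark the record as failed
                    continue restart;                // 2 + 3. roll back and restart the Chunk
                }
            }
            return done;                             // all remaining records commit together
        }
    }

    public static void main(String[] args) {
        Set<Long> skipped = new HashSet<>();
        // Hypothetical processor that fails on records 3 and 4, as in the example above.
        Set<Long> done = processChunk(Arrays.asList(1L, 2L, 3L, 4L, 5L),
                id -> { if (id == 3L || id == 4L) throw new RuntimeException("boom"); },
                skipped);
        System.out.println(done);     // records 1, 2 and 5 committed on the third pass
        System.out.println(skipped);  // records 3 and 4 skipped
    }
}
```

Note that every restart reprocesses the surviving records from scratch, which is why record-level processing in a Streamed batch should be idempotent within a Chunk.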
Streams read Chunks from the BatchProcessChunk table, which contains the Chunks to be processed, whether they've been processed and the instanceID of the batch they relate to. InstanceIDs are used to tie Streams to the work created by their associated Chunker. The instanceID is what Streams use to know which Chunks to process (as it is possible to execute multiple Streamed batches of different types in parallel).
The BatchProcessChunk table can be used to determine how much work is remaining and how quickly the Chunks are being processed.
The following SQL can be used to determine how many Chunks have been processed;
SELECT Count(*), instanceid, status
FROM batchprocesschunk
GROUP BY instanceid, status;
BatchProcessChunk records with a status of BPCS1 have not been processed. BatchProcessChunk records with a status of BPCS2 have been processed. Adding the total number of BPCS1 and BPCS2 records will give you the total number of Chunks for this batch.
The following SQL can be used to determine the pace of a job (aka the number of Chunks being processed per minute);
SELECT Trunc(lastwritten) AS LastWritten,
To_char(lastwritten, 'HH24')
|| ':'
|| To_char(lastwritten, 'MI') AS TIME,
status,
Count(*)
FROM batchprocesschunk
WHERE instanceid = '<INSTANCE_ID>'
AND status = 'BPCS2'
GROUP BY Trunc(lastwritten),
To_char(lastwritten, 'HH24'),
To_char(lastwritten, 'MI'),
status
ORDER BY 1 DESC,
2 DESC,
3 DESC;
Replace <INSTANCE_ID> with the instanceID of the Batch Process of interest before executing this SQL.
Streams initially go into a waiting state when launched. In this state the Stream polls the BatchProcess table looking for records corresponding to its instanceID. The Stream will poll indefinitely until it finds work to process. When the Stream finds a record in the BatchProcess table corresponding to its instanceID it will transition to a processing state. In this state the Stream will process Chunks until there are no more Chunks to process. At that point the Stream will terminate.
Example: A Chunker is launched with 5 Streams. Stream launches are staggered by 5 seconds to avoid database contention when bootstrapping. By the time the 5th Stream is launched and ready to process, the other 4 Streams have already processed the Chunks and the Chunker has cleaned everything up. In this scenario the 5th Stream will wait indefinitely and will need to be manually terminated, unless another run of the batch is planned.
Points to note about Streams;
- If a Stream is started and the Chunker has not finished chunking, the Stream will wait
- If the Stream doesn't find any Chunks when launched it will wait indefinitely
- Streams will continue processing Chunks until there are no more Chunks to process
- When there are no more Chunks to process Streams will terminate themselves
- Streams do not require the Chunker to run, so will continue processing if the Chunker is terminated mid-way through a job
- If a Stream is terminated while processing, the current Chunk it is processing will be rolled back and will not be processed
- If a Stream is terminated and then restarted prior to the Chunks being processed it will continue processing, starting with the next available Chunk
- Streams cache information from the database when launched, so when launching multiple Streams it is best to stagger them to avoid database contention
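The processing and self-termination behaviour above can be illustrated with a generic JDK sketch, in which several "Streams" drain a shared queue of Chunks and terminate when none remain. This is illustrative only; real Streams poll database tables (BatchProcess and BatchProcessChunk), not an in-memory queue, and the names here are invented:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamWorkerExample {
    // Launch nStreams workers against a shared Chunk queue. Each worker takes
    // Chunks until none remain, then terminates itself, mirroring how Streams
    // process until the Chunk table is empty.
    static int runStreams(Queue<String> chunks, int nStreams) throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        Runnable stream = () -> {
            while (chunks.poll() != null) {
                processed.incrementAndGet();  // stand-in for processing one Chunk
            }
        };
        Thread[] threads = new Thread[nStreams];
        for (int i = 0; i < nStreams; i++) {
            threads[i] = new Thread(stream);
            threads[i].start();
        }
        for (Thread t : threads) t.join();    // all Streams have terminated
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Queue<String> chunks = new ConcurrentLinkedQueue<>();
        for (int i = 1; i <= 10; i++) chunks.add("chunk-" + i);
        System.out.println(runStreams(chunks, 5));  // 10 - each Chunk processed exactly once
    }
}
```

Because the queue hands each Chunk to exactly one worker, adding more Streams shares the same workload rather than duplicating it, which is the point of the Streaming model.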
The Chunker monitors (polls) Chunk status during execution. When all Chunks have been processed it will perform any post-processing required by business rules and then clean up the BatchProcessChunk and associated tables. The Chunker may also output a report detailing the number of records processed, failures and any skipped Chunks. The quality of this report is largely dependent on the developer of the Batch Process.
If the Chunker is terminated while Streams are processing Chunks no cleanup will be performed. In this scenario, if the Chunker is restarted and no manual cleanup has occurred, the Chunker will continue to monitor the progress of processed Chunks as if nothing has happened. If the Chunker detects Chunks of its instanceID when launched it will not truncate the batch control tables and re-chunk.
In order to restart a Streamed Batch Process from the start after a mid-processing termination, the batch control tables need to be purged using the following SQL;
DELETE FROM batchprocesschunk WHERE instanceid = '<INSTANCE_ID>';
DELETE FROM batchchunkkey WHERE instanceid = '<INSTANCE_ID>';
DELETE FROM batchprocess WHERE instanceid = '<INSTANCE_ID>';
COMMIT;
Replace <INSTANCE_ID> with the instanceID of the Batch Process of interest before executing this SQL.
DB-to-JMS is a feature of the Cúram Batch Framework which allows batch processes access to Cúram JMS queues. DB-to-JMS works by intercepting messages sent to the Cúram JMS messaging queues and storing them on a database table (JMSLiteMessage). At the end of batch processing the Batch Launcher will trigger a call to the DB-to-JMS servlet (running in an Application Server), which will initiate a deferred process to transfer messages stored in the JMSLiteMessage table to their JMS queue.
If the configured Application Server is not accessible when the Batch Launcher attempts to call the DB-to-JMS servlet an exception will occur and the batch process will appear to fail. This failure can be misleading, as the DB-to-JMS call is made in a separate transaction from the batch processing, so in this instance the batch processing has actually succeeded. The entries in the JMSLiteMessage table for this batch will be processed the next time a call to the DB-to-JMS servlet is successfully made.
DB-to-JMS functionality is available for Single Threaded and Streamed batch processes. More information on DB-to-JMS functionality and how to enable and configure it can be found in the IBM Cúram Documentation Centre.
[1] Chunkers can be configured to run as Streams, although we tend to disable this ability to avoid confusion.
[2] This is typically by design, as catching and handling exceptions could lead to inconsistent data.
Deploy a Spring Boot web application to the Microsoft Azure App Service.
Spring Boot makes it easy to create web applications quickly and simply. From the Spring Boot website;
Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
When you’ve created a web application with Spring Boot, you will need somewhere to deploy it. That’s where Microsoft Azure’s App Service comes in.
The Azure App Service is a fully-managed platform designed to run and scale your applications effortlessly on Windows or Linux. Azure takes care of infrastructure maintenance, load balancing and more, so you get to focus your time on developing code.
There are a few articles out there on how to deploy a Spring Boot application to the Azure App Service, but I couldn't find a simple run-through showing the steps - so hopefully this works for folks as they dip their toes in the water of this stuff!
1. You will need your Spring Boot web application packaged using mvn package. If you don't have a Spring Boot web application to hand, you can use a simple hello world application I created as an example for this article.
You can access the hello world application here: https://github.com/cbeech1980/HelloWorld
2. Once you have your application packaged you will need to create your App Service Plan and App Service. Details on how to create these can be found within the Azure documentation.
3. Configure your App within Azure to use Java 8 and the latest version of Tomcat;
4. Connect to your App via FTP in order to upload your Spring Boot application. In order to do this you need to obtain the FTP credentials. These can be found by clicking the Get Publish Profile link from the App home in the Azure Portal;
5. This will download an XML publish profile file which contains the information you need.
6. Use the username, password and FTP location from the publish profile to connect to the App Service.
7. Once connected you will need to upload the Spring Boot jar created by mvn package as well as a web.config file. An example web.config file can be seen below;
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
    </handlers>
    <httpPlatform processPath="%JAVA_HOME%\bin\java.exe"
                  arguments='-Djava.net.preferIPv4Stack=true -Dserver.port=%HTTP_PLATFORM_PORT% -jar "%HOME%\site\wwwroot\HelloWorld-0.0.1-SNAPSHOT.jar"'>
    </httpPlatform>
  </system.webServer>
</configuration>
8. FTP the HelloWorld-0.0.1-SNAPSHOT.jar and web.config file to the /site/wwwroot directory
9. Open your browser and browse to the App Home and you should see your Spring Boot application running!
Tuesday, 20 December 2011
Basmati Bingo!
Many things in life aren't guaranteed and water damaged Apple products are no exception.
This is very unfortunate as not only are they incredibly expensive to replace, but it's also ridiculously easy to find yourself in the regrettable situation of having a soggy iSomething to fix.
One such situation arose on Sunday when I was trying to save some time and (to cut a long story short) my iPhone ended up down the loo. It could have been worse however, the loo was mid-flush at the time (after a #1) so I was able (and willing) to "dive in" after it. If it wasn't for my Ninja-like reactions I'm sure I'd be $250 worse off, but I was able to save it....or so I thought.
After picking it out of the bowl and giving it some vigorous towelling down (and treatment with the hair dryer) it was all over the place. The speakers wouldn't work, I couldn't make a call, it wouldn't play music, the navigation was slow and messy. In short I thought I'd buggered it up and was going to have to admit, in the middle of the Apple Store, that I'd flushed my phone down the bog and needed a new one.
However, there was one option remaining - the rice.
It turns out that uncooked rice is very good at absorbing moisture. So, I turned my phone off and left it in a bag of uncooked Basmati rice for about 16 hours. The morning after I gingerly switched my phone on and lo and behold, it was back to normal again!
So, we can take two very valuable life lessons from this experience;
1. Don't try and Facebook while having a piss
2. Rice is amazing
Tuesday, 13 December 2011
How to start Oracle on AIX...
....after shutting it down with the Enterprise Manager console and crapping yourself as you can't get it back up!
Yes, this happened to me earlier today so I thought I'd share how to start Oracle back up on AIX. It's not quite as simple as on Windows (i.e. by starting the service) but it's not too painful.
First of all, you need a sysdba user and password...which you probably have if you've shut it down via the EM console.
Next you need to run the following command...
$ sqlplus /nolog
SQL> CONNECT SYS/sys_password as SYSDBA
SQL> STARTUP
...and with a bit of luck you should be in business!
Monday, 12 December 2011
Welcome one, welcome all!
Welcome to 'Bicster's Blog'; your one stop shop for all things everything!
I figure the world is lacking some thoughtful insight into the world of Bicster (aka Chris Beech) and what finer forum than the blogosphere?
You can expect some random rants, some technical hints, some useful insights, some opinion but mostly complete randomness.
So, fasten your seatbelts and ensure your seat back and folding trays are in their fully upright position...this could be a bumpy ride!