How to Effectively Work With a Relational Database Using Java JDBC

If you don’t want to use any of the ORM frameworks to implement database queries and feel like even the JdbcTemplate Spring Tool isn’t right for you, try the JdbcBuilder class from the UjoTools project.

Anyone who’s ever programmed SQL queries through the JDBC library has to admit the interface isn’t very user-friendly. Maybe that’s why a whole array of libraries has emerged, varying in both the services they provide and their degree of complexity. In this article, I’d like to show you a convenient class from the Java UjoTools library called JdbcBuilder. Its purpose is to help with assembling and executing SQL statements — nothing more, nothing less. The JdbcBuilder class doesn’t address mapping of results to JavaBeans, doesn’t address optimization, and doesn’t provide a database connection.
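To give a feel for what fluent SQL assembly buys you over raw string concatenation, here is a tiny self-contained sketch of the idea: accumulating a SQL string and its bind values together. Note that SqlSketch is a hypothetical class invented for illustration only, not the real JdbcBuilder API; see the UjoTools project for that.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of fluent SQL assembly; NOT the UjoTools JdbcBuilder API.
public class SqlSketch {
    private final StringBuilder sql = new StringBuilder();
    private final List<Object> args = new ArrayList<>();

    // Appends a SQL fragment, separated by a single space
    public SqlSketch write(String fragment) {
        if (sql.length() > 0) sql.append(' ');
        sql.append(fragment);
        return this;
    }

    // Appends a fragment containing a "?" placeholder and remembers its bind value
    public SqlSketch bind(String fragment, Object value) {
        write(fragment);
        args.add(value);
        return this;
    }

    public String getSql()       { return sql.toString(); }
    public List<Object> getArgs() { return args; }

    public static void main(String[] ignored) {
        SqlSketch q = new SqlSketch()
            .write("SELECT id, name FROM employee")
            .write("WHERE")
            .bind("salary > ?", 10_000)
            .write("AND")
            .bind("department = ?", "R&D");
        System.out.println(q.getSql());  // the assembled statement with ? placeholders
        System.out.println(q.getArgs()); // the bind values, in order
    }
}
```

The real JdbcBuilder additionally executes the assembled statement against a Connection; the point here is only the shape of the fluent API.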

Original Link

Shortest Code and Lowest Latency in Java


Who can write the shortest Java code with the lowest latency, and what tools are used?

At Oracle Code One, I promoted a code challenge during my speech. The contestants were given a specific problem, and the winner would be the one with the lowest possible latency multiplied by the number of code lines used (i.e. having low latency and, at the same time, using as few lines as possible is good). I also shared the contest on social media to get as many developers involved as possible.
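The scoring rule boils down to simple arithmetic. A quick sketch with hypothetical latency figures shows how a slower but shorter solution can still win:

```java
public class ContestScore {

    // Score = measured latency multiplied by the number of code lines; lower wins.
    static long score(long latencyNanos, int codeLines) {
        return latencyNanos * codeLines;
    }

    public static void main(String[] args) {
        // Hypothetical entries: a fast 10-line solution vs. a slower 5-liner
        System.out.println(score(1_000, 10)); // 10000
        System.out.println(score(1_500, 5));  // 7500: the shorter entry wins here
    }
}
```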

Original Link

Hibernate 5: How to Persist LocalDateTime and Co With Hibernate

Do you use Java 8’s date and time API in your projects? Let’s be honest — working with java.util.Date is a pain and I would like to replace it with the new API in all of my projects.

The only problem is that JPA does not support it.
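A common workaround (until JPA 2.2 added native java.time support) is a JPA 2.1 AttributeConverter between LocalDateTime and java.sql.Timestamp. Stripped of the @Converter annotation and the JPA interfaces, the core of such a converter is just this JDK-only round trip:

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

// The conversion a JPA 2.1 AttributeConverter performs for LocalDateTime:
// JDBC understands java.sql.Timestamp, so values are converted on the way
// to the database column and back to the entity attribute.
public class LocalDateTimeMapping {

    public static Timestamp toDatabaseColumn(LocalDateTime value) {
        return value == null ? null : Timestamp.valueOf(value);
    }

    public static LocalDateTime toEntityAttribute(Timestamp value) {
        return value == null ? null : value.toLocalDateTime();
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2018, 5, 2, 10, 40, 34);
        Timestamp column = toDatabaseColumn(now);
        System.out.println(column);                    // what JDBC stores
        System.out.println(toEntityAttribute(column)); // what the entity sees
    }
}
```

In a real converter, these two methods become convertToDatabaseColumn() and convertToEntityAttribute() of an AttributeConverter<LocalDateTime, Timestamp> annotated with @Converter(autoApply = true).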

Original Link

Modules of the Spring Architecture

Spring Framework Architecture

The basic idea behind the development of the Spring Framework was to make it a one-stop shop where you can integrate and use modules according to the needs of your application. This modularity is due to the architecture of Spring. There are about 20 modules in the Spring Framework, used according to the nature of the application.

Below is the architecture diagram of the Spring Framework. There, you can see all the modules defined on top of the Core Container. This layered architecture contains all the necessary modules that a developer may require in developing an enterprise application. Also, the developer is free to choose or discard any module according to the application’s requirements. Due to its modular architecture, integrating the Spring Framework with other frameworks is super easy.

Original Link

Connect to Cloudant Data in AWS Glue Jobs Using JDBC

AWS Glue is an ETL service from Amazon that allows you to easily prepare and load your data for storage and analytics. Using the PySpark module along with AWS Glue, you can create jobs that work with data over JDBC connectivity, loading the data directly into AWS data stores. In this article, we walk through uploading the CData JDBC Driver for Cloudant into an Amazon S3 bucket and creating and running an AWS Glue job to extract Cloudant data and store it in S3 as a CSV file.

Upload the CData JDBC Driver for Cloudant to an Amazon S3 Bucket

In order to work with the CData JDBC Driver for Cloudant in AWS Glue, you will need to store it (and any relevant license files) in a bucket in Amazon S3.

Original Link

Groovy SQL: More Groovy Goodness

Ladies and gentlemen, today I want to share with you how, in the context of developing my progressive web applications, the Apache Groovy language is making my life a breeze. In particular, the groovy-sql module provides a higher-level abstraction over Java’s JDBC technology, and it is hard for me to understand why I would have to use JPA and an ORM like Hibernate just to connect to my MySQL database. Also, since interacting with a relational database is time-consuming, let us not waste any more time creating a new connection for each user: it would be a huge mistake not to use a database connection pool library, like Apache Commons DBCP, to set up our data source.

Maven Dependencies

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-dbcp2</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.41</version>
</dependency>
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-sql</artifactId>
    <version>2.4.13</version>
</dependency>

Original Link

Manage User Session With Spring JDBC Session

This article will demonstrate how to configure and use the Spring Session to manage session data in a web application with Spring Boot. For a more in-depth look at the code, check out this GitHub repository.


In a web application, user session management is crucial for managing user state. Spring Session provides an API and implementations for managing a user’s session data by storing it in a persistent data store. Spring Session supports multiple datastores, like RDBMS, Redis, Hazelcast, MongoDB, etc., to save the user session data.
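Assuming Spring Boot auto-configuration (with spring-session-jdbc on the classpath), the JDBC-backed session store boils down to a small configuration sketch. The property names below are Spring Boot’s, and the table name shown is Spring Session’s default:

```properties
# application.properties: keep HTTP session data in the relational database
spring.session.store-type=jdbc

# Create Spring Session's schema automatically on startup
spring.session.jdbc.initialize-schema=always
spring.session.jdbc.table-name=SPRING_SESSION
```

With this in place, Spring Boot replaces the in-memory HttpSession with one persisted through your DataSource, so sessions survive application restarts.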

Original Link

Object-Relational Mapping (ORM) With Redis Data Entities in Java

Object-relational mapping (ORM) techniques make it easier to work with relational data sources and can bridge your logical business model with your physical storage model. Follow this tutorial to integrate connectivity to Redis data into a Java-based ORM framework, Hibernate.

You can use Hibernate to map object-oriented domain models to a traditional relational database. The tutorial below shows how to use the CData JDBC Driver for Redis to generate an ORM of your Redis repository with Hibernate.

Original Link

JDBC Connection Pool for GlassFish and Payara Java Application Servers

Java application servers, such as GlassFish and Payara, provide native support for the JDBC connection pooling mechanism to enhance database access. This implementation allows database connections to be cached in the pool and reused.

Configuring a JDBC connection pool for your application server can reduce delays and resource consumption compared to opening a new connection for each individual request. This strongly enhances database performance, especially for dynamic, database-driven applications.

Follow the simple steps below to configure Java Database Connectivity for GlassFish and Payara with Jelastic PaaS.

Create Environment

1. Log into your Jelastic account and click the New environment button.

2. In the topology wizard, switch to the Java tab, pick GlassFish or Payara as your application server, and add the required database (as an example, we use the GlassFish and MySQL pair). Next, set the resource limits for your containers and enter any preferred environment name.

Click Create and wait a few minutes for your new environment. Next, proceed to create the JDBC connection pool.

Configure Database

1. Click the Open in browser button for your MySQL node.

Use the database credentials from the received email to log in to the opened phpMyAdmin panel.

2. Once inside, switch to the User accounts tab and click the Add user account link. Within the opened form, specify all of the required data and tick the Create database with the same name and grant all privileges option.

Click Go at the bottom of the page to initiate the addition of a database and user for connection pooling.
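For reference, the phpMyAdmin form above amounts to roughly the following MySQL statements (the user name pooling matches our example; the password is a placeholder you should adjust):

```sql
-- Rough SQL equivalent of the "Add user account" form with
-- "Create database with the same name and grant all privileges" ticked
CREATE USER 'pooling'@'%' IDENTIFIED BY 'your_password';
CREATE DATABASE pooling;
GRANT ALL PRIVILEGES ON pooling.* TO 'pooling'@'%';
FLUSH PRIVILEGES;
```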

Set Up Java Application Server

1. The JDBC MySQL connector is provided by default with the stack (located in the /opt/glassfish/glassfish/domains/domain1/lib directory on your GlassFish server or /opt/payara/glassfish/domains/domain1/lib on Payara), so you don’t need to upload one manually.

2. Log in to the GlassFish (or Payara) admin panel using the credentials from the appropriate email.

3. Navigate to the Resources > JDBC > JDBC Connection Pools section and click the New button on the tools panel. In the form that appears, fill in the following fields:

  • Pool Name – type any preferred name
  • Resource Type – select the javax.sql.DataSource item from the drop-down list
  • Database Driver Vendor – choose the MySQL option


Click the Next button to continue.

4. Find and modify the following Additional Properties:

  • User – your database login (pooling in our case)
  • ServerName – your database host, without the protocol
  • Port – the port number, 3306
  • DatabaseName – your database name (pooling in our case)
  • Password – the password for the specified user
  • URL and Url – a JDBC connection string in the jdbc:mysql://{db_host}:3306/ format; the {db_host} placeholder can be substituted with either the node hostname or IP address


After these properties are specified, click Finish.

5. In order to verify accessibility, select the connection pool you just created and click the Ping button. If everything is OK, you should see the Ping Succeeded pop-up message.

6. Go to the Resources > JDBC > JDBC Resources section and click the New button to create JDBC resources for pooling. Within the opened window, provide any desired JNDI Name and choose your Pool Name from the drop-down list.

Confirm the resource creation with the OK button at the top.

Connect From Java Code

Put the following strings into the Java class of your application code:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("{resources}");
Connection conn = ds.getConnection();

Here, substitute the {resources} placeholder with your JNDI name from the previous section (i.e. jdbc/mypool in our case).

Now, you can deploy your Java application to the created Jelastic PaaS environment and enjoy the benefits of GlassFish and Payara connection pooling!

Original Link

Spring Tips: JDBC [Video]

Hi, Spring fans! In this installment of Spring Tips, we look at the Spring support for the Java Database Connectivity (JDBC) API. Spring’s support for JDBC is one of many reasons a lot of people first started using Spring 15+ years ago! If you aren’t committed to a full-blown ORM and/or want to leverage the full power of JDBC, then this video is for you!

Speaker: Josh Long

Original Link

How to Run a Bulk INSERT .. RETURNING Statement With Oracle and JDBC

When inserting records into SQL databases, we often want to fetch back generated IDs and possibly other trigger, sequence, or default generated values. Let’s assume we have the following table:

-- DB2
CREATE TABLE x (
  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  j VARCHAR(50),
  k DATE DEFAULT CURRENT_DATE
);

-- PostgreSQL
CREATE TABLE x (
  i SERIAL PRIMARY KEY,
  j VARCHAR(50),
  k DATE DEFAULT CURRENT_DATE
);

-- Oracle
CREATE TABLE x (
  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  j VARCHAR2(50),
  k DATE DEFAULT SYSDATE
);


DB2 is the only database currently supported by jOOQ that implements the SQL standard according to which we can SELECT from any INSERT statement, including:

SELECT *
FROM FINAL TABLE (
  INSERT INTO x (j)
  VALUES ('a'), ('b'), ('c')
);

The above query returns:

I |J |K |
1 |a |2018-05-02 |
2 |b |2018-05-02 |
3 |c |2018-05-02 |

Pretty neat! This query can simply be run like any other query in JDBC, and you don’t have to go through any hassles.

PostgreSQL and Firebird

These databases have a vendor-specific extension that does the same thing, almost as powerful:

-- Simple INSERT .. RETURNING query
INSERT INTO x (j)
VALUES ('a'), ('b'), ('c')
RETURNING *;

-- If you want to do more fancy stuff
WITH t AS (
  INSERT INTO x (j)
  VALUES ('a'), ('b'), ('c')
  RETURNING *
)
SELECT * FROM t;

Both syntaxes work equally well; the latter is just as powerful as DB2’s, as the result of an insertion (or update, delete, or merge) can be joined to other tables. Again, no problem with JDBC.


Oracle

In Oracle, this is a bit more tricky. The Oracle SQL language doesn’t have an equivalent of DB2’s FINAL TABLE (DML statement). The Oracle PL/SQL language, however, does support the same syntax as PostgreSQL and Firebird. This is perfectly valid PL/SQL:

-- Create a few auxiliary types first
CREATE TYPE t_i AS TABLE OF NUMBER(38);
CREATE TYPE t_j AS TABLE OF VARCHAR2(50);
CREATE TYPE t_k AS TABLE OF DATE;
/

DECLARE
  -- These are the input values
  in_j t_j := t_j('a', 'b', 'c');
  out_i t_i;
  out_j t_j;
  out_k t_k;
  c1 SYS_REFCURSOR;
  c2 SYS_REFCURSOR;
  c3 SYS_REFCURSOR;
BEGIN
  -- Use PL/SQL's FORALL command to bulk insert the
  -- input array type and bulk return the results
  FORALL i IN 1 .. in_j.COUNT
    INSERT INTO x (j) VALUES (in_j(i))
    RETURNING i, j, k
    BULK COLLECT INTO out_i, out_j, out_k;

  -- Fetch the results and display them to the console
  OPEN c1 FOR SELECT * FROM TABLE(out_i);
  OPEN c2 FOR SELECT * FROM TABLE(out_j);
  OPEN c3 FOR SELECT * FROM TABLE(out_k);

  dbms_sql.return_result(c1);
  dbms_sql.return_result(c2);
  dbms_sql.return_result(c3);
END;

A bit verbose, but it has the same effect. Now, from JDBC:

try (Connection con = DriverManager.getConnection(url, props);
     Statement s = con.createStatement();

     // The statement itself is much more simple as we can
     // use OUT parameters to collect results into, so no
     // auxiliary local variables and cursors are needed
     CallableStatement c = con.prepareCall(
         "DECLARE "
       + "  v_j t_j := ?; "
       + "BEGIN "
       + "  FORALL j IN 1 .. v_j.COUNT "
       + "    INSERT INTO x (j) VALUES (v_j(j)) "
       + "    RETURNING i, j, k "
       + "    BULK COLLECT INTO ?, ?, ?; "
       + "END;")) {

    try {

        // Create the table and the auxiliary types
        s.execute(
            "CREATE TABLE x ("
          + "  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,"
          + "  j VARCHAR2(50),"
          + "  k DATE DEFAULT SYSDATE"
          + ")");
        s.execute("CREATE TYPE t_i AS TABLE OF NUMBER(38)");
        s.execute("CREATE TYPE t_j AS TABLE OF VARCHAR2(50)");
        s.execute("CREATE TYPE t_k AS TABLE OF DATE");

        // Bind input and output arrays
        c.setArray(1, ((OracleConnection) con).createARRAY(
            "T_J", new String[] { "a", "b", "c" })
        );
        c.registerOutParameter(2, Types.ARRAY, "T_I");
        c.registerOutParameter(3, Types.ARRAY, "T_J");
        c.registerOutParameter(4, Types.ARRAY, "T_K");

        // Execute, fetch, and display output arrays
        c.execute();
        Object[] i = (Object[]) c.getArray(2).getArray();
        Object[] j = (Object[]) c.getArray(3).getArray();
        Object[] k = (Object[]) c.getArray(4).getArray();

        System.out.println(Arrays.asList(i));
        System.out.println(Arrays.asList(j));
        System.out.println(Arrays.asList(k));
    }
    finally {
        try {
            s.execute("DROP TYPE t_i");
            s.execute("DROP TYPE t_j");
            s.execute("DROP TYPE t_k");
            s.execute("DROP TABLE x");
        }
        catch (SQLException ignore) {}
    }
}

The above code will display:

[1, 2, 3]
[a, b, c]
[2018-05-02 10:40:34.0, 2018-05-02 10:40:34.0, 2018-05-02 10:40:34.0]

Exactly what we wanted.

jOOQ Support

A future version of jOOQ will emulate the above PL/SQL block from the jOOQ INSERT .. RETURNING statement:

DSL.using(configuration)
   .insertInto(X)
   .columns(X.J)
   .values("a")
   .values("b")
   .values("c")
   .returning(X.I, X.J, X.K)
   .fetch();

This will correctly emulate the query for all of the databases that natively support the syntax. In the case of Oracle, since jOOQ can neither create nor assume any SQL TABLE types, PL/SQL types from the DBMS_SQL package will be used.

The relevant issue is here.

Original Link

Hazelcast Jet Tutorial: Building Custom JDBC Sinks

Hazelcast Jet supports writing into a number of third-party systems, including HDFS, Apache Kafka, and others. But what if you want to write into your own system that is not supported by Jet out-of-the-box? Starting with version 0.6, Jet offers a new simple-to-use API for building custom Sinks, and this tutorial will show you how to use it!

In this tutorial, we are going to build a JDBC sink writing Stock updates to a relational database, but you can apply the same principles for building an arbitrary sink.


The basic construction block is SinkBuilder. You can obtain its instance via the factory method Sinks::builder. The builder accepts a bunch of functions controlling the Sink behavior. The two most important functions are:

  1. The function you pass to the builder. It creates a context object, which is then passed to the onReceive() function.
  2. The onReceive() function. Jet calls this function for each element the Sink receives. The function receives the element itself along with the context, and this is where you write an element to your target system.

You can optionally also pass two other functions: they control lifecycle and batching behavior of the sink.

A very simple Sink could look like this:

public class JDBCSink {

    private static final String INSERT_QUERY =
        "insert into stock_updates (ts, price, symbol) values (?, ?, ?)";

    public static Sink newSink(String connectionUrl) {
        return Sinks.builder((unused) -> JDBCSink.openConnection(connectionUrl))
                    .onReceiveFn(JDBCSink::insertUpdate)
                    .destroyFn(JDBCSink::closeConnection)
                    .build();
    }

    private static Connection openConnection(String connectionUrl) {
        try {
            return DriverManager.getConnection(connectionUrl);
        } catch (SQLException e) {
            throw new IllegalStateException("Cannot acquire a connection with URL '"
                + connectionUrl + "'", e);
        }
    }

    private static void closeConnection(Connection c) {
        try {
            c.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    private static void insertUpdate(Connection c, StockPriceUpdate i) {
        try (PreparedStatement ps = c.prepareStatement(INSERT_QUERY)) {
            ps.setLong(1, i.getTimestamp());
            ps.setLong(2, i.getPrice());
            ps.setString(3, i.getSymbol());
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Error while inserting " + i
                + " into database", e);
        }
    }
}

The implementation is rather simplistic and perhaps naive, but it works! Jet calls the openConnection() function for each Sink instance it creates. This function acquires a new JDBC connection. This connection is then passed to the insertUpdate() function along with each new item. The Sink Builder wires all these functions together and creates a regular sink from them.

One reason why it’s so simple is that it has origins in the Jet threading model: a single Sink instance is always single-threaded and you do not have to deal with concurrency.

This is how you could use your sink in a Jet Pipeline:

String connectionJdbcUrl = getJDBCUrlString();
Pipeline pipeline = Pipeline.create()
    .drawFrom(Sources.mapJournal(MAP_NAME, JournalInitialPosition.START_FROM_OLDEST))
    .map(Map.Entry::getValue)
    .drainTo(JDBCSink.newSink(connectionJdbcUrl))
    .getPipeline();

The pipeline reads from the IMap change journal, extracts just the value from each entry, and passes the value to our Sink. That’s it! You can see it in action in this project.


While the code above works, it has multiple drawbacks. One of the big ones is performance:

  1. It creates a new prepared statement for each element it receives. This is unnecessary as prepared statements can be perfectly reused.
  2. The insertUpdate() function calls a blocking JDBC method for each element it receives. Again, this is not great performance-wise as it usually involves a network roundtrip to a database and this could very easily become a bottleneck.

We can address both concerns with a rather simple code change:

public class BetterJDBCSink {

    private static final String INSERT_QUERY =
        "insert into stock_updates (ts, price, symbol) values (?, ?, ?)";

    public static Sink newSink(String connectionUrl) {
        return Sinks.builder((unused) -> BetterJDBCSink.createStatement(connectionUrl))
                    .onReceiveFn(BetterJDBCSink::insertUpdate)
                    .destroyFn(BetterJDBCSink::cleanup)
                    .flushFn(BetterJDBCSink::flush)
                    .build();
    }

    private static PreparedStatement createStatement(String connectionUrl) {
        Connection connection = null;
        try {
            connection = DriverManager.getConnection(connectionUrl);
            return connection.prepareStatement(INSERT_QUERY);
        } catch (SQLException e) {
            closeSilently(connection);
            throw new IllegalStateException("Cannot acquire a connection with URL '"
                + connectionUrl + "'", e);
        }
    }

    private static void closeSilently(Connection connection) {
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {
                // ignored
            }
        }
    }

    private static void cleanup(PreparedStatement ps) {
        try {
            if (ps != null) {
                ps.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (ps != null) {
                try {
                    Connection connection = ps.getConnection();
                    closeSilently(connection);
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void flush(PreparedStatement ps) {
        try {
            ps.executeBatch();
        } catch (SQLException e) {
            throw new IllegalStateException("Error while storing a batch into database", e);
        }
    }

    private static void insertUpdate(PreparedStatement ps, StockPriceUpdate i) {
        try {
            ps.setLong(1, i.getTimestamp());
            ps.setLong(2, i.getPrice());
            ps.setString(3, i.getSymbol());
            ps.addBatch();
        } catch (SQLException e) {
            throw new IllegalStateException("Error while inserting " + i
                + " into database", e);
        }
    }
}

As you can see, the basic structure is still the same: a SinkBuilder with a bunch of functions registered. There are two significant changes:

  1. The function we pass to the SinkBuilder no longer produces a Connection; it directly produces a PreparedStatement. This way we can easily reuse the statement, as Jet will pass it to the insertUpdate() function.
  2. The insertUpdate() function does not execute the statement directly; it just adds it to a batch. Batching is a feature JDBC provides, and a JDBC driver is free to optimize batched query execution. We have to notify the driver when it is a good time to actually execute the batched queries. To do so, we registered a new function in the SinkBuilder: flush(). Jet calls it when it is a good time to actually flush the batched records.

This optimized Sink will perform much better. It does not need to wait for a database roundtrip for each element. Instead, it batches the elements and goes to the database only once in a while. When exactly? This is determined by Jet itself and depends on factors such as the incoming data rate: when traffic is low, Jet calls the flush for each item. However, when the rate of incoming elements increases, Jet will call the flush less frequently, and the batching effect will kick in.

Wrapping Up

We built a completely custom Sink in a few lines of code. It would require minimal changes to use, e.g., JMS to write into a message broker. You can see the full source of this tutorial on GitHub.

Have a look at Jet Reference Manual and some of the awesome demos the team has built!

Original Link

Accessing Data – The Reactive Way

This is the fourth post of my "Introduction to Eclipse Vert.x" series. In this article, we are going to see how we can use JDBC in an Eclipse Vert.x application using the asynchronous API provided by the vertx-jdbc-client. But before diving into JDBC and other SQL subtleties, we are going to talk about Vert.x Futures.

In "The Introduction to Vert.x" Series

Let’s start by refreshing our memory about the previous articles:

Original Link

Mocking JDBC Using a Set of SQL String/Result Pairs

In a previous post, I showed how the programmatic MockDataProvider can be used to mock the entire JDBC API through a single functional interface:

// context contains the SQL string and bind variables, etc.
MockDataProvider provider = context -> {

    // This defines the update counts, result sets, etc.
    // depending on the context above.
    return new MockResult[] { ... };
};

Writing the provider manually can be tedious in some cases, especially when a few static SQL strings need to be mocked and constant result sets would be OK. In that case, the MockFileDatabase is a convenient implementation that is based on a text file (or SQL string), which contains a set of SQL string/result pairs of the form:
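Such a file pairs each SQL string with a textual result and a row count, in roughly the following shape. This is an illustrative sketch from memory; consult the jOOQ manual for the exact MockFileDatabase syntax:

```
select first_name, last_name from actor;
> first_name last_name
> ---------- ---------
> GINA       DEGENERES
> NICK       WAHLBERG
@ rows: 2
```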

Original Link

Importing Google BigQuery Data Into H2O

CData JDBC Drivers provide self-service integration with machine learning and AI platforms such as H2O. The CData JDBC Driver for Google BigQuery allows you to import BigQuery tables to H2OFrames in memory. This article details how to use the JDBC driver in R or Python to import BigQuery data into H2O and create a Generalized Linear Model (GLM) based on the data.

The CData JDBC Drivers offer unmatched performance for interacting with live BigQuery data in H2O due to optimized data processing built into the driver. With embedded dynamic metadata querying, you can visualize and analyze BigQuery data using native H2O data types.

Start H2O With the JDBC Driver

Install the CData JDBC Driver for BigQuery and copy the JAR file (cdata.jdbc.bigquery.jar) and accompanying LIC file (cdata.jdbc.bigquery.lic) to the folder containing the H2O JAR file (h2o.jar). Once copied, you will need to start H2O with the JDBC Driver:

$ java -cp "cdata.jdbc.bigquery.jar:h2o.jar" water.H2OApp

Import BigQuery Data Into H2O

With H2O running, we can connect to the instance and use the import_sql_table function to import Google BigQuery data into the H2O instance.

Connecting to a Dataset

You can connect to a specific project and dataset by providing authentication to Google and then setting the Project and Dataset properties.

If you want to view a list of information about the available datasets and projects for your Google Account, execute a query to the Datasets or Projects view after you authenticate.

Authenticating to Google

You can authenticate with a Google account, a Google Apps account, or a service account. A service account is required to delegate domain-wide access. The authentication process follows the OAuth 2.0 authentication standard. See the Getting Started section of the help documentation for an authentication how-to.

Import Data From a Table

In the code samples below (R and Python), we use the import_sql_table function to import a Google BigQuery table into an H2O cloud. For this article, we will import a table representing DVD rental payments (download the CSV).

  • connection_url: The JDBC URL to connect to Google BigQuery using the CData JDBC Driver. For example: jdbc:bigquery:DataSetId=MyDataSetId;ProjectId=MyProjectId;InitiateOAuth=GETANDREFRESH;
  • table: The name of the BigQuery table to import.
  • columns (optional): The list of column names to import from the BigQuery table. All columns are imported by default.

Create a Model

Once the data is imported, we can create a GLM using the existing data as the training set. In R, this simply means calling the glm function, passing the predictor names, response variable and training frame. In Python, we import the H20GeneralizedLinearEstimator class and train the model based on the same parameters. With the model created and trained, you are ready to validate and create predictions based on new sets.

Code Samples

R Code

In R, we connect to the H2O instance (or create a new instance), set the variables, and import the table. Once the table is imported, we fit a GLM to the table using the glm function, passing the following parameters:

  • x: The vector containing the predictor variables to use in building the model.
  • y: The name of the response variable in the data.
  • training_frame: The ID of the training data frame.

With the GLM fit, we simply display the model.

h2o.init(strict_version_check = FALSE)

connection_url <- "jdbc:bigquery:DataSetId=MyDataSetId;ProjectId=MyProjectId;InitiateOAuth=GETANDREFRESH;"
table <- "payment"
my_table <- h2o.import_sql_table(connection_url, table, username = "", password = "")

# X is the index of the response variable
pred_names <- names(my_table)[-X]
my_table_glm <- h2o.glm(x = pred_names, y = "amount", training_frame = my_table)

Python Code

In Python, we connect to the H2O instance (or create a new instance), import the H2OGeneralizedLinearEstimator class, set the variables, and import the table. Once the table is imported, we create a GLM and then train the model, passing the following parameters (by default the train method uses all columns in the training frame except the response variable as predictor variables):

  • y: The name of the response variable in the data.
  • training_frame: The ID of the training data frame.
import h2o
h2o.init(strict_version_check = False)
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

connection_url = "jdbc:googlebigquery:DataSetId=MyDataSetId;ProjectId=MyProjectId;InitiateOAuth=GETANDREFRESH;"
table = "payment"
my_table = h2o.import_sql_table(connection_url, table, username = "", password = "")
my_glm = H2OGeneralizedLinearEstimator(model_id = 'my_table_glm')
my_glm.train(y = 'amount', training_frame = my_table)

With the code run, you now have a new model in H2O based on the “payment” table using the “amount” column as the response variable.


With the new GLM, you are ready to have H2O validate and make predictions on new data based on the “payment” table, allowing you to use machine learning and AI algorithms to drive analytics and create actionable insights to drive business.

Original Link

Top 5 Hidden jOOQ Features

jOOQ’s main value proposition is obvious: type-safe embedded SQL in Java.

People who actively look for such a SQL builder will inevitably stumble upon jOOQ and love it, of course. But a lot of people don’t really need a SQL builder — yet, jOOQ can still be immensely helpful in other situations through its lesser-known features.

Here’s a list of top five “hidden” jOOQ features.

1. Working With JDBC ResultSet

Even if you’re otherwise not using jOOQ but JDBC (or Spring JdbcTemplate, etc.) directly, one of the most annoying things is working with ResultSet. A JDBC ResultSet models a database cursor, which is essentially a pointer to a collection on the server that can be positioned anywhere, e.g. at the 50th record via ResultSet.absolute(50) (remember to start counting at 1).

The JDBC ResultSet is optimised for lazy data processing. This means that we don’t have to materialize the entire data set produced by the server in the client. This is a great feature for large (and even large-ish) data sets, but in many cases, it’s a pain. When we know we’re fetching only ten rows and we know that we’re going to need them in memory anyway, a List<Record> type would be much more convenient.

jOOQ’s org.jooq.Result is such a List, and fortunately, you can easily import any JDBC ResultSet as follows, using DSLContext.fetch(ResultSet):

try (ResultSet rs = stmt.executeQuery()) {
    Result<Record> result = DSL.using(connection).fetch(rs);
    System.out.println(result);
}

With that in mind, you can now access all the nice jOOQ utilities, such as formatting a result, i.e. as TEXT (see the second feature for more details):

|ID|AUTHOR_ID|TITLE      |
|--|---------|-----------|
| 1|        1|1984       |
| 2|        1|Animal Farm|

Of course, the inverse is always possible, as well. Need a JDBC ResultSet from a jOOQ Result? Call Result.intoResultSet() and you can inject dummy results to any application that operates on JDBC ResultSet:

DSLContext ctx = DSL.using(connection);

// Get ready for Java 10 with var!
var result = ctx.newResult(FIRST_NAME, LAST_NAME);
result.add(ctx.newRecord(FIRST_NAME, LAST_NAME)
              .values("John", "Doe"));

// Pretend this is a real ResultSet
try (ResultSet rs = result.intoResultSet()) {
    while (rs.next())
        System.out.println(rs.getString(1) + " " + rs.getString(2));
}

2. Exporting a Result as XML, CSV, JSON, HTML, TEXT, or ASCII Chart

As we saw in the previous section, jOOQ Result types have nice formatting features. Instead of just text, you can also format as XML, CSV, JSON, HTML, and again, TEXT.

The format can usually be adapted to your needs.

For instance, this text format is possible as well:

ID AUTHOR_ID TITLE
------------------------
 1         1 1984
 2         1 Animal Farm

When formatting as CSV, you’ll get:

ID,AUTHOR_ID,TITLE
1,1,1984
2,1,Animal Farm

When formatting as JSON, you might get:

[{"ID":1,"AUTHOR_ID":1,"TITLE":"1984"}, {"ID":2,"AUTHOR_ID":1,"TITLE":"Animal Farm"}]

Or, depending on your specified formatting options, perhaps you’ll prefer the more compact array of array style?

[[1,1,"1984"],[2,1,"Animal Farm"]]

Or XML, again, with various common formatting styles, among which:

<result>
  <record>
    <ID>1</ID>
    <AUTHOR_ID>1</AUTHOR_ID>
    <TITLE>1984</TITLE>
  </record>
  <record>
    <ID>2</ID>
    <AUTHOR_ID>1</AUTHOR_ID>
    <TITLE>Animal Farm</TITLE>
  </record>
</result>

HTML seems kind of obvious. You’ll get:

1 1 1984
2 1 Animal Farm

Or, in code:

<tr><td>1</td><td>1</td><td>1984</td></tr>
<tr><td>2</td><td>1</td><td>Animal Farm</td></tr>

As a bonus, you could even export the Result as an ASCII chart.

These features are obvious additions to ordinary jOOQ queries, but as I’ve shown in Section 1, you can get free exports from JDBC results as well!
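To make one of these formats concrete, here is a tiny plain-Java renderer for the compact array-of-arrays JSON style shown above. This is purely an illustration of the output shape; in practice, jOOQ’s own formatJSON() does this (and escaping) for you, and the class name here is my own:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class ArrayJson {

    // Renders rows in the compact array-of-arrays style: numbers stay bare,
    // everything else is quoted. No string escaping; illustration only.
    static String toJson(List<Object[]> rows) {
        return rows.stream()
            .map(row -> Arrays.stream(row)
                .map(v -> v instanceof Number ? v.toString() : "\"" + v + "\"")
                .collect(Collectors.joining(",", "[", "]")))
            .collect(Collectors.joining(",", "[", "]"));
    }
}
```

Feeding it the two book rows from above yields exactly the compact style shown earlier.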

3. Importing These Text Formats Again

After the previous section’s export capabilities, it’s natural to think about how to import such data again back into a more usable format. For instance, when you write integration tests, you might expect a database query to return a result like this:

ID AUTHOR_ID TITLE
-- --------- -----------
 1         1 1984
 2         1 Animal Farm

Simply import the above textual representation of your result set into an actual jOOQ Result using DSLContext.fetchFromTXT(String), and you can continue operating on a jOOQ Result (or, as illustrated in Section 1, on a JDBC ResultSet!).

Most of the other export formats (except charts, of course) can be imported as well.
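As a plain-Java illustration of what parsing such a fixed-width table involves (this is not jOOQ’s fetchFromTXT implementation, just the idea), the column boundaries can be derived from the dashed separator line:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class TxtTable {

    // Parses the fixed-width TXT format: line 0 is the header,
    // line 1 is the dashed separator, remaining lines are data rows.
    static List<String[]> parse(String txt) {
        String[] lines = txt.split("\n");

        // Each run of dashes in the separator line defines a column span.
        List<int[]> spans = new ArrayList<>();
        Matcher m = Pattern.compile("-+").matcher(lines[1]);
        while (m.find()) spans.add(new int[] {m.start(), m.end()});

        List<String[]> rows = new ArrayList<>();
        for (int i = 2; i < lines.length; i++) {
            String line = lines[i];
            String[] row = new String[spans.size()];
            for (int c = 0; c < spans.size(); c++) {
                int from = Math.min(spans.get(c)[0], line.length());
                int to = Math.min(spans.get(c)[1], line.length());
                row[c] = line.substring(from, to).trim();
            }
            rows.add(row);
        }
        return rows;
    }
}
```

Fixed-width parsing (rather than splitting on whitespace) is what keeps multi-word values like "Animal Farm" in one column.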

Now, don’t you wish for a second that Java had multi-line strings? This would look much nicer:

Result<?> result = ctx.fetchFromTXT(
    "ID AUTHOR_ID TITLE      \n" +
    "-- --------- -----------\n" +
    " 1         1 1984       \n" +
    " 2         1 Animal Farm\n");
ResultSet rs = result.intoResultSet();

These types can now be injected anywhere where a service or DAO produces a jOOQ Result or a JDBC ResultSet. The most obvious application for this is mocking. The second most obvious application is testing. You can easily test that a service produces an expected result of the above form.

Let’s talk about mocking…

4. Mocking JDBC

Sometimes, mocking is cool. With the above tools, it’s only natural for jOOQ to provide a full-fledged, JDBC-based mocking SPI. I’ve written about this feature before and again here.

Essentially, you can implement a single FunctionalInterface called MockDataProvider. The simplest way to create one is through the Mock.of() factory methods, i.e.:

MockDataProvider provider = Mock.of(ctx.fetchFromTXT(
    "ID AUTHOR_ID TITLE      \n" +
    "-- --------- -----------\n" +
    " 1         1 1984       \n" +
    " 2         1 Animal Farm\n"));

This provider simply ignores all the input (queries, bind variables, etc.) and always returns the same simple result set. You can now plug this provider into a MockConnection and use it like any ordinary JDBC connection:

try (Connection c = new MockConnection(provider);
     PreparedStatement s = c.prepareStatement("SELECT foo");
     ResultSet rs = s.executeQuery()) {

    while (rs.next()) {
        System.out.println("ID        : " + rs.getInt(1));
        System.out.println("First name: " + rs.getString(2));
        System.out.println("Last name : " + rs.getString(3));
    }
}

The output being (completely ignoring the SELECT foo statement):

ID : 1
First name: 1
Last name : 1984
ID : 2
First name: 1
Last name : Animal Farm

This client code doesn’t even use jOOQ (although it could)! This means that you can use jOOQ as a JDBC mocking framework on any JDBC-based application, including a Hibernate-based one.
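The trick underneath such a MockConnection, serving canned rows through the plain JDBC interfaces, can be sketched with a JDK dynamic proxy. This is a toy illustration of the idea, not jOOQ’s actual mocking SPI, and the class name is my own:

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class CannedResultSet {

    // Wraps a 2-D array of values in a read-only, forward-only ResultSet
    // via a dynamic proxy. Only the handful of methods a typical consumer
    // calls are implemented; everything else throws.
    static ResultSet of(Object[][] rows) {
        final int[] cursor = {-1};
        return (ResultSet) Proxy.newProxyInstance(
            ResultSet.class.getClassLoader(),
            new Class<?>[] {ResultSet.class},
            (proxy, method, args) -> {
                switch (method.getName()) {
                    case "next":
                        return ++cursor[0] < rows.length;
                    case "getInt":
                        return ((Number) rows[cursor[0]][(Integer) args[0] - 1]).intValue();
                    case "getString":
                        return String.valueOf(rows[cursor[0]][(Integer) args[0] - 1]);
                    case "close":
                        return null;
                    default:
                        throw new UnsupportedOperationException(method.getName());
                }
            });
    }

    // Convenience for demos: read every row as "col1|col2|...".
    static List<String> drain(ResultSet rs, int columns) {
        try {
            List<String> out = new ArrayList<>();
            while (rs.next()) {
                StringBuilder sb = new StringBuilder();
                for (int i = 1; i <= columns; i++) {
                    if (i > 1) sb.append('|');
                    sb.append(rs.getString(i));
                }
                out.add(sb.toString());
            }
            return out;
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Any code that consumes a ResultSet will happily iterate this proxy, which is exactly why a mocked connection can be injected under JDBC-based (or Hibernate-based) code.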

Of course, you don’t always want to return the exact same result. This is why a MockDataProvider offers you an argument with all the query information in it:

try (Connection c = new MockConnection(ctx -> {
    if (ctx.sql().toLowerCase().startsWith("select")) {
        // ...
    }
})) {
    // Do stuff with this connection
}

You can almost implement an entire JDBC driver with a single lambda expression. Read more here. Cool, eh?

Side note: Don’t get me wrong: I don’t think you should mock your entire database layer just because you can. My thoughts are available in this tweet storm.

Speaking of synthetic JDBC connections…

5. Parsing Connections

jOOQ 3.9 introduced a SQL parser whose main use case so far is to parse and reverse-engineer DDL scripts for the code generator.

Another feature that has not been talked about often yet (because still a bit experimental) is the parsing connection, available through DSLContext.parsingConnection(). Again, this is a JDBC Connection implementation that wraps a physical JDBC connection but runs all SQL queries through the jOOQ parser before generating them again.

What’s the point?

Let’s assume for a moment that we’re using SQL Server, which supports the following SQL standard syntax:

SELECT * FROM (VALUES (1), (2), (3)) t(a)

The result is:

 a
---
 1
 2
 3

Now, let’s assume that we are planning to migrate our application to Oracle. We have the following JDBC code that doesn’t work on Oracle because Oracle doesn’t support the above syntax:

try (Connection c = DriverManager.getConnection("...");
     Statement s = c.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT * FROM (VALUES (1), (2), (3)) t(a)")) {

    while (rs.next())
        System.out.println(rs.getInt(1));
}

Now, we have three options (hint #1 sucks; #2 and #3 are cool):

  1. Tediously migrate all such manually written JDBC-based SQL to Oracle syntax and hope we don’t have to migrate back again.
  2. Upgrade our JDBC-based application to use jOOQ instead (that’s the best option, of course, but it also takes some time).
  3. Simply use the jOOQ parsing connection as shown below, and a lot of code will work right out of the box! (And then, of course, gradually migrate to jOOQ — see option #2.)
try (DSLContext ctx = DSL.using("...");
     Connection c = ctx.parsingConnection(); // Magic here
     Statement s = c.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT * FROM (VALUES (1), (2), (3)) t(a)")) {

    while (rs.next())
        System.out.println(rs.getInt(1));
}

We haven’t touched any of our JDBC-based client logic. We’ve only introduced a proxy JDBC connection that runs every statement through the jOOQ parser prior to re-generating the statement on the wrapped, physical JDBC connection.

What’s really executed on Oracle is this emulation here:

select t.a from (
  (select null a from dual where 1 = 0)
  union all
  (select * from (
    (select 1 from dual)
    union all
    (select 2 from dual)
    union all
    (select 3 from dual)
  ) t)
) t

Looks funky, eh? The rationale for this emulation is described here.

Every SQL feature that jOOQ can represent with its API and that it can emulate between databases will be supported! This includes far more trivial things, like parsing this query:

SELECT substring('abcdefg', 2, 4)

… and running this one on Oracle instead:

select substr('abcdefg', 2, 4) from dual
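jOOQ does this by parsing the statement into an AST and re-rendering it for the target dialect. As a toy illustration of the rewrite-before-execute idea only (emphatically not jOOQ’s implementation, and the class name is my own), a naive string-level translation for this one example might look like:

```java
class DialectRewrite {

    // Toy dialect rewrite: rename substring -> substr and append Oracle's
    // mandatory FROM clause when none is present. Real translation needs a
    // parser; simple string patching like this breaks on nontrivial SQL.
    static String toOracle(String sql) {
        String out = sql.replace("substring(", "substr(");
        if (!out.toLowerCase().contains(" from ")) {
            out = out + " from dual";
        }
        return out;
    }
}
```

The point of the parsing connection is that this translation step happens transparently, inside a proxy Connection, before the statement reaches the driver.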


Want to Learn More About jOOQ?

There are many more such nice little things in the jOOQ API that help make you super productive. Some examples include:

Original Link

Too Many PreparedStatement Placeholders in Oracle JDBC

There are multiple causes of the ORA-01745 (“invalid host/bind variable name error”) error when using an Oracle database. The Oracle 9i documentation on errors ORA-01500 through ORA-02098 provides more details regarding ORA-01745. It states that the “cause” is “a colon in a bind variable or INTO specification was followed by an inappropriate name, perhaps a reserved word.”

It also states that the “action” is “change the variable name and retry the operation.” In the corresponding Oracle 12c documentation, however, there is no description of “cause” or “action” for ORA-01745, presumably because there are multiple causes and multiple corresponding actions associated with this message.

In this post, I will focus on one of the perhaps less obvious causes and the corresponding action for that cause.

Some of the common causes for ORA-01745 that I will not be focusing on in this post include using an Oracle database reserved word as an identifier, an extraneous or missing colon or comma, or attempting to bind structure names (rather than variables) to the placeholders.

In addition to the causes just listed, and likely in addition to other potential causes of ORA-01745, another situation that can cause the ORA-01745 error is using too many ? placeholders in a JDBC PreparedStatement with the Oracle database. I will demonstrate in this post that the number of ? placeholders in a PreparedStatement that causes this ORA-01745 error is 65536 (2^16).

I have blogged previously on the ORA-01795 error that occurs when one attempts to include more than 1,000 values in an Oracle SQL IN condition. There are multiple ways to deal with this limitation, and one alternative approach is to use multiple ORs to combine more than 1,000 values. This is typically implemented with a PreparedStatement and a ? placeholder placed in the SQL statement for each value being OR-ed. This PreparedStatement-based alternative employing ? placeholders will only work as long as the number of values being OR-ed together is smaller than 65536.
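Another common workaround for the 1,000-value IN limit is to split the value list into chunks of at most 1,000 and combine several IN lists with OR. A sketch of that approach (the class and method names here are my own, not from the article’s code; the total placeholder count is still subject to the 65536 limit this article demonstrates):

```java
import java.util.ArrayList;
import java.util.List;

class OraclePlaceholders {

    // Split a value list into chunks of at most `size` elements, so each
    // chunk fits Oracle's 1,000-element IN list limit (ORA-01795).
    static <T> List<List<T>> chunks(List<T> values, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < values.size(); i += size) {
            out.add(new ArrayList<>(values.subList(i, Math.min(i + size, values.size()))));
        }
        return out;
    }

    // Builds "col IN (?,?,...) OR col IN (?,?,...)" with one ? per value.
    static String inPredicate(String column, int totalValues, int chunkSize) {
        StringBuilder sb = new StringBuilder();
        for (int done = 0; done < totalValues; done += chunkSize) {
            if (done > 0) sb.append(" OR ");
            int n = Math.min(chunkSize, totalValues - done);
            sb.append(column).append(" IN (");
            for (int i = 0; i < n; i++) sb.append(i == 0 ? "?" : ",?");
            sb.append(")");
        }
        return sb.toString();
    }
}
```

The placeholders would then be set with PreparedStatement.setInt/setString in the same order the chunks were emitted.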

The code listing that follows demonstrates how a SQL query against the Oracle HR schema can be generated to make it easy to reproduce the ORA-01745 error with too many ? placeholders (full code listing is available on GitHub).

Building up a prepared statement with a specified number of ? placeholders:

/**
 * Constructs a query using '?' for placeholders and using
 * as many of these as specified with the int parameter.
 *
 * @param numberPlaceholders Number of placeholders ('?')
 *    to include in WHERE clause of constructed query.
 * @return SQL query that has provided number of '?' placeholders.
 */
private String buildQuery(final int numberPlaceholders)
{
   final StringBuilder builder = new StringBuilder();
   builder.append("SELECT region_id FROM countries WHERE ");
   for (int count = 0; count < numberPlaceholders - 1; count++)
   {
      builder.append("region_id = ? OR ");
   }
   builder.append("region_id = ?");
   return builder.toString();
}

The next code listing demonstrates building a PreparedStatement based on the query constructed in the last code listing and setting its placeholders with a number of consecutive integers that match the number of ? placeholders.

Configuring PreparedStatement‘s ? placeholders:

/**
 * Execute the provided query and populate a PreparedStatement
 * wrapping this query with the number of integers provided
 * as the second method argument.
 *
 * @param query Query to be executed.
 * @param numberValues Number of placeholders to be set in the
 *    instance of {@code PreparedStatement} used to execute the
 *    provided query.
 */
private void executeQuery(final String query, final int numberValues)
{
   try (final Connection connection = getDatabaseConnection();
        final PreparedStatement statement = connection.prepareStatement(query))
   {
      for (int count = 0; count < numberValues; count++)
      {
         statement.setInt(count + 1, count + 1);
      }
      final ResultSet rs = statement.executeQuery();
      while (rs.next())
      {
         out.println("Region ID: " + rs.getLong(1));
      }
   }
   catch (SQLException sqlException)
   {
      out.println("ERROR: Unable to execute query - " + sqlException);
   }
}

When the number of ? placeholders reaches 65536, executing the query fails with the ORA-01745 error.

This example shows that there is a maximum number of ? placeholders that can be used in an Oracle SQL statement. Fortunately, there are other ways to accomplish this type of functionality that do not have this ORA-01745 limit of 65536 ? placeholders or the 1,000-element IN list limit that causes an ORA-01795 error.

Original Link

Ingesting RDBMS Data as New Tables Arrive in Hive

Let’s say that a company wants to know when new tables are added to a JDBC source (say, an RDBMS). Using the ListDatabaseTables processor, we can get a list of tables (and also views, system tables, and other database objects), but for our purposes, we want tables with data. I have used the ngdbc.jar from SAP HANA to connect and query tables with ease.

For today’s example, I am connecting to MySQL, as I have a MySQL database available for use and modification.


mysql -u root -p test < person.sql
CREATE USER 'nifi'@'%' IDENTIFIED BY 'reallylongDifficultPassDF&^D&F^Dwird';
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| mock2          |
| personReg      |
| visitor        |
+----------------+
4 rows in set (0.00 sec)

I created a user to use for my JDBC Connection Pool in NiFi to read the metadata and data.

These table names will show up in NiFi as flowfile attributes.

Step 1

ListDatabaseTables: Let’s get a list of all the tables in MySQL for the database we have chosen.

After it starts running, you can check its state, see what tables were ingested, and see the most recent timestamp (Value).

We will get back what catalog we read from, how many tables there are, and each table name and its full name.

HDF NiFi supports generic JDBC drivers and specific coding for Oracle, MS SQL Server 2008, and MS SQL Server 2012+.

Step 2

GenerateTableFetch: using the table name returned by the ListDatabaseTables processor.

Step 3

We use ExtractText to get the SQL statement created by GenerateTableFetch.

We add a new attribute, sql.

Step 4

ExecuteSQL with that ${sql} attribute.

Step 5

Convert AVRO files produced by ExecuteSQL into performant Apache ORC files:

Step 6

PutHDFS to store these ORC files in Hadoop.

I added the table name as part of the directory structure, so a new directory is created for each transferred table. Now, we have dynamic HDFS directory structure creation.

Step 7

Replace the text to build a SQL statement that will generate an external Hive table on our new ORC directory.

Step 8

PutHiveQL to execute the statement that was just dynamically created for our new table.

We now have instantly queryable Hadoop data available to Hive, SparkSQL, Zeppelin, ODBC, JDBC, and a ton of BI tools and interfaces.

Step 9

Finally, we can look at the data that we have ingested from MySQL into new Hive tables.

That was easy! The best part is that as new tables are added to MySQL, they will be auto-ingested into HDFS and Hive tables.
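The Hive DDL that Step 7’s ReplaceText assembles can be sketched as a small builder. The table name, column list, and base path below are illustrative placeholders, not the actual flow configuration:

```java
class HiveDdl {

    // Sketch of the statement built in Step 7: an external Hive table
    // over the per-table ORC directory created by PutHDFS.
    static String externalOrcTable(String table, String columns, String basePath) {
        return "CREATE EXTERNAL TABLE IF NOT EXISTS " + table
            + " (" + columns + ") STORED AS ORC LOCATION '"
            + basePath + "/" + table + "'";
    }
}
```

Because the directory name and table name are both derived from the incoming table-name attribute, each newly discovered table gets its own location and its own external table.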

Future Updates

Use Hive merge capabilities to update changed data. We can also ingest to Phoenix/HBase and use the upsert DML.

Test with other databases. Tested with MySQL.

Quick tip (HANA): In NiFi, refer to tables with their full name in quotes: "SAP_"."ASDSAD.SDASD.ASDAD".

Original Link

Accessing Data Using JDBC on AWS Glue

AWS Glue is an Extract, Transform, Load (ETL) service available as part of Amazon’s hosted web services. Glue is intended to make it easy for users to connect their data in a variety of data stores, edit and clean the data as needed, and load the data into an AWS-provisioned store for a unified view.

Glue supports accessing data via JDBC, and currently, the databases supported through JDBC are Postgres, MySQL, Redshift, and Aurora. Of course, JDBC drivers exist for many other databases besides these four. Using the DataDirect JDBC connectors, you can access many other data sources for use in AWS Glue.

This tutorial demonstrates accessing Salesforce data with AWS Glue, but the same steps apply with any of the DataDirect JDBC drivers.

Download DataDirect Salesforce JDBC Driver

Download DataDirect Salesforce JDBC driver from here.

To install the driver, execute the JAR package, either by running it from the terminal or just by double-clicking on it.


This will launch an interactive Java installer using which you can install the Salesforce JDBC driver to your desired location as either a licensed or evaluation installation.

Note that this will install the Salesforce JDBC driver, along with a bunch of other drivers for your trial purposes, in the same folder.

Upload DataDirect Salesforce Driver to Amazon S3

  1. Navigate to the install location of the DataDirect JDBC drivers and locate the DataDirect Salesforce JDBC driver file, named sforce.jar. 
  2. Upload the Salesforce JDBC JAR file to Amazon S3.

Create Amazon Glue Job

Go to AWS Glue Console on your browser, under ETL > Jobs, click on the Add Job button to create a new job. You should see an interface as shown below.

  • Fill in the name of the job, and choose/create an IAM role that gives permissions to your Amazon S3 sources, targets, temporary directory, scripts, and any libraries used by the job. For this tutorial, we just need access to Amazon S3, as I have my JDBC driver and the destination will also be S3.

  • Choose A new script to be authored by you under This job runs options.

  • Give a name for your script and choose a temporary directory for Glue Job in S3.

Under Script Libraries and job parameters (optional), for Dependent Jars path, choose the sforce.jar file in your S3. Your configuration should look as shown below.

  • Click the Next button and you should see Glue asking if you want to add any connections that might be required by the job. In this tutorial, we don’t need any connections, but if you plan to use another destination such as RedShift, SQL Server, Oracle, etc., you can create the connections to these data sources in your Glue and those connections will show up here.

  • Click on Next, review your configuration, and click Finish to create the job.

  • You should now see an editor to write a Python script for the job. Here, you write your custom Python code to extract data from Salesforce using DataDirect JDBC driver and write it to S3 or any other destination.

You can use this code sample to get an idea of how you can extract data from Salesforce using the DataDirect JDBC driver and write it to S3 in CSV format. Feel free to make any changes to suit your needs. Save the job.

Run Glue Job

  1. Click on the Run Job button to start the job. You can see the status by going back and selecting the job that you have created.
  2. After the Job has run successfully, you should now have a CSV file in S3 with the data that you have extracted using Salesforce DataDirect JDBC driver.

You can use similar steps with any of DataDirect JDBC suite of drivers available for relational, big data, SaaS, and NoSQL Data sources. Feel free to try any of our drivers with AWS Glue for your ETL jobs for a 15-day trial period.

Original Link

Using MySQL JDBC Driver With Spring Boot

In this article, I will show you how to connect a MySQL database with your Spring Boot application.

All the code is available on GitHub!

Tools used in this article include:

  • Spring Boot 1.5.6 release
  • MySQL 5.7.X
  • Maven
  • Java 8
  • Spring Data JPA

Project Structure

The project structure is a typical Maven structure:

Project structure Maven

Project Dependencies

Please note that the parent needs to be declared. If you are using Spring Tool Suite, you can click Spring Starter Project and it will populate this for you.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.michaelcgood</groupId>
    <artifactId>mysql-jdbc</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>mysql-jdbc-driver</name>
    <description>mysql jdbc driver example</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.6.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>


For this example application, our application will be “tracking” the last security audit of systems within a network. As this example application is meant to be simple, there will be minimal fields for the model.

Please note that there is a built-in System class in the Java library. For this reason, I would avoid using System as a class name for a real application.

package com.michaelcgood.model;

import java.util.Date;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class System {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private long id;

    private String name;

    private Date lastaudit;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Date getLastaudit() {
        return lastaudit;
    }

    public void setLastaudit(Date lastaudit) {
        this.lastaudit = lastaudit;
    }

    public String toString() {
        return id + " | " + name + " | " + lastaudit;
    }
}


This is a simple CrudRepository, which is an interface that allows us to do CRUD (create, read, update, delete) operations.

package com.michaelcgood.dao;

import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

import com.michaelcgood.model.System;

@Repository
public interface SystemRepository extends CrudRepository<System, Long> {
}

Database Initialization

Spring Boot enables the dataSource initializer by default and loads SQL scripts (schema.sql and data.sql) from the root of the classpath.


Here we create the SQL file that our application will use for the Table schema:



We insert example values into our database:

INSERT INTO system(name,lastaudit)VALUES('Windows Server 2012 R2 ','2017-08-11');
INSERT INTO system(name,lastaudit)VALUES('RHEL 7','2017-07-21');
INSERT INTO system(name,lastaudit)VALUES('Solaris 11','2017-08-13');


This XML file is used to configure our logging:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n
            </Pattern>
        </layout>
    </appender>
    <logger name="org.springframework.jdbc" level="error" additivity="false">
        <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="com.michaelcgood" level="error" additivity="false">
        <appender-ref ref="STDOUT"/>
    </logger>
    <root level="error">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>


We configure our datasource and JPA settings.

#==== connect to mysql ======#
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect


CommandLineRunner is implemented in order to execute command line arguments for this example.

package com.michaelcgood;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;

import com.michaelcgood.dao.SystemRepository;

@SpringBootApplication
public class MysqlJdbcDriverApplication implements CommandLineRunner {

    @Autowired
    DataSource dataSource;

    @Autowired
    SystemRepository systemRepository;

    public static void main(String[] args) {
        SpringApplication.run(MysqlJdbcDriverApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        System.out.println("Our DataSource is = " + dataSource);
        Iterable<com.michaelcgood.model.System> systemlist = systemRepository.findAll();
        for (com.michaelcgood.model.System systemmodel : systemlist) {
            System.out.println("Here is a system: " + systemmodel.toString());
        }
    }
}


  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.6.RELEASE)

Our DataSource is = org.apache.tomcat.jdbc.pool.DataSource@40f70521{ConnectionPool[defaultAutoCommit=null; defaultReadOnly=null; defaultTransactionIsolation=-1; defaultCatalog=null; driverClassName=com.mysql.jdbc.Driver; maxActive=100; maxIdle=100; minIdle=10; initialSize=10; maxWait=30000; testOnBorrow=true; testOnReturn=false; timeBetweenEvictionRunsMillis=5000; numTestsPerEvictionRun=0; minEvictableIdleTimeMillis=60000; testWhileIdle=false; testOnConnect=false; password=********; url=jdbc:mysql://localhost:3306/mysqltutorial?useSSL=false; username=root; validationQuery=SELECT 1; validationQueryTimeout=-1; validatorClassName=null; validationInterval=3000; accessToUnderlyingConnectionAllowed=true; removeAbandoned=false; removeAbandonedTimeout=60; logAbandoned=false; connectionProperties=null; initSQL=null; jdbcInterceptors=null; jmxEnabled=true; fairQueue=true; useEquals=true; abandonWhenPercentageFull=0; maxAge=0; useLock=false; dataSource=null; dataSourceJNDI=null; suspectTimeout=0; alternateUsernameAllowed=false; commitOnReturn=false; rollbackOnReturn=false; useDisposableConnectionFacade=true; logValidationErrors=false; propagateInterruptState=false; ignoreExceptionOnPreLoad=false; useStatementFacade=true; }
Here is a system: 1 | Windows Server 2012 R2 | 2017-08-11 00:00:00.0
Here is a system: 2 | RHEL 7 | 2017-07-21 00:00:00.0
Here is a system: 3 | Solaris 11 | 2017-08-13 00:00:00.0

Again, the full code is on GitHub!

Original Link

JDBC Master-Slave Persistence Setup With ActiveMQ Using PostgreSQL

This article will help in setting up JDBC master/slave for embedded ActiveMQ in Red Hat JBoss Fuse/AMQ 6.3 with PostgreSQL database from scratch.

Try to search for a PostgreSQL database in RHEL using this command:

yum list postgre*
Loaded plugins: product-id, refresh-packagekit, search-disabled-repos, security, subscription-
: manager
Available Packages
postgresql.x86_64 8.4.20-7.el6 @rhel-6-workstation-rpms
postgresql-libs.x86_64 8.4.20-7.el6 @rhel-6-workstation-rpms
postgresql-server.x86_64 8.4.20-7.el6 @rhel-6-workstation-rpms
postgresql.i686 8.4.20-7.el6 rhel-6-workstation-rpms
postgresql-contrib.x86_64 8.4.20-7.el6 rhel-6-workstation-rpms

Install the available package:

yum install postgresql-server.x86_64

This will install the PostgreSQL database and create a user called postgres. As this user, one can access PostgreSQL. The root user can change the password if required for this user with the following command:

passwd postgres

Now, switch to the postgres user. Then run the psql command.

su - postgres

Create a schema activemq with username and password activemq. Also, provide this schema access to connect with user activemq.

postgres=# create ROLE activemq LOGIN PASSWORD 'activemq' SUPERUSER;
postgres=# CREATE DATABASE activemq WITH OWNER = activemq;
postgres=# GRANT CONNECT ON DATABASE activemq TO activemq;

We have to provide access to remote applications to connect to PostgreSQL. To open database port 5432 to the remote application, we will have to edit postgresql.conf and set listen_addresses to *.

[root@vm252-99 cpandey]# vi /var/lib/pgsql/data/postgresql.conf

To whitelist remote IPs to connect to the PostgreSQL server, we will have to edit pg_hba.conf.

[root@vm252-99 cpandey]# vi /var/lib/pgsql/data/pg_hba.conf

# all remote connections
host    all    all    md5

Stop and start the PostgreSQL server.

[root@vm252-99 cpandey]# service postgresql stop
Stopping postgresql service: [ OK ]
[root@vm252-99 cpandey]# service postgresql start
Starting postgresql service:

Now, we edit the broker and configure activemq.xml to have the following configuration:

<beans...>
    <broker... brokerName="testPostgre1"....>
        ----
        <persistenceAdapter>
            <jdbcPersistenceAdapter dataSource="#postgres-ds" lockKeepAlivePeriod="5000">
                <locker>
                    <lease-database-locker lockAcquireSleepInterval="10000"/>
                </locker>
            </jdbcPersistenceAdapter>
        </persistenceAdapter>
        ----
    </broker>

    <bean id="postgres-ds" class="org.postgresql.ds.PGPoolingDataSource">
        <property name="url" value="jdbc:postgresql://"/>
        <property name="user" value="activemq"/>
        <property name="password" value="activemq"/>
        <property name="initialConnections" value="1"/>
        <property name="maxConnections" value="10"/>
    </bean>
    ----
</beans>

Some points to note in the step above:

  • Remember to set brokerName. It should be unique for each broker.
  • persistenceAdapter, referring to datasource #postgres-ds, is set with lease-database-locker within the broker XML tag.
  • The postgres-ds datasource is defined as a bean outside of the broker tag and within the beans tag.

Restart brokers to obtain the lock, then check the database.

[cpandey@vm252-99 ~]$ su - postgres
-bash-4.1$ psql
# connect to activemq schema
postgres=# \c activemq
# check tables
activemq=# \dt
          List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+----------
 public | activemq_acks | table | activemq
 public | activemq_lock | table | activemq
 public | activemq_msgs | table | activemq
(3 rows)
# run select query.
activemq=# select * from activemq_lock;
 id |     time      | broker_name
----+---------------+--------------
  1 | 1506947282760 | testPostgre1

Above, broker testPostgre1 has occupied the lock.

I hope this article helps you understand and set up JDBC persistence for ActiveMQ using the PostgreSQL database.

Original Link