
Building Enterprise Java Applications the Spring Way

I think it is fair to say that Java EE has gained a pretty bad reputation among Java developers. Despite the fact that it has certainly improved on all fronts over the years and even moved to the Eclipse Foundation to become Jakarta EE, its bitter taste is still quite strong. On the other side, we have the Spring Framework (or, to reflect the reality better, a full-fledged Spring Platform): a brilliant, lightweight, fast, innovative, and hyper-productive Java EE replacement. So why bother with Java EE?

We are going to answer this question by showing how easy it is to build modern Java applications using most of the Java EE specs. And the key ingredient to succeeding here is Eclipse MicroProfile: enterprise Java in the age of microservices.

Original Link

JVM Ecosystem Survey: Why Devs Aren’t Switching to Java 11

Last week, Oracle’s Java Magazine and Snyk released the JVM Ecosystem Report. This survey talked to over 10,000 developers across the globe about their choice of JVM languages, platforms, tools, processes, and applications.

This report shows that 88 percent of developers are still using Java 7 or 8 in their main application, with 8 percent using Java 9 and 10. Since Java 11 is the most recent version of the JDK, this brings up the question: why aren’t developers switching to more recent versions?

Original Link

Operating and Scaling Java EE Apps on DC/OS

In the prior blog posts in this series, we provided a step-by-step tutorial for migrating legacy Java EE applications to DC/OS in order to gain the benefits of the DC/OS platform without requiring any code modifications. Assuming you’ve followed our instructions, your application has been successfully migrated and deployed, is running and healthy, and end users are happily enjoying its functionality. With the migration complete, our focus now shifts towards Day 2 Operations and ensuring the application scales to meet demand while remaining healthy and highly available.

If you recall, recoding the legacy application and redeploying WebLogic instances for high availability (HA) was an impossibility due to budget and schedule constraints.

Original Link

A Step-by-Step Guide to Migrating Java EE Apps to DC/OS

In the prior blog post, we discussed the benefits of migrating legacy Java EE applications to a modern platform such as Mesosphere DC/OS. This blog post presents a set of concrete steps to make this possible.

For our migration, we have a legacy Java EE application running on Oracle’s WebLogic. We don’t have time to re-code the application (although our team has already requested resources for a complete re-write to Spark or Node) and the application is currently chewing up a lot of CPU and memory resources running in a virtual machine. Our goal is to migrate the app to a modern platform quickly and without changes. This tutorial walks you through the steps needed to accomplish this:

Original Link

Solving Java EE Nightmares Without Docker Containers or Microservices

Developers and application owners have many new tools and technologies such as microservices, Docker containers, CI/CD, and DevOps that help them produce better software, faster. Yet, the sad truth is that most organizations rely on untold numbers of legacy applications, many of which are Java EE, to power mission-critical systems that can’t be migrated to some of the emerging technologies and processes.

Legacy Java EE apps are almost a necessary evil, providing core business functions while forcing IT teams to face myriad operational problems, including planning for and addressing scaling challenges, managing inefficient and unpredictable resource consumption, protecting unsecured confidential information, and applying patches and restarting applications without service disruption.

Original Link

Top Five Java EE Courses to Learn Online

Java Platform, Enterprise Edition 8 (Java EE 8) was released last year, along with Java 9, in September 2017. If you are a Java developer or someone who wants to learn Java EE for web development and you are looking for some courses to kickstart your learning, then you have come to the right place. In this article, I am going to share five awesome Java EE courses that cover both Java EE 7 and Java EE 8. If you are wondering what Java EE is and what Java EE 8 brings to the table, let me give you a brief overview of Java EE.

Java EE is actually a collection of Java technologies and APIs designed to support "enterprise" applications, which can generally be classified as large-scale, distributed, transactional, and highly available applications that support mission-critical business requirements.

Original Link

Java EE Adoption Interview With Hamed Hatami

One of the most important things to continue to do for Java EE/Jakarta EE is to highlight successful adoption stories at a regular cadence. The community has been doing just that for a long time. A number of these stories are curated here. In this vein, Hamed Hatami graciously agreed to share his Java EE adoption story. Hamed is a very experienced early adopter of Java EE who has developed a number of mission-critical enterprise applications in Iran and now Sweden.

Can you kindly introduce yourself?

Original Link

Does the Spirit of JavaOne Live on Inside Oracle Code One?

In this entry, I want to share some high-level observations regarding the renaming of JavaOne to Oracle Code One. My accepted sessions also highlight some important Java EE content at Oracle Code One.

Observations

Some of you may already be aware that Oracle has decided to rename JavaOne to Oracle Code One. Like many people in the community, when I became aware of this change, I must admit that I expected the worst regarding Java EE content, specifically, and Java content, generally. Nonetheless, I am not one to base judgments on sentiments alone but wanted to see concrete numbers on how the renaming played out. An important data point in this regard is the makeup of the content compared to the previous few years of JavaOne. Another important bellwether is the number and quality of submissions (more on that a bit later). I am relieved to say the worst did not come to pass — at least for this year. I’ll explain why.

Original Link

Starting a Career in Java Development

Starting a Java development career can be a great choice given the number of job openings that pop up day after day. Java has been around for a while, and a tremendous number of companies already have a Java system in place. This means that there are a lot of development opportunities.

Java is compiled, type-safe, and fast. I mean really fast! I have, myself, migrated a fairly complex project from Python to Java and another extremely complex system from .Net to Java. The main reasons were performance improvement and platform independence.

Original Link

MicroProfile Fault Tolerance With Java EE [Video]

I’ve recorded a video on how to make Java EE applications more resilient using MicroProfile Fault Tolerance.

In this video, I’ll show you how MicroProfile Fault Tolerance complements existing Java EE applications in regard to resiliency.

If you want to learn more, have a look at the example project.
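
As a taste of what the API looks like, here is a minimal, hypothetical sketch of a service guarded by MicroProfile Fault Tolerance annotations (the WeatherService class and its methods are illustrative, not taken from the video):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class WeatherService {

    // retry a flaky remote call up to three times, abort after 2 seconds,
    // and fall back to a default value if all attempts fail
    @Retry(maxRetries = 3)
    @Timeout(2000)
    @Fallback(fallbackMethod = "defaultForecast")
    public String forecast() {
        // the call to the remote system would go here
        throw new IllegalStateException("Remote system not reachable");
    }

    // the fallback method must match the signature of the guarded method
    public String defaultForecast() {
        return "unknown";
    }
}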

Original Link

17 Popular Java Frameworks: Pros, Cons, and More (Part 2)

In 2018, Java is still the most popular programming language in the world. It comes with a vast ecosystem and more than 9 million Java developers worldwide. Although Java is not the most straightforward language, you don’t have to write Java programs from scratch. There are many excellent Java frameworks to write web and mobile applications, microservices, and REST APIs that run on the Java Virtual Machine.

Java frameworks allow you to focus on the business logic of your apps instead of writing basic functionality such as making database connections or handling exceptions. Also, if you have some experience with Java, you can get started quickly. The frameworks all use the same syntax and work with similar terms, paradigms, and concepts.

Our top 17 Java frameworks are based on usage through 2018 and listed alphabetically. Here is the second installment of the top Java frameworks.

Play: Reactive Web and Mobile Framework for Highly Scalable Java Applications

The Play framework makes it possible to build lightweight, web-friendly Java and Scala applications for desktop and mobile interfaces. Play is an incredibly popular framework, used by companies like LinkedIn, Samsung, Walmart, The Guardian, Verizon, and many others.

Play is often compared to powerful web frameworks of other programming languages, such as Ruby on Rails for Ruby, or Django for Python. In fact, Play is a unique Java framework in the sense that it doesn’t rely on the Java EE standards. Instead, it intends to eliminate all the inconveniences of traditional Java web development, such as slow development cycles and too much configuration, and to resemble the web frameworks of scripting languages (PHP, Python, Ruby, etc.) as closely as possible.

Under the hood, Play is built on top of the Akka toolkit that simplifies the creation of concurrent and distributed applications on the Java Virtual Machine. As a result, Play uses a fully asynchronous model that leads to better scalability, especially because it also follows the statelessness principle.

The Play framework puts developer productivity first by offering features like hot code reloading, convention over configuration, and error messages in the browser. Besides, it’s a Reactive System that follows a modern system architecture (responsive, resilient, elastic, and message-driven) to achieve more flexible and failure-tolerant results.
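
For a rough illustration of that asynchronous style, here is a minimal, hypothetical Play controller in Java (the class name and message are assumptions):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import play.mvc.Controller;
import play.mvc.Result;

public class HomeController extends Controller {

    // returning a CompletionStage keeps the request thread free while the result is computed
    public CompletionStage<Result> index() {
        return CompletableFuture.supplyAsync(() -> ok("Hello from Play!"));
    }
}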

PrimeFaces: UI Framework for Java EE and JavaServer Faces

PrimeFaces is a popular web framework for creating lightweight user interfaces for Java EE and JavaServer Faces (see above) applications. It’s used by many Fortune 500 companies, government entities, and educational institutions.

The PrimeFaces library is truly lightweight. It’s packaged as a single JAR file, requires zero configuration, and doesn’t have any dependencies. It allows you to create a user interface for your Java application by offering you a rich set of components (100+), a built-in skinning framework, and pre-designed themes and layouts. As PrimeFaces is built on top of JavaServer Faces, it inherits features such as rapid application development. You can also add the framework to any Java projects.

On the PrimeFaces website, you can find an excellent showcase of all PrimeFaces components, templates, and themes. The components come with relevant code snippets you can quickly copy/paste into your app or tweak them when it’s necessary. For instance, here is a horizontal mega menu that lets you display submenus of root items together.

PrimeFaces also has an awesome theme designer that is a Sass-based theme engine with more than 500 variables, a sample theme, and font icons. And, if you don’t want to build a theme yourself, you can also download a community theme or purchase a premium one from the PrimeFaces Theme Gallery.

Spark Framework: Micro Framework for Web Apps and REST APIs

Spark Framework is a micro framework and domain-specific language for the Java and Kotlin programming languages. Kotlin also runs on the JVM, and it’s 100 percent interoperable with Java. With Spark, you can painlessly develop web applications, microservices, and REST APIs.

Micro frameworks first appeared in scripting languages like Ruby and PHP and quickly gained traction due to their focus on development speed and simplicity. Spark was inspired by the Sinatra web application framework for Ruby and first released in 2011. It’s not an MVC framework, but it lets you structure your app as you want. As with most micro frameworks, it has a small code base, needs minimal configuration, and doesn’t require you to write too much boilerplate code.

In fact, you can get the Spark framework up and running in just a few minutes. By default, it runs on the Jetty web server that is embedded into the framework. However, you can use it with other Java web servers as well. According to Spark’s own survey, more than 50 percent of their users used the framework to create REST APIs, which can be seen as its most popular use case. Spark also powers high-traffic web applications serving more than 10,000 users a day.
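
To show just how little code is needed, here is a minimal, hypothetical Spark endpoint (the route and port are illustrative):

import static spark.Spark.get;
import static spark.Spark.port;

public class HelloApi {

    public static void main(String[] args) {
        port(4567); // explicitly set the port of the embedded Jetty server
        // a single route declaration is a complete, runnable web application
        get("/hello", (request, response) -> "Hello, Spark!");
    }
}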

Spring Framework: Enterprise-level Java Application Framework

The Spring Framework is probably the most well-known Java framework out there, with a huge ecosystem and an active community around it. It allows you to build enterprise-level Java applications, web services, and microservices.

The Spring Framework started as a dependency injection tool, but, over the years, it has developed into a full-scale application framework. It provides you with an all-inclusive programming and configuration model that comes with support for generic tasks such as establishing a database connection or handling exceptions. Besides Java, you can also use the framework together with Kotlin and Groovy, both of which run on the Java Virtual Machine.

The Spring Framework utilizes the inversion of control (IoC) software design principle according to which the framework controls the custom-written code (as opposed to traditional programming where the custom code calls into other libraries that handle generic tasks). As a result, you can create loosely coupled modules for your Spring applications.
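
As a minimal sketch of that principle, assuming plain annotation-based configuration (all class names here are illustrative), the container wires the dependency rather than the code instantiating it:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.stereotype.Component;

interface GreetingService {
    String greet();
}

@Component
class EnglishGreetingService implements GreetingService {
    public String greet() {
        return "Hello from Spring!";
    }
}

@Component
class Greeter {

    private final GreetingService service;

    @Autowired // the container provides an implementation; Greeter never instantiates one
    Greeter(GreetingService service) {
        this.service = service;
    }

    String greeting() {
        return service.greet();
    }
}

@ComponentScan
public class App {

    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(App.class)) {
            System.out.println(context.getBean(Greeter.class).greeting());
        }
    }
}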

While the Spring Framework is excellent for building enterprise-level Java applications, it does have a steep learning curve. This is because it’s a broad framework that intends to provide a solution for every task that may come up with an enterprise-level application and also supports many different platforms. Therefore, the configuration, setup, build, and deployment processes all require multiple steps you might not want to deal with, especially if you are working on a smaller project. Spring Boot (not to be confused with the Spring Framework itself) is a solution for this problem, as it allows you to set up your Spring application faster, with much less configuration.

Struts: MVC Framework for Enterprise-level Java Applications

Struts is a full-featured Java web application framework maintained and developed by the Apache Software Foundation. It’s a solid platform with a vast community, often compared to the Spring Framework. Struts allows you to create enterprise-level Java applications that are easy to maintain over time.

It follows the MVC software design pattern and has a plugin-based architecture. Plugins make it possible to extend the framework to fit with different project needs. Struts plugins are basic JAR packages. Therefore, they are portable and you can also add them to the classpath of your app. Some plugins are bundled with the framework (JSON plugin, REST plugin, Config Browser Plugin, etc.), while you can add others from third-party sources.
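
For context, a minimal, hypothetical Struts 2 action could look like the sketch below; the returned result name would be mapped to a JSP or another view in the configuration:

import com.opensymphony.xwork2.ActionSupport;

public class HelloAction extends ActionSupport {

    private String message;

    // the framework invokes execute() and uses the returned result name to pick a view
    @Override
    public String execute() {
        message = "Hello from Struts!";
        return SUCCESS;
    }

    public String getMessage() {
        return message;
    }
}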

You can integrate Struts with other Java frameworks to perform tasks that are not built into the platform. For instance, you can use the Spring plugin for dependency injection or the Hibernate plugin for object-relational mapping. Struts also lets you use different client-side technologies to build the front-end of your app, such as JavaServer Pages or HTML with Angular.

However, if you want to create server-side components that can render on the front-end, Struts may not be the best choice for that. Instead, you should look into a framework that has a different architecture, such as Tapestry or Wicket (see both below). Also, note that Struts received bad press recently due to some critical security vulnerabilities you still need to be aware of.

Tapestry: Component-oriented Framework for Highly Scalable Apps

Tapestry is a component-based Java framework with which you can create scalable web applications. Its focus on reusable components makes it architecturally similar to JavaServer Faces and the Wicket framework. Just like Struts, Tapestry is also a project of the Apache Software Foundation.

You can write Tapestry pages and components as plain old Java objects (POJOs). Therefore, you can access the whole Java ecosystem from the framework. Besides Java, Tapestry also supports Groovy and Scala and integrates with other Java frameworks, such as Hibernate and Spring. Tapestry has been built with performance in mind. Therefore, it provides you with features like live class reloading, exception reporting, Ajax support, and built-in components and templates.

Tapestry is a developer-friendly framework as well. It has built-in utilities to facilitate test-driven development (TDD) and comes with support for the Selenium testing framework. Tapestry scales nicely both on single servers and server clusters. Apps built with Tapestry run fast in the browser, as it follows a bunch of best practices such as client-side caching, support for concurrent threads, JavaScript aggregation and compression, integrated GZip content compression, and others.

Vaadin: Web Application Framework With a Focus on UX, Accessibility, and Mobile

Vaadin provides you with a platform for streamlined Java development. It allows you to build web applications out of customizable components, with a focus on performance, UX, and accessibility.

The most interesting thing to know about Vaadin is that its latest release (just a few days ago, in June 2018) has been so significant that even major media outlets reported on it. Vaadin 10 approaches web app development in an entirely new way: it gives developers direct access to the DOM from the Java Virtual Machine. With the new release, the Vaadin team split the previously monolithic framework into two parts: a lightweight Java framework called Vaadin Flow that handles routing and server-client communication, and a set of UI components that run in the user’s browser.

The components are mobile-first and follow the latest web and accessibility standards; they were built on the Web Components standards. You can use Vaadin components together with any front-end framework such as React, Angular, or Vue. The creators also recommend them as building blocks for Progressive Web Apps. You can build your own theme based on Vaadin components or use Vaadin’s two pre-made themes: Lumo (default) and Material.

Vaadin Flow provides you with a high-level Java API to manage all the technical aspects of your app, from automatic server-client communication via WebSockets to data binding. As Flow runs on the JVM, you have access to the whole Java ecosystem. For instance, you can run your app with Spring Boot. Flow also lets you write your app in Kotlin or Scala.
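
A minimal, hypothetical Vaadin Flow view in plain Java could look like this (the class name and route are assumptions):

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

@Route("") // served at the application root
public class MainView extends VerticalLayout {

    public MainView() {
        // the UI is composed in Java on the server; Flow renders it in the browser
        add(new Button("Click me", event -> Notification.show("Hello from Vaadin Flow!")));
    }
}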

Vert.x: Polyglot Event-driven Application Framework for the Java Virtual Machine

Vert.x is a polyglot framework running on the Java Virtual Machine. It allows you to write your apps in several programming languages, such as Java, JavaScript, Groovy, Ruby, Scala, and Kotlin. Its event-driven architecture results in applications that scale nicely even with minimal hardware resources.

Vert.x is developed and maintained by the Eclipse Foundation, whose most famous project is the Eclipse IDE for Java development. And who would know more about Java than the creator of Eclipse? The ‘x’ in Vert.x refers to its polyglot nature, meaning that you can write valid code in several different languages. It provides idiomatic APIs for every supported programming language.

As Vert.x is an event-driven and non-blocking framework, it can handle a lot of concurrency using only a minimal number of threads. Vert.x is also quite lightweight, with the core framework weighing only about 650 KB. It has a modular architecture that allows you to use only the modules you need so that your app can stay as lean as possible. Vert.x is an ideal choice if you want to build lightweight, highly scalable microservices.
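
As a small taste of that event-driven style, here is a minimal, hypothetical Vert.x HTTP server in Java:

import io.vertx.core.Vertx;

public class Server {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // the request handler runs on an event loop thread and must never block
        vertx.createHttpServer()
             .requestHandler(request -> request.response().end("Hello from Vert.x!"))
             .listen(8080);
    }
}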

Wicket: Component-based Web Application Framework for Purists

Wicket is a component-based web application framework similar to JavaServer Faces and Tapestry. It allows you to write elegant, user-friendly apps using pure Java and HTML code. The framework is maintained by the Apache Software Foundation, just like Struts and Tapestry.

As Wicket is a component-based framework, Wicket apps are made up of reusable pages and components such as images, buttons, links, forms, and others. Programming a Wicket app centers around POJOs; therefore, components are ordinary Java objects with object-oriented features such as encapsulation and inheritance. Components are bundled as reusable packages, so you can add custom CSS and JavaScript to them.
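
For illustration, a minimal, hypothetical Wicket page could look like this; it assumes a matching HelloPage.html template containing an element with wicket:id="message":

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HelloPage extends WebPage {

    public HelloPage() {
        // the component id must match a wicket:id attribute in the HTML template
        add(new Label("message", "Hello from Wicket!"));
    }
}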

Wicket lets you internationalize your apps, pages, and components by providing out-of-the-box support for more than 25 languages. Its built-in Ajax functionality allows you to update parts of your page in real-time, without requiring you to write any JavaScript code. Wicket pays attention to secure URL handling as well. Component paths are session-relative, and URLs don’t reveal any sensitive information. If you want to see how Wicket works in real life, check out the Built with Apache Wicket blog where you can see some nice examples.

Conclusion

When it comes to Java frameworks, keep an open mind and conduct research to find which one is best for you. There are so many frameworks that could suit your project, so use this guide to assess your needs.

Original Link

How to Use the Metronic Theme in Java EE and Integrate It in JSF

If you head to ThemeForest, you will find a lot of HTML admin themes. For example, let’s pick Metronic. If you’re wondering how to use it as the front-end of your Java EE 7/8 application, here is how to do it! You will use JDK 8, Eclipse Oxygen, and GlassFish 4.1. The first step is to create a dynamic web project.


Next, setup the web.xml:

<?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd" id="WebApp_ID" version="3.1"> <display-name>Metronix-JSF</display-name> <context-param> <param-name>javax.faces.PROJECT_STAGE</param-name> <param-value>Development</param-value> </context-param> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.xhtml</url-pattern> </servlet-mapping> <mime-mapping> <extension>eot</extension> <mime-type>application/vnd.ms-fontobject</mime-type> </mime-mapping> <mime-mapping> <extension>otf</extension> <mime-type>font/opentype</mime-type> </mime-mapping> <mime-mapping> <extension>svg</extension> <mime-type>image/svg+xml</mime-type> </mime-mapping> <mime-mapping> <extension>ttf</extension> <mime-type>application/x-font-ttf</mime-type> </mime-mapping> <mime-mapping> <extension>woff</extension> <mime-type>application/x-font-woff</mime-type> </mime-mapping> <mime-mapping> <extension>woff2</extension> <mime-type>application/x-font-woff2</mime-type> </mime-mapping> <welcome-file-list> <welcome-file>template.xhtml</welcome-file> </welcome-file-list> </web-app>

Now, create a ‘resources’ folder in WebContent:


Now, head to the Metronic default folder in “metronic_v5.0.6.1\metronic_v5.0.6.1\theme\default\dist\default”.


By examining index.html, we find that all of the CSS and JS files are under the ‘assets’ folder. Select all of the folders in assets and copy them under the resources folder.


Metronic declares the fonts it uses in vendors.bundle.css. However, in my experience, if you do not change the declaration of each font, JSF will not recognize the fonts (Chrome reports a 404 error for the resource) and will default back to an OS font.

You fix this by simply opening vendors.bundle.css and searching for @font-face. Here is the first font declaration:

@font-face {
    font-family: "summernote";
    font-style: normal;
    font-weight: normal;
    src: url("fonts/summernote/summernote.eot?0d0d5fac99cc8774d89eb08b1a8323c4");
    src: url("fonts/summernote/summernote.eot?#iefix") format("embedded-opentype"),
         url("fonts/summernote/summernote.woff?0d0d5fac99cc8774d89eb08b1a8323c4") format("woff"),
         url("fonts/summernote/summernote.ttf?0d0d5fac99cc8774d89eb08b1a8323c4") format("truetype");
}

Now, you can change it by adding “.xhtml?ln=vendors” after the font file extension. It should look like this:

@font-face {
    font-family: "summernote";
    font-style: normal;
    font-weight: normal;
    src: url("fonts/summernote/summernote.eot.xhtml?ln=vendors&0d0d5fac99cc8774d89eb08b1a8323c4");
    src: url("fonts/summernote/summernote.eot.xhtml?ln=vendors#iefix") format("embedded-opentype"),
         url("fonts/summernote/summernote.woff.xhtml?ln=vendors&0d0d5fac99cc8774d89eb08b1a8323c4") format("woff"),
         url("fonts/summernote/summernote.ttf.xhtml?ln=vendors&0d0d5fac99cc8774d89eb08b1a8323c4") format("truetype");
}

You will need to do this for all of the fonts. Next, let’s create a template .xhtml file that all other pages are going to be based on.


You will need to add this first:

<!DOCTYPE html>
<html lang="en"
      xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
      xmlns:c="http://xmlns.jcp.org/jsp/jstl/core"
      xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:jsf="http://xmlns.jcp.org/jsf"
      xmlns:p="http://xmlns.jcp.org/jsf/passthrough">

Next, create the head:

<h:head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
    <ui:insert name="meta-tag"></ui:insert>
    <script src="https://ajax.googleapis.com/ajax/libs/webfont/1.6.16/webfont.js"></script>
    <script>
        WebFont.load({
            google: {"families": ["Poppins:300,400,500,600,700", "Roboto:300,400,500,600,700"]},
            active: function() {
                sessionStorage.fonts = true;
            }
        });
    </script>
    <h:outputStylesheet library="vendors" name="custom/fullcalendar/fullcalendar.bundle.css" />
    <h:outputStylesheet library="vendors" name="base/vendors.bundle.css" />
    <h:outputStylesheet library="demo" name="default/base/style.bundle.css" />
    <title><ui:insert name="title">Page Title</ui:insert></title>
    <ui:insert name="page-style"></ui:insert>
    <link rel="shortcut icon" type="image/png" href="resources/custom/images/icon-derbyware.PNG" />
</h:head>

Next is the body. I cannot show the whole code for this because it’s 1,000+ lines, but here are the guidelines:

The Metronic HTML page starts with this:

<body class="m-page--fluid m--skin- m-content--skin-light2 m-header--fixed m-header--fixed-mobile m-aside-left--enabled m-aside-left--skin-dark m-aside-left--offcanvas m-footer--push m-aside--offcanvas-default">
    <!-- begin:: Page -->
    <div class="m-grid m-grid--hor m-grid--root m-page">

Now, simply make it like this:

<h:body styleClass="m-page--fluid m--skin- m-content--skin-light2 m-header--fixed m-header--fixed-mobile m-aside-left--enabled m-aside-left--skin-dark m-aside-left--offcanvas m-footer--push m-aside--offcanvas-default">
    <!-- start:: Page -->
    <div class="m-grid m-grid--hor m-grid--root m-page">

Then, copy the rest of the code of the head tag into index.xhtml.

If you go to line 3044 in index.html, you will find where the main content is located (other than the top and left sidebars).

Once you do that, put this code in place of the HTML one:

 <div class="m-grid m-grid--hor m-grid--root m-page">

Next, add this:

    <!-- end:: Page -->
    <!-- begin::Scroll Top -->
    <div class="m-scroll-top m-scroll-top--skin-top" data-toggle="m-scroll-top" data-scroll-offset="500" data-scroll-speed="300">
        <i class="la la-arrow-up"></i>
    </div>
    <!-- Scripts -->
    <h:outputScript library="vendors" name="base/vendors.bundle.js" />
    <h:outputScript library="demo" name="default/base/scripts.bundle.js" />
    <h:outputScript library="vendors" name="custom/fullcalendar/fullcalendar.bundle.js" />
    <h:outputScript library="app" name="js/dashboard.js" />
    <ui:insert name="page-script"></ui:insert>
</h:body>
</html>

Congratulations, now you have your template page up and running.

Let’s make the first page that we will be using (assuming the page exists in WebContent alongside index.xhtml, which is the template).

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html>
<html lang="en"
      xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:jsf="http://xmlns.jcp.org/jsf"
      xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
      xmlns:c="http://xmlns.jcp.org/jsp/jstl/core"
      xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:p="http://xmlns.jcp.org/jsf/passthrough">
    <ui:composition template="index.xhtml">
        <ui:define name="title">Yaay!</ui:define>
        <ui:define name="m-content-body">
            <h1 class="m--font-success">JSF based on Metronic Theme</h1>
        </ui:define>
    </ui:composition>
</html>

That’s it! Now you have a Java EE 7/8 application that uses the HTML theme Metronic as the front-end.


Original Link

Design Patterns Explained – Service Locator Pattern with Code Examples

The Service Locator pattern is a relatively old pattern that was very popular with Java EE. Martin Fowler described it in 2004 on his blog. The goal of this pattern is to improve the modularity of your application by removing the dependency between the client and the implementation of an interface.

Interfaces are one of the most flexible and powerful tools to decouple software components and to improve the maintainability of your code. I wrote a lot about them in my series about the SOLID design principles:

  • Following the Open/Closed Principle, you use one or more interfaces to ensure that your component is open for extension, but closed for modification.
  • The Liskov Substitution Principle requires you to implement your interfaces in a way that you can replace its implementations without changing the code that uses the interface.
  • The Interface Segregation Principle ensures that you design your interfaces so that clients don’t depend on parts of the interface that they don’t use.
  • And, to follow the Dependency Inversion Principle, you need to introduce an interface as an abstraction between a higher and a lower level component to split the dependency between both components.

All of these principles enable you to implement robust and maintainable applications. But they all share the same problem — at some point, you will need to provide an implementation of the interface. If that’s done by the same class that uses the interface, you will still have a dependency between the client and the implementation of the interface.

The Service Locator pattern is one option for avoiding this dependency. It acts as a central registry that provides implementations of different interfaces. By doing that, your component that uses an interface no longer needs to know the class that implements the interface. Instead of instantiating that class itself, it gets an implementation from the Service Locator.

That might seem like a great approach, and it was very popular with Java EE, but, over the years, developers started to question this pattern. You don’t get the decoupling of the client and the implementation of the interface for free, and there are other options to achieve the same goal, e.g., the Dependency Injection pattern. But, that doesn’t mean that this pattern is no longer valid. Let’s first take a closer look at the Service Locator pattern before we dive into the details of that discussion.

The Service Locator Pattern

In this article, I use the same example as I used in my article about the Dependency Inversion Principle. It consists of a CoffeeApp class that uses the CoffeeMachine interface to brew a cup of coffee with different coffee machines. There are two machines available, the BasicCoffeeMachine and the PremiumCoffeeMachine class. Both of them implement the CoffeeMachine interface.

As you can see in the diagram, the CoffeeMachine interface ensures that there are no dependencies between the CoffeeApp, BasicCoffeeMachine, and PremiumCoffeeMachine. All three classes only depend on the interface. That improves the maintainability of all classes and enables you to introduce new coffee machines without changing the existing code.

But, it also introduces a new problem: how does the CoffeeApp get an implementation of the CoffeeMachine interface without creating a dependency to that specific class? In my article about the Dependency Inversion Principle, I provided a CoffeeMachine object as a constructor parameter to the CoffeeApp.

public class CoffeeApp {

    private CoffeeMachine coffeeMachine;

    public CoffeeApp(CoffeeMachine coffeeMachine) {
        this.coffeeMachine = coffeeMachine;
    }

    public Coffee prepareCoffee(CoffeeSelection selection) throws CoffeeException {
        Coffee coffee = this.coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }
}

That moved the task of the object instantiation and the dependency from the CoffeeApp to the CoffeeAppStarter class.

public class CoffeeAppStarter {

    public static void main(String[] args) {
        // create a Map of available coffee beans
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>();
        beans.put(CoffeeSelection.ESPRESSO,
            new CoffeeBean("My favorite espresso bean", 1000));
        beans.put(CoffeeSelection.FILTER_COFFEE,
            new CoffeeBean("My favorite filter coffee bean", 1000));

        // get a new CoffeeMachine object
        PremiumCoffeeMachine machine = new PremiumCoffeeMachine(beans);

        // instantiate CoffeeApp
        CoffeeApp app = new CoffeeApp(machine);

        // brew a fresh coffee
        try {
            app.prepareCoffee(CoffeeSelection.ESPRESSO);
        } catch (CoffeeException e) {
            e.printStackTrace();
        }
    }
}

Introducing the Service Locator

The service locator pattern provides a different approach. It acts as a singleton registry for all services that are used by your application and enables the CoffeeApp to request an implementation of the CoffeeMachine interface.

There are different options to implement the service locator. You can use a static service locator that uses a field for each service to store an object reference. Or, you can create a dynamic one that keeps a java.util.Map with all service references. This one can be dynamically extended to support new services.
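
For illustration, the dynamic variant could be sketched as follows; the class and method names are hypothetical and not part of the original example:

import java.util.HashMap;
import java.util.Map;

public class DynamicServiceLocator {

    // maps a service interface to its registered implementation
    private static final Map<Class<?>, Object> services = new HashMap<>();

    public static <T> void register(Class<T> serviceInterface, T implementation) {
        services.put(serviceInterface, implementation);
    }

    public static <T> T get(Class<T> serviceInterface) {
        return serviceInterface.cast(services.get(serviceInterface));
    }
}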

Both implementations follow the same approach, but the static service locator is a little bit easier to understand. So, I will use the static one in my coffee machine example.

Adding a Static Service Locator

Before you implement your service locator, you need to decide which interface implementation it shall return, or you can use an external configuration parameter that specifies the name of the class that implements the interface. The latter approach is more flexible, but also more complex. To keep the example easy to understand, I will instantiate a PremiumCoffeeMachine object without using any external configuration parameters. If you decide to use the Service Locator pattern in your application, I recommend making it as configurable as possible and providing the name of the class as a configuration parameter.

As I explained earlier, the Service Locator is a singleton. The CoffeeServiceLocator class, therefore, only has a private constructor and keeps a reference to itself. You can get a CoffeeServiceLocator instance by calling the static getInstance method on the CoffeeServiceLocator class.

public class CoffeeServiceLocator {

    private static CoffeeServiceLocator locator;

    private CoffeeMachine coffeeMachine;

    private CoffeeServiceLocator() {
        // configure and instantiate a CoffeeMachine
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>();
        beans.put(CoffeeSelection.ESPRESSO,
            new CoffeeBean("My favorite espresso bean", 1000));
        beans.put(CoffeeSelection.FILTER_COFFEE,
            new CoffeeBean("My favorite filter coffee bean", 1000));
        coffeeMachine = new PremiumCoffeeMachine(beans);
    }

    public static CoffeeServiceLocator getInstance() {
        if (locator == null) {
            locator = new CoffeeServiceLocator();
        }
        return locator;
    }

    public CoffeeMachine coffeeMachine() {
        return coffeeMachine;
    }
}

In the next step, you can refactor the CoffeeApp. It can now get the CoffeeMachine object from the CoffeeServiceLocator and not as a constructor parameter.

public class CoffeeApp {

    public Coffee prepareCoffee(CoffeeSelection selection) throws CoffeeException {
        CoffeeMachine coffeeMachine = CoffeeServiceLocator.getInstance().coffeeMachine();
        Coffee coffee = coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }
}

That’s all you need to do to introduce the Service Locator pattern into the coffee machine example. As you have seen, the implementation of a simple Service Locator class isn’t complicated. You just need a singleton that returns instances of the different service interfaces used in your application.

Arguments Against the Service Locator Pattern

After we discussed the implementation details of the Service Locator pattern, it’s time to take a closer look at the discussions about the pattern and its alternatives.

As you will see in the following paragraphs, there are several valid concerns about this pattern. Some of them can be avoided by using the Dependency Injection pattern. If you’re building your application using Jakarta EE, previously called Java EE, or Spring, you already have a very powerful Dependency Injection implementation. In these situations, it’s better to use the Dependency Injection pattern instead of the Service Locator pattern. If that’s not the case, the Service Locator pattern is still a good option to remove the dependency between the client and the implementation of an interface.
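
To make the contrast concrete, here is a minimal sketch of what the CDI-based Dependency Injection alternative could look like in the coffee example; the producer class is a hypothetical addition, assuming a CDI runtime:

import java.util.HashMap;
import java.util.Map;

import javax.enterprise.inject.Produces;
import javax.inject.Inject;

public class CoffeeApp {

    // the container resolves and injects the implementation;
    // CoffeeApp references neither a locator nor a concrete class
    @Inject
    private CoffeeMachine coffeeMachine;

    public Coffee prepareCoffee(CoffeeSelection selection) throws CoffeeException {
        Coffee coffee = coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }
}

class CoffeeMachineProducer {

    // tells the CDI container which implementation to provide for injection points
    @Produces
    public CoffeeMachine coffeeMachine() {
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<>();
        beans.put(CoffeeSelection.ESPRESSO, new CoffeeBean("My favorite espresso bean", 1000));
        return new PremiumCoffeeMachine(beans);
    }
}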

The three most common arguments against the Service Locator pattern are:

  • All components need to have a reference to the Service Locator, which is a singleton.
  • The Service Locator makes the application hard to test.
  • A Service Locator makes it easier to introduce breaking changes in interface implementations.

All Components Need to Reference the Service Locator

This is a valid concern. If you use your components in different applications and environments, introducing a dependency to your Service Locator class might be problematic, because the class might not exist in all environments. You can try to avoid that by adding one or more interfaces that abstract the Service Locator and enable you to provide an adapter.

Implementing the Service Locator as a singleton can also create scalability problems in highly concurrent environments.

You can avoid both problems by using the Dependency Injection pattern instead of the Service Locator pattern. Both patterns have the same goal but use very different approaches to achieve it. I will explain the Dependency Injection pattern in more detail in my next article.

It Makes the Application Hard to Test

The validity of this argument against the Service Locator pattern depends on the quality of your code. As long as you implement your Service Locator carefully, you can replace it during your tests with an implementation that provides test stubs for different services. That might not be as easy as it could be if you had used the Dependency Injection pattern, but it’s still possible.

Higher Risk to Introduce Breaking Changes

That is a general issue that is caused by the interface abstraction of your service and not by the Service Locator pattern. As soon as you implement a reusable component and use an interface as an abstraction to make the implementation replaceable, you are taking the risk that the next change on your interface implementation will break some external component. That is the price you have to pay if you want to create reusable and replaceable code.

The best way to handle this risk is to create a well-defined contract for your interface. You then need to document this contract and implement a test suite that validates it. This test suite belongs to the interface and should be used to verify all implementations of it. That enables you to find breaking changes before they cause runtime errors in production.

Summary

You can choose between different patterns that enable you to decouple a client from the implementation of an interface. The Service Locator pattern is one of them.

This pattern introduces a singleton registry that provides an instance of a service interface. That moves the dependency on the interface implementations from the client of the interface to the Service Locator class.

The Service Locator pattern is relatively old and still valid. But Spring and Jakarta EE provide powerful implementations of the Dependency Injection pattern. This pattern has the same goal as the Service Locator pattern, and I will explain it in more detail in my next article. If you are building your application with Jakarta EE or Spring, you should prefer the Dependency Injection pattern.

Original Link

Proposed Jakarta EE Design Principles

Jakarta EE is slowly emerging, and with it, future enterprise specifications. In order to align the different standards and technologies that are about to be formed, it would be valuable for the Enterprise Java community to agree upon design principles for Jakarta EE specifications.

I believe that there are a few principles that have made Java EE such a successful technology in the past. The following illustrates my point of view on design principles that I recognized in Java EE and that might be worth pursuing and recording further as possible guidelines for Jakarta EE.

This blog post was encouraged by Dmitry Kornilov’s proposals on the technical directions of Jakarta EE.

Java EE’s programming model allows developers to focus on what they should focus on: the business logic. There’s no need to extend API classes anymore; developers can write their domain logic in plain Java and control the application server behavior mostly declaratively, via annotations. As a result, the framework integrates leanly into your code and can, in fact, easily be removed again. Don’t design for reusability, design for removal.

The implementations, however, take as much heavy lifting off the shoulders of developers as possible, namely technical requirements that are not related to the business logic. Examples are threading, transactions, inversion of control, or HTTP request handling. On the application side it’s good to “be boring.”

I consider it important that a framework does not get in the way of the business logic but actually empowers developers to get their functionality to production faster. We don’t want the framework to shine, we want your business code to shine. Compare modern Java EE or Spring to the old days of J2EE and I’m sure you’ll see what I mean.

Jakarta EE should continue this trend and should focus on providing specifications that enable developers to ship their business logic as fast as possible.

Convention Over Configuration

Java EE minimizes the configuration that is required to define a day-to-day enterprise application. Convention works out-of-the-box for the majority of use cases, with zero configuration. Examples are that no XML files are required anymore to configure a basic Java EE application, or that JAX-RS provides appropriate default HTTP response codes for JAX-RS method return values.
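
As a small, hypothetical illustration of those defaults, a JAX-RS resource needs nothing beyond a few annotations; the container derives sensible HTTP behavior by convention (the resource class and data are made up):

import java.util.Arrays;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("books")
public class BooksResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> books() {
        // a non-null return value is mapped to 200 OK by default,
        // with the body serialized to JSON
        return Arrays.asList("Java EE 8", "Jakarta EE");
    }
}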

Java EE indeed offers the flexibility to modify the behavior for more complex scenarios, however, the convention doesn’t require it.

Jakarta EE should continue to “make the easy simple and the difficult possible”.

Interoperability of Specifications

Jakarta EE should continue and extend the interoperability of its specifications. Java EE honors existing specifications and functionality thereof that are already part of the standard.

Developers can expect separate specifications to work well with each other, with zero configuration required. The standards require that if the runtime supports both specifications A and B, A + B have to collaborate with each other. An example of this is that Bean Validation, JAXB, or JSON-B can be used in JAX-RS resource classes, without further configuration.

Dependency Injection and CDI

We should not want Jakarta EE to reinvent already existing things, for example CDI’s dependency injection. The specifications should, if possible, use and leverage the power of JSR 330, or if required, CDI.

An example that we have today is the injection of JAX-RS’ UriInfo into resource methods. It’s not yet supported to use @Inject to inject this type. Relying on one mechanism as much as possible improves the developer experience.

As another concrete action, specifications should provide CDI producers, and if necessary, typesafe qualifiers, for types that need to be created. As of today, an instance of the JAX-RS client, for example, can only be obtained via the programmatic ClientBuilder API. Producers and qualifiers help improve the experience by enabling declarative definitions.
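
A hypothetical producer along those lines could look like the following, letting application code simply inject a Client instead of calling the builder:

import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class ClientProducer {

    // wraps the programmatic builder API in a declarative CDI producer
    @Produces
    public Client client() {
        return ClientBuilder.newClient();
    }

    // closes the client when the container disposes of the bean
    public void close(@Disposes Client client) {
        client.close();
    }
}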

That being said, the Java EE API heavily enables developers to define various functionality in a declarative way, by using inversion of control. This means developers do not invoke the functionality directly; rather, it is invoked by the container, based on the code definitions. Examples are JAX-RS, JSON-B, or CDI, among most other modern specifications.

Besides offering more exhaustive programmatic APIs, Jakarta EE should continue to enable developers to use declarative definitions and inversion of control.

One big difference, and for me an advantage, that Java EE offers is the deployment model that separates the business logic concerns from the implementation. Developers solely code against the API, which is not shipped inside the deployment artifact and which will be implemented by the application container.

These thin deployment artifacts simplify and speed up the delivery of the software, including builds, publication, and deployments. They are also compatible with container file system layers, as used in Docker. The deployment process only needs to re-build or re-transmit what has been changed.

Ideally, the deployment artifacts only comprise the sole business logic; runtime implementations and potential third-party libraries are delivered in a lower layer, for example in the application server libraries that have been added in a previous container build step.

Jakarta EE should continue to offer thin artifact deployments as a first-class citizen. It may optionally make it possible to further slim down the runtime, based on which specification the application requires. However, Jakarta EE should focus on the business logic and developer productivity first, and on tuning the runtimes second.

I think there’s a reason why Java EE APIs have such wide usage in real-world projects: well-thought-out and well-defined design principles that touch not only a single specification but the overall platform in a uniform way. They enable developers to use multiple specifications with the same look and feel and to write application code with fewer artificial obstacles, and they make, so I believe, enterprise development more enjoyable.

Original Link

How Decisions Are Made: Jakarta EE and Eclipse MicroProfile

Recently, I was tasked with preparing a presentation on an update to Jakarta EE and Eclipse MicroProfile, and it got me thinking about the organization and structure involved in this huge effort to transform Java EE into a truly open source standard under the Eclipse Foundation. While organizing my thoughts, I put together a picture showing the structure and tensions of this undertaking to help people understand what various groups do and perhaps how better to get involved. The structure and governance are evolving as I write this, so I may not get everything right.


Project EE4J

EE4J was the first name we had when it was announced that Java EE is moving to the Eclipse Foundation. EE4J is a top level project in the Eclipse Foundation — a top level project is a way of structuring open source projects at the Eclipse Foundation. Each project lives under a top level project, which is managed by the Project Management Committee (PMC). The PMC of the EE4J is tasked with providing oversight and leadership to its subprojects as well as ensuring they meet the EE4J charter goals and are following the Eclipse Development Process.

Oracle is currently in the process of donating the Java EE code to Eclipse, the progress of which can be seen on the project website. This is a huge undertaking when you think that every piece of code, dependency, and third-party contribution in every project needs to be IP-checked and relicensed.

Eventually, under the EE4J project, there will be individual projects for each of the individual APIs; the Reference Implementation (RI); the Technology Compatibility Kit (TCK); and supporting technologies like Grizzly. These projects are where the real work is done, and each uses the Eclipse Development Process as the rules of engagement. Each project consists of a set of individual committers with anybody able to become a committer by contributing to the project and being elected. The individual EE4J projects have a lot of autonomy and can evolve and release their projects in a way the committers feel is best for their project while also upholding the top level project charter to create an integrated platform. The source code for the EE4J projects lives under the Eclipse EE4J organisation in GitHub and evolves using standard open source best practices. Anyone can raise a GitHub issue and a corresponding Pull Request (PR) to develop the project.

The outputs of an EE4J project will be an API, an RI, and a TCK.

Jakarta EE

Jakarta EE is a working group within the Eclipse Foundation with the goal of promoting the Jakarta EE platform. While the EE4J projects are driven by individual committers, the working group provides a forum where organisations that are committed to Jakarta EE can come together to shape the business aspects of driving the Jakarta EE brand. The working group’s key goals are to promote the Jakarta EE brand; to define a compatibility and certification programme so that implementations can be labelled Jakarta EE compatible; and to define and run the specification process that creates Jakarta EE specifications. The working group has three working committees and a steering committee. The key committees are for the Specification process; Marketing and Branding; and Enterprise Requirements.

The outputs of the Jakarta EE working group are the brand, a set of specifications for the Jakarta EE platform and profiles of the platform, and a compatibility mark.

Creative Tension: Innovation, Maturity, Consistency, and Portability

While the goals of the EE4J project(s) and the Jakarta EE Working Group are close, there is a nice creative tension built into the system. While individual committers in EE4J projects are free to innovate and evolve their APIs within their projects, ultimately, they must try and become a part of the Jakarta EE platform. To do that, they must produce an API, RI, and TCK of sufficient maturity to enter into the specification process and do the work to also deliver a Jakarta EE specification.

At this point, the project will also have to show it integrates with other Jakarta EE specifications in order to provide a platform that has a lot of internal consistency. This provides a quality gate for organizations that create implementations of the Jakarta EE platform they wish to certify as compliant and for end-users that want to build applications against the Jakarta EE specifications and want portability across implementations. Remember, if an EE4J project becomes a Jakarta EE specification and is included in the Jakarta EE platform, every Jakarta EE compatible product will need to create an integrated implementation of the API.

Not every project in EE4J will necessarily become part of Jakarta EE. The EE4J PMC is free to admit any open source project that conforms to the charter of the EE4J project. New, innovative APIs may be born and die within EE4J without ever reaching the level of maturity required for proposal to the Jakarta EE working group as a standard. The EE4J top level project can become the engine of innovation for Jakarta EE; new projects can be proposed with relative ease and, if they gain traction, graduate to the Jakarta EE platform.

At the same time, Jakarta EE is not just made up of EE4J projects. Key foundation technologies like CDI live outside EE4J, along with Bean Validation and JBatch. The Jakarta EE specification committee is free to receive proposals from external open source projects that wish to follow the Jakarta EE specification process and become part of the Jakarta EE platform and brand.

Where Does MicroProfile Fit In?

The MicroProfile project is currently a single project in the Eclipse Foundation with the goal of optimizing Enterprise Java for microservice-based architectures. The project has a number of repositories in GitHub, one for each API developed under MicroProfile. The MicroProfile project develops Java APIs, a standard document, and a TCK for each of the APIs. Currently, MicroProfile is a sub-project of the Eclipse Technology project and so is not related to either EE4J or Jakarta EE, although many of the organizations and committers are involved in both projects, and the foundations of MicroProfile, like CDI and JAX-RS, will become Jakarta EE specifications.

There are a number of scenarios for the MicroProfile project. The first is that the project could be moved under the EE4J top level project and perhaps split into a number of individual sub-projects, one for each API. Secondly, the MicroProfile project could remain where it is and approach the Jakarta EE Working Group and seek to have some of the APIs become part of the Jakarta EE platform. Alternatively, MicroProfile may continue as it is and not become part of EE4J or Jakarta EE. Which direction it takes depends on the committers of the MicroProfile project.

What Is Payara Doing?

It has been less than a year since Oracle announced they are moving Java EE to the Eclipse Foundation to open up the platform for the future. They have now started the massive undertaking of moving the Java EE code base to the Eclipse Foundation. Since then I personally have become a member of the EE4J PMC and a director of the Eclipse Foundation board. Members of the Payara team have become project leads and committers on a number of the EE4J sub-projects. Payara has also joined the Eclipse Foundation as a Strategic Member; joined the Jakarta EE Working Group and is actively involved in the committees driving Jakarta EE forward.

It is a testament to the openness of the community that Payara, a small company founded just over two years ago, has been welcomed into both EE4J and Jakarta EE. If we can do it anybody can – so I encourage individuals and organisations to become involved.

How Do I Get Involved?

If you are keen to shape the future of Java EE, there are a number of ways to get involved. If you are an individual interested in specific APIs, their RIs, or the quality and breadth of the TCKs, your first port of call is following the relevant EE4J project. Each project has a mailing list, so join in. Feel free to create, comment, criticize and compromise in issues and PRs on GitHub.

If you represent an organization that creates implementations or builds applications on Java EE now or Jakarta EE in the future, then get your organization to join the working group and shape the Jakarta EE platform and brand through the working group mailing lists and committees.

Original Link

Automate Deployments to Payara Application Server

In the previous post, I talked about Automated deployments to GlassFish application server. Payara is an open-source application server derived from GlassFish, so basically, the FlexDeploy GlassFish plugin works with Payara as well. I have installed Payara on my laptop using a zip download. Payara can be started using the asadmin command line utility, but in my case, I started the Payara domain using the Start Domain operation of the FlexDeploy plugin, which I will not describe in this post, but it is fairly straightforward to use start and stop plugin operations in the Utility workflow.

We will first set up a FlexDeploy workflow with three simple steps: first, make sure the domain is running; then undeploy the application; and then deploy a new version of the application.

Configurations in FlexDeploy are done by creating logical Instances (we will name it Payara) and associating them with environments like Development, Test, Pre-Production, Production, etc. In this example, let’s configure Payara server in a Development environment.

We will not talk about build workflow, but you can use Ant, Maven, JDeveloper, or even a Shell plugin to create a war or ear file from source code.

A FlexDeploy Project is basically the artifact(s) being managed as a group. In this case, I will use sample.war to demonstrate deployment to the Payara application server. I have created a FlexDeploy Project to build this war file and then deploy it using the workflow shown above. A deployment request is submitted as it would be for any other type of artifact in FlexDeploy where you will pick the Project Version and Environment.

You can manually submit a request or automate the entire process by using Continuous Integration configurations, or even use Release and Pipelines to continuously deliver changes through various environments. In any case, FlexDeploy allows you to set up Deployment Schedules and/or Approvals, which are provided by the FlexDeploy platform and work for any type of artifact being managed.

As always, you can view logs and steps from workflow execution in the FlexDeploy UI. At the end of deploy workflow execution, the application will be available for use.

Now you can set up continuous integration for your application(s) through FlexDeploy. You can also automate other supporting artifacts for your applications by using other FlexDeploy plugins. For example, JDBC or Oracle Database plugins can be used to automate management of database objects along with application deployments. You can use Test Automation to run automated tests during the build or deploy process to make sure the application being delivered is of good quality. You can even automate resource creation in Payara application server by using the Shell plugin, where FlexDeploy does not yet provide an out-of-box plugin. Even better, you can develop your own FlexDeploy plugins using the Plugin SDK. Now, enjoy the benefits of automation.

Original Link

Hibernate Tips: How to Order the Elements of a Relationship

Hibernate Tips is a series of posts in which I describe a quick and easy solution for common Hibernate questions. Some of the most popular tips are also available as a book.

If you have a question for a future Hibernate Tip post, please leave a comment below.

Question:

How can I order the elements of an annotated relationship without writing my own query?

Get more videos from the Hibernate Tips playlist.

Solution:

JPA supports the @OrderBy annotation, which you can add to a relationship attribute as you can see in the following code snippet.

@ManyToMany
@JoinTable(name = "BookAuthor",
    joinColumns = {@JoinColumn(name = "bookId", referencedColumnName = "id")},
    inverseJoinColumns = {@JoinColumn(name = "authorId", referencedColumnName = "id")})
@OrderBy(value = "lastName ASC")
private Set<Author> authors = new HashSet<Author>();

In this example, I want to order the authors who have written a specific book by their last name. You can do this by adding the @OrderBy annotation to the relationship and specifying the ORDER BY statement in its value attribute. In this case, I define an ascending order for the lastName attribute of the Author entity.
If you want to order by multiple attributes, you can provide them as a comma-separated list, just as in SQL or JPQL queries.
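
For example, a hypothetical ordering by last name and then first name would look like this (both attributes exist on the Author entity in this example):

@OrderBy("lastName ASC, firstName ASC")
private Set<Author> authors = new HashSet<Author>();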

Hibernate uses the value of the annotation to create an ORDER BY statement when it fetches the related entities from the database.

05:22:13,930 DEBUG [org.hibernate.SQL] – select authors0_.bookId as bookId1_2_0_, authors0_.authorId as authorId2_2_0_, author1_.id as id1_0_1_, author1_.firstName as firstNam2_0_1_, author1_.lastName as lastName3_0_1_, author1_.version as version4_0_1_ from BookAuthor authors0_ inner join Author author1_ on authors0_.authorId=author1_.id where authors0_.bookId=? order by author1_.lastName asc

Original Link

Property Injection in Java With CDI

One of the more common tasks faced in Java application development environments is the need to obtain constant values (Strings, numbers, etc.) from external properties files or the environment. Out of the box, Java provides methods such as System.getenv and Properties.load for retrieving such values, but their use can often lead to excess boilerplate code and logic that checks for missing values in order to apply defaults. Using CDI or CDI extensions, most of the boilerplate can be avoided, moving property retrieval and usage out of the way.

Consider the simple case where we have a class declaring a String field that must be set dynamically at runtime to a property value. The assumption is that this class is being managed by a CDI runtime such as a Java EE container.

In this example, we would like to set the value of `simple` to be the value of a system property, if available. Otherwise, set the value to be the property contained in a standard format properties file on the class path. Finally, set the value to be an empty String if nothing is found. By convention, both the system property and the name of the properties file match the fully qualified name of the class plus the field. Exceptions are thrown to the caller for the sake of brevity.

package com.example.injection;

import java.io.InputStream;
import java.util.Properties;

public class Example {

    private String simple = null;

    public String getSimple() throws Exception {
        if (this.simple == null) {
            // Do we have a System property to use?
            String systemSimple = System.getProperty("com.example.injection.Example.simple");

            if (systemSimple == null) {
                /* No System property found, check in
                 * Example.properties on class path */
                Properties classProperties = new Properties();
                ClassLoader loader = getClass().getClassLoader();
                String resName = "com/example/injection/Example.properties";

                try (InputStream in = loader.getResourceAsStream(resName)) {
                    classProperties.load(in);
                }

                this.simple = classProperties.getProperty("simple", "");
            } else {
                this.simple = systemSimple;
            }
        }

        return this.simple;
    }
}

There is quite a bit of code here for something that, on the surface, seemed to be a simple task. The level of complexity increases if we want to parse the property into another type of object, such as an Integer or a Date, or if we need to cache the Properties for performance reasons. No developer wants to pollute the code with methods like this (not to mention test them).

Enter CDI and CDI extensions. In a Java EE environment or other runtime that supports CDI, we can replicate the functionality above in a much simpler way.

package com.example.injection;

import javax.inject.Inject;
import io.xlate.inject.Property;

public class Example {

    @Inject
    @Property(defaultValue = "")
    private String simple;

    public String getSimple() {
        return this.simple;
    }
}

This example makes use of a small library, Property Inject, to obtain values from system properties and/or property files. Using the default naming conventions, all of the logic from the earlier example is handled under the hood. When necessary, we can override the default behavior and specify the name of the system property to use, the URL containing the Properties we want to reference, and the name of the key within those properties.

Consider the example where we would like to override the property location and naming. Below, the CDI extension will first attempt to find the value in the system property called `global.simple` (e.g. the command line argument -Dglobal.simple="really simple"). If not found, the properties file named `config/my-app.properties` will be loaded from the class path and searched for the entry `example.simple`. Finally, if nothing has been found, the value will default to null since no defaultValue has been defined.

package com.example.injection;

import javax.inject.Inject;
import io.xlate.inject.Property;
import io.xlate.inject.PropertyResource;

public class Example {

    @Inject
    @Property(name = "example.simple",
              resource = @PropertyResource("classpath:config/my-app.properties"),
              systemProperty = "global.simple")
    private String simple;

    public String getSimple() {
        return this.simple;
    }
}

In addition to Strings, Property Inject also supports the injection of all Java primitive types and their wrapper classes, BigInteger, BigDecimal, Date, JsonArray, JsonObject, and java.util.Properties collections themselves.
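
As an illustration, injecting a numeric value looks no different from injecting a String. The field and property names below are hypothetical (and the usual java.math.BigDecimal import is assumed), but the defaultValue handling follows the same conventions described above:

@Inject
@Property(defaultValue = "10") // hypothetical property, parsed into the primitive type
private int maxRetries;

@Inject
@Property(defaultValue = "0.05") // hypothetical property, parsed into a BigDecimal
private BigDecimal taxRate;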

How are you using properties in your CDI-enabled applications today?

Original Link

Getting Started With Java EE 8, Payara 5 and Eclipse Oxygen

Some days ago, I had the opportunity (or obligation) to set up a brand new Linux (Gentoo) development box, so to make it “enjoyable,” I prepared a back-to-basics tutorial on how to set up a working environment.

Requirements

In order to set up a complete Java EE development box, you need at least:

  1. Working JDK installation and environment

  2. IDE/text editor

  3. Standalone application server if your focus is “monolithic”

Due to personal preferences, I choose:

  1. OpenJDK on Gentoo Linux (Icedtea bin build)

  2. Eclipse for Java EE developers

  3. Payara 5

Installing OpenJDK

Since this is a distribution-dependent step, you can follow tutorials for Ubuntu, CentOS, Debian, and many other distributions if you need to. At this time, most application servers target Java 8 due to the new Java LTS version scheme, as is the case with Payara.

For Gentoo Linux, you could get a new OpenJDK setup by installing dev-java/icedtea for the source code version and dev-java/icedtea-bin for the precompiled version.

emerge dev-java/icedtea-bin

Is OpenJDK a Good Choice for My Needs?

Currently, Oracle has plans to free up all enterprise-commercial JDK features. In the near future, the differences between OracleJDK and OpenJDK should be zero.

In this line, Red Hat and other big players have been offering OpenJDK as the standard JDK in Linux distributions, working flawlessly for many enterprise-grade applications.

Eclipse for Java EE Developers

After a complete revamp of the website’s GUI, you can go directly to eclipse.org and download Eclipse IDE.

Eclipse offers collections of plugins called Packages — each package is a collection of common plugins aimed for a particular development need. Hence to simplify the process, you could download Eclipse IDE for Java EE Developers.

On Linux, you will download a .tar.gz file, which you should uncompress in your preferred directory.

tar xzvf eclipse-jee-oxygen-3a-linux-gtk-x86_64.tar.gz

Finally, you could execute the IDE by entering the bin directory and launching the eclipse binary.

cd eclipse/bin
./eclipse

The result should be a brand new Eclipse IDE.

Payara

You can grab a fresh copy of Payara by visiting the payara.fish website.

From Payara, you will receive a ZIP file that again you should uncompress in your preferred directory.

unzip payara-5.181.zip

Finally, you could add Payara’s bin directory to the PATH variable in order to use the asadmin command from any CLI. You can achieve this by appending a line to your ~/.bashrc file. For example, if you installed Payara at ~/opt/, the complete instruction is:

echo "PATH=$PATH:~/opt/payara5/bin" >> ~/.bashrc

Integration Between Eclipse and Payara

After unzipping Payara, you are ready to integrate the app server in your Eclipse IDE.

Recently, due to the Java/Jakarta EE transition, the Payara team has prepared a new integration plugin compatible with Payara 5. In the past, you could also use the GlassFish Developer Tools with Payara, but this is no longer possible.

To install it, drag the plugin’s install button from its Eclipse Marketplace page into your running Eclipse workspace (this requires the Eclipse Marketplace Client) and follow the wizard steps.

In the final step, you will be required to restart Eclipse. After that, you still need to add the application server: go to the Servers tab and click to create a new server:

Select the Payara application server:

Find Payara’s install location and JDK location (corresponding to ~/opt/payara5 and /opt/icedtea-bin on my system):

Configure Payara’s domain, user, and password.

In the end, you will have Payara server available for deployment:

Test the Demo Environment

It’s time to give it a try. We can start a new application from a Java EE 8 archetype; one of my favorites is Adam Bien’s javaee8-essentials-archetype, which provides an opinionated, essentials-only setup.

First, create a new project and select a new Maven Project:

In Maven’s window, you can search by name for any archetype in Maven Central, but you should wait a little bit for synchronization between Eclipse and Maven.

If waiting is not your thing, you could also add the archetype directly:
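
The same can be done from the command line with a plain archetype:generate invocation. The coordinates below are the archetype’s published Maven coordinates, but the version shown is illustrative, so check Maven Central for the latest release:

mvn archetype:generate \
  -DarchetypeGroupId=com.airhacks \
  -DarchetypeArtifactId=javaee8-essentials-archetype \
  -DarchetypeVersion=0.0.1 \
  -DgroupId=com.example -DartifactId=demo -Dversion=1.0-SNAPSHOT \
  -DinteractiveMode=false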

This archetype also creates a new JAX-RS application and endpoint. After some minor modifications, just deploy it to Payara 5 and see the results.

Original Link

DZone Research: The Problems With Java

To gather insights on the current and future state of the Java ecosystem, we talked to executives from 14 companies. We began by asking, “What are the most common problems with the Java ecosystem today?” Here’s what the respondents told us:

Verbosity

  • It’s a bit verbose. Other languages are simpler. A lot of libraries – hard to choose which one. The JDK is complete but there are a lot of dependencies. 
  • The most common complaint about Java is that it tends to be long-winded, meaning you have to type more characters to get what you want done than in some other, more modern languages. That’s something newer languages have been trying to tackle, with the goal of trying to get the quality, features, and power of Java without needing to do quite as much typing. The ability to write code faster is one of the biggest reasons that some people prefer languages that are actually worse languages when compared side-by-side with Java.
  • Java developers have no problems with Java. Those on the outside will complain that it’s too verbose. The Java 9 upgrade is not seamless; you must add dependencies to adopt it. It will not be supported long term. Java 10 will be out this month and may result in a lot of people skipping the Java 9 upgrade.
  • Java itself is verbose but there are alternatives like Kotlin, Scala, and project Lombok.

Nothing

  • I think the ecosystem is pretty healthy right now, relatively speaking. A benevolent dictator and an engaged community is probably the best mix. Feels like good energy stemming from more frequent releases. 
  • If you had asked me this same question one year ago, I would have suggested that the uncertainty of Oracle’s intentions around Java EE was a substantial problem.   As previously mentioned, Java dominates the enterprise.  While we did have folks like IBM, Tomitribe, Red Hat, Payara and others working to create MicroProfile, it was unclear what would happen if Oracle remained neutral. Now, we have Jakarta EE at Eclipse and we are open for innovation yet again.

Other

  • Software engineering quality is degrading with less care and pride of workmanship. No real engineering processes. No one is nailing down well-known methodologies.
  • It can be hard to consume all that’s contained in a release every six months.
  • 1) The Maven repository was revolutionary in its time as a central repository for Java libraries, but it is starting to show its age, especially compared with the npm registry. Consider the simplicity of searching for quality libraries in the npm registry, which includes a great search tool, a way to grade the quality of a library, and a culture of readmes that help you figure out what a library is all about at a glance. 2) Java libraries tend to be too large and try to include as much as possible in one library, with, in the best case, huge documentation that is very difficult to comprehend due to the number of things to understand, and in the worst case, nothing but a huge Javadoc. Compare and contrast that with the JS library ecosystem, which tends to favor small libraries and a “pick and choose” methodology, enabling easy understanding of a library. 3) The slowness of the evolution of the language is a big advantage, but because it was too slow, it has driven many developers, especially thought leaders, away from the language.
  • Governance of the JVM. Eclipse is a good steward of open source. The rate of innovation into the JVM is stalling. Java 9 represents the last attempt to push technology into the platform. I’m not sure where the product roadmap goes after that. The language goes in circles. Every generation needs their own language. It’s nicer to be part of a vibrant ecosystem. Java’s guarantee of compatibility gives you a wide variety of languages to choose from.
  • Participation and engagement, growing real-world developer interest. We want to hear from every user, not just architects. We encourage developers to contribute as a group. The Brazilian User Groups have adopted the JSR concept to forward feedback as a group. To continue the momentum, we have an open JDK adoption group.
  • The freedom can sometimes become a curse. In languages such as .Net where you are more “within boundaries” it is easier to not make the wrong decisions. The different permutations of what dependencies you can use together can make your system become an unproven snowflake. There’s also the notion of Java being an “old and grumpy” language. Though I don’t agree with this notion the rumor can be a bit hurtful. Hopefully, this will change with the new release cadence.
  • It lags behind because it’s used by large enterprises. With slowness comes stability. It lacks some of the niceties of other languages; however, it provides quick wins with fast coding.
  • The main problem will be release fatigue. The increase in cadence means developers have to keep up with new versions of Java. So much going on in open source community it’s hard to get a handle on new APIs, components, projects. Every time you try to learn something new you’re placing a bet with your mindshare on whether or not it will be relevant in a couple of years. 


Original Link

CodeTalk: Red Hat CTO on Jakarta EE, Cloud Native, Kubernetes, and Microservices [Podcast]

Thanks for tuning in to another episode of DZone’s CodeTalk Podcast where hosts Travis Van and Travis Carlson have early conversations with the creators of new developer technologies. For the next few episodes, we’ll be stepping outside of our usual coverage to interview a series of folks on Jakarta EE (the platform formerly known as Java EE) to hear from a number of perspectives on what the switch from Oracle governance to Eclipse means for developers.


Check back every Wednesday (and occasionally on other days, like today!) for a fresh episode, and if you’re interested in being involved as a guest or have feedback for our hosts, scroll down to the bottom for contact information. Also be sure to check out the previous episode in our Jakarta series, where Travis and Travis both spoke with Ian Robinson, IBM WebSphere chief architect and distinguished engineer — who is heavily involved with Jakarta EE as well as the MicroProfile project — to talk about changes afoot for Jakarta EE and implications for the Java community.

CodeTalk Episode 8: Red Hat CTO on Jakarta EE, Cloud Native, Kubernetes, and Microservices

In this episode of CodeTalk, we wrap up our series on Jakarta EE — exploring new technical directions for the platform and the new governance approach being modeled by the Eclipse Foundation and its members. We talk with Mark Little, CTO of JBoss Middleware at Red Hat and long-time participant in the Java ecosystem as well as standards bodies like OASIS.

Key Discussion Points

  • Defining a governance more appropriate for enterprise Java in an Open Source dominated world.
  • Evolving Jakarta EE into a more dynamic, iterative, and adaptable platform for the 21st century.
  • The rise of Linux containers and Kubernetes as a cloud deployment model, and the nature of how Jakarta EE is likely to innovate to support cloud-native use cases.
  • Insights on work being done to make the JVM function better in a Kubernetes and container world, and how that will be reflected up through the Jakarta EE stack.
  • How -aaS models are driving new architecture requirements for no longer assuming that frameworks and business services are co-located.
  • A retrospective on the rise of microservices architecture, dating back to CORBA standards, up through the present, and opportunities for Jakarta EE to evolve for microservices.
  • What does success look like for Jakarta EE? A look at key metrics to watch on the reinvigoration of the platform.

Want More CodeTalk?

We’re still in the early stages here with the relaunch… and to those following along, what I’m about to say will sound familiar. Anyway, we hope to soon give our CodeTalk landing page a facelift and are going to house all future and past episodes in this one place.

For now, stay tuned to DZone for weekly episodes released each Wednesday (and, as is evident, occasionally on other days.) And, if you’d like to contact the showrunners to get involved as an interviewee or just simply share your feedback, feel free to message Travis and Travis here: codetalkpodcast@gmail.com.

Original Link

The Road to Jakarta EE

The Eclipse Foundation is making multiple announcements related to Jakarta EE that includes the unveiling of https://jakarta.ee and the Jakarta EE logo, the results of the developer survey, etc. It’s probably a good time to reflect on how we got here…

At the end of 2016 and in early 2017, I was jokingly using the following slide when discussing Java EE.

I am relatively confident that, at that time, no one would have predicted what would happen. You have to remember that during that period, things were not so simple for Java EE, as questions were being raised about the future of the platform. At that time, Oracle, along with the different JCP experts, was focused on finalizing Java EE 8 and its various APIs.

Relatively early in the process, we also started to think about the future of the platform, i.e. ‘Java EE Next,’ as we informally called the post-Java EE 8 era. The Java EE 9 plans that were shared at JavaOne 2016 weren’t particularly well received, so we went back to the drawing board. It was clear that we had to do something radically different to get the Java EE ecosystem excited and engaged again.

To address all the concerns that had been raised about Java EE through the years, we thought that the platform should, from that point on, evolve in a more open fashion and at a more rapid pace. It was clear that the platform had to adopt an open source model, including a well-established governance. Early in those reflections, Oracle, along with some key players from the ecosystem, namely Red Hat and IBM, decided that the Eclipse Foundation would be the obvious venue to host this radical evolution. One of the many reasons is that Eclipse is already hosting the MicroProfile project, which is itself augmenting the Java EE platform with capabilities geared towards microservices architecture. Shortly after that, additional Java EE players such as Tomitribe, Payara, and Fujitsu joined the initiative. And that, in a nutshell, is how EE4J and Jakarta EE came to life.

Transitioning the development of the platform to the Eclipse Foundation is a huge undertaking. It involves many technical and non-technical aspects, including complex legal issues that I won’t cover here given IANAL! In addition, we are not talking about a small project; we are talking about a large collection of established projects that includes GlassFish, Jersey, Grizzly, Mojarra, and Open MQ, to name just a few. And that’s not all: there are also all the activities related to the opening of the various TCKs. It is simply a huge effort and probably the largest project that the Eclipse Foundation has ever embarked on (see here for some background). This is one of the reasons why it was decided early on that Jakarta EE would use Java EE 8 as its baseline and that older versions of the platform would not be part of Jakarta EE; that approach was simply reasonable and pragmatic. All those efforts are happening as we speak, and while we would all prefer that work to be behind us so that we can effectively focus on the key goal of Jakarta EE, i.e. evolving the platform, we still have to wait until everything is in place. On that note, I recently received the following mail from a well-known community member.

We were discussing a matter unrelated to Jakarta EE, and while I sincerely appreciate the gratitude from that person, I really need to stress something about the whole Jakarta EE effort. Some of us are clearly more visible in the community (e.g. Dmitry Kornilov, who represents Oracle in the PMC, and me as an evangelist), but Jakarta EE is really a team effort on the Oracle side. There are many people working behind the (Oracle) scenes to transition Java EE to the Eclipse Foundation. It is impossible to mention all my colleagues who are, closely or remotely, involved in this effort. The list is simply too long, and I don’t want to take the risk of omitting someone. I do, however, want to publicly acknowledge the work of Ed Bratt, Will Lyons, and Bill Shannon, who deserve a particular mention as they have been working tirelessly since the early days of this effort to make sure Jakarta EE happens! So thanks to you all!

You should also realize that usually, when a project is open-sourced, all the related activities, including all the legal aspects, happen upstream, and it is only when everything is discussed, agreed, and done that the project is made public. But early on, we decided to be as transparent as possible, which is why we announced our initial intent last summer. At that time, lots of things were not yet decided, and that led us to where we are today, i.e. in the early days of Jakarta EE, including the creation of a new but already actively engaged open-source community. A lot of work still needs to happen to properly tackle the ultimate goal of Jakarta EE, i.e. evolving the platform towards an open-source, Java-based, cloud-native foundation that will be relevant for the next decade. The Jakarta EE community is actively working towards that goal, and today’s announcements represent an important initial milestone!

Original Link

CodeTalk: Jakarta EE’s Cloud Native Opportunities [Podcast]

Thanks for tuning in to another episode of DZone’s CodeTalk Podcast where hosts Travis Van and Travis Carlson have early conversations with the creators of new developer technologies. For the next few episodes, we’ll be stepping outside of our usual coverage to interview a series of folks on Jakarta EE (the platform formerly known as Java EE) to hear from a number of perspectives on what the switch from Oracle governance to Eclipse means for developers.


Check back every Wednesday for a fresh episode, and if you’re interested in being involved as a guest or have feedback for our hosts, scroll down to the bottom for contact information. Also be sure to check out the previous episode in our Jakarta series, where Travis and Travis both spoke with Mike Milinkovich, Executive Director of the Eclipse Foundation and Director of the Open Source Initiative, to talk about the Eclipse Foundation and what’s in store for Jakarta EE and the specification process behind it.

CodeTalk Episode 8: Jakarta EE’s Cloud Native Opportunities

Today the Eclipse Foundation raised the curtain on new “Cloud Native Java” aspirations for the evolution of the Jakarta EE platform. In this week’s episode of CodeTalk we speak with IBM WebSphere chief architect and distinguished engineer Ian Robinson — who is heavily involved with Jakarta EE as well as the MicroProfile project — to continue our look at the changes afoot for Jakarta EE and implications for the Java community.

Key Discussion Points

  • Where Java EE has excelled to date for building cloud-native applications vs. opportunity areas for improvement
  • The release cycles and specification processes around Jakarta EE, and how to balance the need to accelerate platform innovation, while at the same time having a “comprehensive enough” suite of tests to meet compatibility and reliability requirements.
  • What can be learned and applied from the MicroProfile project in terms of faster innovation cycles for Jakarta EE.
  • What MicroProfile does, what the original goal of the project is, and where things stand with the project today.

Want More CodeTalk?

We’re still in the early stages here with the relaunch… and to those following along, what I’m about to say will sound familiar. Anyway, we hope to soon give our CodeTalk landing page a facelift and are going to house all future and past episodes in this one place.

For now, stay tuned to DZone for weekly episodes released each Wednesday. And, if you’d like to contact the showrunners to get involved as an interviewee or just simply share your feedback, feel free to message Travis and Travis here: codetalkpodcast@gmail.com.

Original Link

ODI 11g and ODI 12c: What’s an Agent?

What Is an Agent?

An agent is a Java process that’s usually located on the server and listens to a port for incoming requests. It runs the requested scenario, reverse-engineers requested datastores, etc.

When a job is submitted through the ODI Studio GUI or through startscen.sh, the agent gets the scenario from the work repository and the topology definitions from the master repository, then combines and converts them into a runnable job, usually consisting of more than one code block. It then sends the code blocks to the destination environments, which may be DB servers, file servers, Hadoop name nodes, etc. Finally, the agent gets the job statuses from these environments and writes them into the work repository tables for us to see in the Operator tab of ODI Studio.

Agent diagram from the Oracle A-Team blog

Agent Types

Standalone Agent

It is the basic ODI agent. It does not require an application server like the JEE agent does. This agent is easy to configure, start, and stop from the shell. I’ve always used this agent and never tried the other types. It is the most lightweight, lowest-footprint choice.

JEE Agent

When it comes to the Java Enterprise Edition agent, which requires an application server, most documentation mentions WebLogic Server, since it is another Oracle product.

According to Oracle’s certification matrix, ODI 11.1.1.7.0 only supports WLS and does not support Tomcat or other application servers. You may — or may not — manage to get them to run together, but this is not supported.

This agent was first delivered with ODI 11g, and it still exists in ODI 12c.

Some pros of JEE agent are:

  • High availability: Through WebLogic Server’s cluster architecture, even if a node is down, agents may run on other nodes.
  • Configurable connection pooling: The connection pool can be configured through WLS.
  • Monitoring: Oracle Enterprise Manager can monitor, configure, alert on, and manage ODI JEE agents, but a plug-in must be installed to perform these tasks from OEM.

Colocated Agent

This is the newest agent type, arriving with ODI 12c. It’s like a combination of the other two types: the agent is a standalone agent, but it can be monitored and configured through WLS. Unfortunately, it does not take advantage of connection pooling or high availability. The agent lives in the WLS domain and can be managed through WLS — and that’s all. It is lighter than the JEE agent. Companies that prefer the JEE agent in production should choose the colocated agent as their dev/test agent.

Agent types diagram from Gerard Nico’s website

Where to Locate an Agent

To decrease network I/O, it is better to locate the agent on the target DB server. Since the agent submits code to the DB engine, it is better for them to be on the same machine. Don’t forget that ODI is an ELT tool, which means it loads data into the target server and then transforms it there. So, most of the load will be on the target server, which also means that most of the code will be submitted to the target server.

Also, since an agent is a local Java process, the agent will write files to the machine on which it is set up. If you have a file server other than the DB server, it’s better to have another agent on the file server to handle file read/write processes. Mounting the file server directory on the DB server and setting up only one agent is another solution.

Also, these solutions will prevent any firewall-related problems. 

Thanks for reading — don’t forget to share and comment!

Original Link

Connecting Elasticsearch Directly to your Java EE Application

The trendy term big data comes from the 3 Vs: volume, variety, and velocity. Volume refers to the size of the data, variety refers to the diverse types of data, and velocity refers to the speed of data processing. To persist big data, there are NoSQL databases that write and read data faster. But with such diversity in such a vast volume, finding information without a search engine requires significant computing power and takes too much time. A search engine is a software system designed to search for information; this mechanism makes it more straightforward and clearer for users to get the information they want.

This article will cover Elasticsearch, a NoSQL database that is both a document store and a search engine.

Elasticsearch is a NoSQL document store and a search engine based on Lucene. It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. It is the most popular enterprise search engine, followed by Apache Solr, which is also based on Lucene. It is a near-real-time search platform, meaning there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.

Steps in a Search Engine

In Elasticsearch, a search is based on the analyzer, which is a package containing three lower-level building blocks: character filters, tokenizers, and token filters. According to the Elasticsearch documentation, the definitions are:

  • A character filter receives the original text as a stream of characters and can transform the stream by adding, removing, or changing characters. For instance, a character filter could be used to convert Hindu-Arabic numerals into their Arabic-Latin equivalents or to strip HTML elements from the stream.

  • A tokenizer receives a stream of characters, breaks it up into individual tokens (usually individual words), and outputs a stream of tokens. For instance, a whitespace tokenizer breaks the text into tokens whenever it sees any whitespace. It would convert the text “Quick brown fox!” into the terms [Quick, brown, fox!].

  • A token filter receives the token stream and may add, remove, or change tokens. For example, a lowercase token filter converts all tokens to lowercase, a stop token filter removes common words (stop words) like the from the token stream, and a synonym token filter introduces synonyms into the token stream.

How to Install Elasticsearch in Docker

The first step to using ES is to install it. You can install it either manually or through Docker. The easiest way is with Docker, using the command below:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.3

Elasticsearch and Java EE Working Together

Eclipse JNoSQL is the bridge between these platforms (Java EE and the search engine). An important point to remember is that Elasticsearch is also a NoSQL document store, so a developer may model the application as such. To use both the standard document behavior and the Elasticsearch API, a programmer needs to use the Elasticsearch extension.

<dependency>
    <groupId>org.jnosql.artemis</groupId>
    <artifactId>elasticsearch-extension</artifactId>
    <version>0.0.5</version>
</dependency>

For this demo, we’ll create a contacts agenda for developers, each of whom has a name, an address, and, of course, the languages they know. An address has its own fields and becomes a subdocument, that is, a document inside a document.

@Entity("developer")
public class Developer { @Id private Long id; @Column private String name; @Column private List < String > phones; @Column private List < String > languages; @Column private Address address;
} @Embeddable
public class Address { @Column private String street; @Column private String city; @Column private Integer number; }

With the model defined, let’s set up the mapping. Mapping is the process of determining how a document and the fields it contains are stored and indexed. For this example, most fields are of type keyword, which makes them searchable only by their exact value. There is also the languages field, which we define as text with a custom analyzer. This custom analyzer, whitespace_analyzer, has one tokenizer, whitespace, and three filters (standard, lowercase, and asciifolding).

{ "settings": { "analysis": { "filter": { }, "analyzer": { "whitespace_analyzer": { "type": "custom", "tokenizer": "whitespace", "filter": [ "standard", "lowercase", "asciifolding" ] } } } }, "mappings": { "developer": { "properties": { "name": { "type": "keyword" }, "languages": { "type": "text", "analyzer": "whitespace_analyzer" }, "phones": { "type": "keyword" }, "address": { "properties": { "street": { "type": "text" }, "city": { "type": "text" }, "number": { "type": "integer" } } } } } }
}

With the API, the developer can perform the basic operations of a document NoSQL database — at least, CRUD. However, in ES, the search engine behavior is what matters and is most useful. That is why it has an extension.

public class App {

    public static void main(String[] args) {
        Random random = new Random();
        Long id = random.nextLong();

        try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
            Address address = Address.builder()
                    .withCity("Salvador")
                    .withStreet("Rua Engenheiro Jose")
                    .withNumber(10).build();

            Developer developer = Developer.builder()
                    .withPhones(Arrays.asList("85 85 343435684", "55 11 123448684"))
                    .withName("Poliana Lovelace")
                    .withId(id)
                    .withAddress(address)
                    .build();

            DocumentTemplate documentTemplate = container.select(DocumentTemplate.class).get();
            Developer saved = documentTemplate.insert(developer);
            System.out.println("Developer saved" + saved);

            DocumentQuery query = select().from("developer")
                    .where("_id").eq(id).build();
            Optional<Developer> personOptional = documentTemplate.singleResult(query);
            System.out.println("Entity found: " + personOptional);
        }
    }

    private App() {}
}

From the Elasticsearch extension, the user might use the QueryBuilders, a utility class to create search queries in the database.

public class App3 {

    public static void main(String[] args) throws InterruptedException {
        try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
            Random random = new Random();
            long id = random.nextLong();

            Address address = Address.builder()
                    .withCity("São Paulo")
                    .withStreet("Av. nove de Julho 1854")
                    .withNumber(10).build();

            Developer developer = Developer.builder()
                    .withPhones(Arrays.asList("85 85 343435684", "55 11 123448684"))
                    .withName("Maria Lovelace")
                    .withId(id)
                    .withAddress(address)
                    .withLanguage("Java SE")
                    .withLanguage("Java EE")
                    .build();

            ElasticsearchTemplate template = container.select(ElasticsearchTemplate.class).get();
            Developer saved = template.insert(developer);
            System.out.println("Developer saved" + saved);

            TimeUnit.SECONDS.sleep(2L);

            TermQueryBuilder query = QueryBuilders.termQuery("phones", "85 85 343435684");
            List<Developer> people = template.search(query);
            System.out.println("Entity found from phone: " + people);

            people = template.search(QueryBuilders.termQuery("languages", "java"));
            System.out.println("Entity found from languages: " + people);
        }
    }

    private App3() {}
}

Conclusion

An intuitive way to find data is essential in an enterprise application, especially when the software handles a massive volume with several kinds of data. Elasticsearch can help the Java EE world with both NoSQL documents and a search engine. This post covered how to join the best of these two worlds using Eclipse JNoSQL.

Original Link

Run and Debug a WildFly Swarm App From NetBeans

Java EE developers using NetBeans are used to being able to run and debug their thin-WAR applications in their application server of choice directly from NetBeans. When developing microservices packaged as über- or hollow-JARs, you expect the same effortless way of running and debugging. The good news is that you can. In this post, I show step-by-step how to run and debug the WildFly Swarm version of CloudEE Duke in NetBeans.

Run WildFly Swarm Application

The easiest way of running CloudEE Duke in NetBeans is to edit the Run project action for the project. Right-click on CloudEE Duke, select Properties, then Actions, as shown below.


Configure Execute Goals to package wildfly-swarm:run, remove all the default properties, and you’re all set. Run Project (F6) will start the application using the WildFly Swarm Maven plugin.
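
For this to work, the project’s pom.xml must declare the WildFly Swarm Maven plugin. A minimal sketch (the version shown is illustrative; use the release your project targets):

<plugin>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>wildfly-swarm-plugin</artifactId>
    <version>2018.5.0</version>
</plugin>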

Debug WildFly Swarm Application

To enable debugging, you follow the same steps as described above, but in this case, it is the Debug Project action you select.


Execute Goals is configured the same way as for Run, but under Set Properties, you need to configure a debug port for WildFly Swarm. This is done by setting the swarm.debug.port property, e.g. to 9000.

Debug Project (Ctrl-F5) will start the application in debug mode. Note that the execution will halt while waiting for the debugger to attach. See the screenshot below to see how it will look in the log.


Select Debug->Attach Debugger from the menu in NetBeans. Change the value for Port to 9000 (or the value you chose in the previous step) and click OK.


To verify the setup, set a breakpoint at line 16 in the class HelloWorldEndpoint.


Then navigate to http://localhost:8080/hello. The execution will stop at the breakpoint at line 16 in HelloWorldEndpoint.


Original Link

DZone Research: The Most Significant Changes to the Java Ecosystem

To gather insights on the current and future state of the Java ecosystem, we talked to executives from 14 companies. We began by asking, “What have been the most significant changes to the Java ecosystem in the past year?” Here’s what the respondents told us:

Release Cadence

  • 1) The move to half-yearly releases. This will be a major boost to the Java language, given that Java today is lagging behind more advanced languages such as C#, and JavaScript in its way. This change will enable Java to be much more agile in its approach to the evolution of the language. 2) Kotlin: Kotlin is probably the competition that Java needs to start evolving faster in terms of code conciseness. Combined with half-yearly releases, the competition will drive Java to be a much better language. I believe that in two years, Java evolution will increase in a quantum leap. 
  • 2017 was the first year with two major releases – SE9 in September with modularity and Java EE8. We’re introducing a faster release cadence. EE4J is transitioning to Eclipse. And Java EE has been renamed Jakarta EE. 
  • Java moving to six-month release cycle. Making Java EE open source and letting the community decide how it evolves. 
  • To me the most significant change is the change of the release cycle to go to a 6-month release cadence. This change has the potential to have a huge impact on future releases and the evolution of Java as a language.
  • Increase in the cadence of releases to every six months to a year. In my opinion, three years is too long, and six months is too short. Java 10 is trivial.

Java EE = Jakarta EE

  • Java and open source are morphing together. Enterprise Java standards moving from the JCP to Eclipse is significant. The MicroProfile movement is helping to build elements that don’t exist – Red Hat, IBM, Tomitribe, and Java EE.
  • Java EE moving to the Eclipse Foundation. Java 9 modules in conflict with Java 8 but they are optional to adopt. Useful for the IoT environment. May not be widely adopted. Choose the type of JDK you want for shorter run times in smaller devices and container applications and smaller deliverables. 
  • Evolution from GlassFish to Eclipse is the most significant change and puts the future of enterprise Java in the hands of the community. IBM has provided open source Liberty fit for purpose deploy. You can ship the full EE or just what’s needed. We’ve taken the open-sourced JVM from proprietary to a full open source stack with more than four million lines of code. We now have open conversations with the open source community. The release train of every six months, with every three-year release being designated for long-term support, enables developers to get features more quickly and enables enterprises to know what will be supported long-term. Java 9 provided modularity and has changed how apps can be built.
  • The “open sourcing” of Java EE to the Eclipse Foundation, creating EE4J and now the birth of Jakarta EE.   Java has been the dominant player in enterprise applications for two decades now. Jakarta EE ensures that Java will continue to be the dominant player for enterprise computing for a long time to come. 
  • EE. It was a mess with in-fighting, bickering, and Oracle ending support. The Eclipse Foundation and Java EE champions stepped up and took ownership which is for the best.

Other

  • Cool new features – Lambdas and tooling are evolving.
  • The change in use cases for Java around databases, big data, and Hadoop. Today, Java is the language of choice for microservices.
  • Java has been adding new language features that have allowed it to become more functional. Functional programming makes a more powerful form of programming possible. It’s different from imperative programming, which is a more ‘normal’ type of programming. Where in imperative programming you say you want something to do ‘A, B, C, and D,’ in functional programming, you describe a series of ways that you want to transform your data. Functional programming has been around for over 30 years yet has largely failed to become mainstream. It’s becoming more mainstream now, however, in part because Java has added support for functional features, which is allowing functional practices to become more mainstream.
  • Introduction and traction of microservices with more companies migrating to them.


Original Link

Getting to Know JSON-P 1.1 (Part 1)

Java EE 8 includes an update to the JSON Processing API that brings it up to date with the latest IETF standards for JSON. They are JSON Pointer (RFC 6901), JSON Patch (RFC 6902), and JSON Merge Patch (RFC 7396).

I will cover these topics in this mini-series.

Getting Started

To get started with JSON-P, you will need the following dependencies from the Maven central repository.

<dependency>
    <groupId>javax.json</groupId>
    <artifactId>javax.json-api</artifactId>
    <version>1.1</version>
</dependency>

<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1</version>
</dependency>

JSON-Pointer

A JSON Pointer defines a string expression that references an element within the hierarchical structure of a JSON document. With a JSON pointer expression, you can access and manipulate a JSON document by retrieving, adding, removing, and replacing an element or value referenced by the expression.

The entry API is the javax.json.JsonPointer interface. An instance is created by calling the static factory method createPointer(String expression) on the javax.json.Json class and passing it the pointer expression.

Retrieve a Value

If you have the JSON document below and you want to retrieve the value of the title element, you create the JSON Pointer expression /title.

{ "title":"Java EE: Only What's New", "author":"Alex Theedom", "chapters":[ "Chapter 1: Java EE 8 What’s New Overview", "Chapter 2: Java API for JSON Binding 1.0 (JSR 367)", "Chapter 3: Java EE Security API 1.0 (JSR 375)" ], "released":true, "pages":300, "sourceCode":{ "repositoryName":"Java-EE-8-Only-Whats-New", "url":"github.com/readlearncode/" }, "otherBooks":[ { "title":"Professional Java EE Design Patterns", "length":350 } ]
} JsonObject jsonObject = ... create JSONObject from JSON document ...;
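
One way to create that JsonObject is with the JSON-P reader API. A minimal sketch, assuming the document above is held in a String named json:

// assumes: import java.io.StringReader;
//          import javax.json.Json;
//          import javax.json.JsonObject;
//          import javax.json.JsonReader;
JsonObject jsonObject;
try (JsonReader reader = Json.createReader(new StringReader(json))) {
    // readObject() parses the stream into an immutable JsonObject
    jsonObject = reader.readObject();
}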

The code snippet below creates a JsonPointer referencing the title element. It then calls the getValue() method, passing in the JsonObject to query.

JsonValue jsonValue = Json.createPointer("/title").getValue(jsonObject);

Add a Value

To add (or insert) a value to a JSON document, follow the same logic as retrieval by using a JSON pointer expression to identify the insertion point within the document. The following code snippet adds a new “category”: “Programming” JSON object to the root of the document.

JsonObject modifiedObject = Json
    .createPointer("/category")
    .add(jsonObject, Json.createValue("Programming"));

The JsonObject returned is the entire modified object.

Remove a Value

The removal process requires the location of the value to remove, expressed as a JSON Pointer expression. The code snippet below removes the title element and returns the modified JSON document as a JsonStructure instance.

JsonStructure jsonStructure = Json.createPointer("/title").remove(jsonObject);

Replace a Value

To replace a value, use the JSON pointer expression of the element to replace. Then, the replacement element is passed to the replace() method. The code snippet below replaces the title element’s value and returns the modified JSON document.

JsonStructure jsonStructure = Json
    .createPointer("/title")
    .replace(jsonObject, Json.createValue("Java EE 8"));

Test a Value

The existence of a value at a location can be tested with the containsValue() method. The code snippet below tests to see if there is a value at the location expressed by the JSON Pointer expression /doesNotExist.

boolean containsValue = Json
    .createPointer("/doesNotExist")
    .containsValue(jsonObject);

Original Link

A Detailed Guide to EJBs With Code Examples

By 1996, Java had already become popular among developers for its friendly APIs and automatic garbage collection, and it was starting to be widely used in back-end systems. One problem, however, was that most of these systems needed the same set of standard capabilities — such as persistence, transaction integrity, and concurrency control — which the JDK lacked at that time. That, naturally, led to many home-grown, closed implementations.

IBM stepped forward and released the Enterprise Java Bean (EJB) specification in 1997, with the promise that developers could write code in a standard way, with many of the common concerns automatically handled.

That’s how the first Java framework for the enterprise was born; the specification was later adopted by Sun in 1999 as EJB 1.0.

Fast forward twenty years, and EJB 3.2 is now a subset of the Java EE 8 specification.

What Is an Enterprise Java Bean

Simply put, an Enterprise Java Bean is a Java class with one or more annotations from the EJB spec which grant the class special powers when running inside of an EJB container. In the following sections, we’ll discuss what these powers are and how to leverage them in your programs.

A side note – annotations in EJB are relatively new, having been available since EJB 3.0. Previous versions of EJB required classes to implement interfaces instead. I’m not going to cover that in this article.

JNDI Names

JNDI, or Java Naming and Directory Interface, is a directory service that allows the lookup of resources. Every resource, such as an EJB, a DataSource, or a JMS queue running on an application server, is given a JNDI name, which is used to locate the resource.

All servers have a default scheme for assigning JNDI names, but it can be overridden to provide custom names. The general convention is {resourceType}/{resourceName}. For example, a DataSource’s JNDI name can be jdbc/TestDatabase, and a JMS queue can have jms/TestQueue as its JNDI name.
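
As a quick sketch of how such a name is resolved in code (using the jdbc/TestDatabase name assumed above):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupExample {

    public DataSource lookupDataSource() throws NamingException {
        // Resolve the resource registered under the JNDI name
        Context context = new InitialContext();
        return (DataSource) context.lookup("jdbc/TestDatabase");
    }
}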

Types of Enterprise Beans

Let’s now go a bit deeper into the specifics of Enterprise beans:

  • Session Beans
  • Message-Driven Beans

Session Beans

A session bean encapsulates business logic that can be invoked programmatically by a client. The invocation can be done locally by another class in the same JVM or remotely over the network from another JVM. The bean performs the task for the client, abstracting its complexity similar to a web service, for example.

The lifecycle of a session bean instance is, naturally, managed by the EJB container. Depending on how they’re managed, session beans can be in one of the following states:

As the name suggests, Stateless beans don’t have any state. As such, they are shared by multiple clients. They can be singletons, but in most implementations, containers create an instance pool of stateless EJBs. And, since there is no state to maintain, they’re fast and easily managed by the container.

As a downside, owing to the shared nature of the bean, developers are responsible for ensuring that they are thread safe.

Stateful beans are unique to each client; they represent a client’s state. Because the client interacts (“talks”) with its bean, this state is often called the conversational state. Just like stateless beans, their instance lifecycle is managed by the container; they’re also destroyed when the client terminates.

A Singleton session bean is instantiated once per application and exists for the lifecycle of the application. Singleton session beans are designed for circumstances in which state must be shared across all clients. Similar to Stateless beans, developers must ensure that singletons are thread safe. However, concurrency control is different between these different types of beans, as we’ll discuss further on.

Now, let’s get practical and write some code. Here, we’re going to create a Maven project with a packaging type of ejb, with a dependency on javaee-api:

<project ...>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.stackify</groupId>
    <artifactId>ejb-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>ejb</packaging>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>8.0</version>
        </dependency>
    </dependencies>
</project>

Alternatively, we could include the target server runtime dependency instead of the JavaEE APIs, but that does reduce portability between different containers.

Modern-day EJB is easy to configure, hence writing an EJB class is just a matter of adding an annotation, i.e. @Stateless, @Stateful, or @Singleton. These annotations come from the javax.ejb package:

@Stateless
public class TestStatelessEjb {

    public String sayHello(String name) {
        return "Hello, " + name + "!";
    }
}

Or:

@Stateful
public class TestStatefulEjb {
}

Finally:

@Singleton
public class TestSingletonEjb {
}

There’s also a javax.inject.Singleton annotation, but since that’s a part of the CDI spec, we need to be aware of the difference if we’re going to use it.

Message-Driven Beans

A message-driven bean, or MDB, is an enterprise bean that allows you to process messages asynchronously. This type of bean normally acts as a JMS message listener, which is similar to an event listener except that it receives JMS messages instead of events.

They are in many ways similar to a Stateless session bean, but they are not invoked by a client. Instead, they are event-driven:

@MessageDriven(mappedName = "jms/TestQueue")
public class TestMessageDrivenBean implements MessageListener {

    @Resource
    MessageDrivenContext messageDrivenContext;

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                TextMessage msg = (TextMessage) message;
                msg.getText();
            }
        } catch (JMSException e) {
            messageDrivenContext.setRollbackOnly();
        }
    }
}

Here, the mapped name is the JNDI name of the JMS queue that this MDB is listening to. When a message arrives, the container calls the message-driven bean’s onMessage method to process the message. The onMessage method normally casts the message to one of the five JMS message types and handles it in accordance with the application’s business logic. The onMessage method can call helper methods or can invoke a session bean to process the information in the message.

A message can be delivered to a message-driven bean within a transaction context, so all operations within the onMessage method are part of a single transaction. If message processing is rolled back, the message will be redelivered.

Accessing Enterprise Beans

As discussed before, MDBs are event-driven, so in this section, we’ll talk about how to access and invoke methods of session beans.

To invoke the methods of an EJB locally, the bean can be injected in any managed class running in the container — say a Servlet:

public class TestServlet extends HttpServlet {

    @EJB
    TestStatelessEjb testStatelessEjb;

    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        testStatelessEjb.sayHello("Stackify Reader");
    }
}

Invoking the method from a remote JVM is trickier and requires a bit more code. As a prerequisite, the EJB must implement a remote interface to enable remoting capabilities. You will need to write an EJB client which performs a lookup over the network.

The interface is annotated with @Remote:

@Remote
public interface TestStatelessEjbRemote {
    String sayHello(String name);
}

Make sure that the TestStatelessEjb implements this interface.
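For completeness, this is what the bean looks like once it implements the remote interface:

@Stateless
public class TestStatelessEjb implements TestStatelessEjbRemote {

    public String sayHello(String name) {
        return "Hello, " + name + "!";
    }
}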

Now let’s write the client which in this case would just be a simple Java SE application with the main method:

public class TestEjbClient {

    public static void main(String[] args) throws NamingException {
        Properties properties = new Properties();
        properties.setProperty(Context.INITIAL_CONTEXT_FACTORY,
            "org.apache.openejb.client.LocalInitialContextFactory");
        properties.setProperty(Context.PROVIDER_URL, "ejbd://host:4201");

        Context context = new InitialContext(properties);
        TestStatelessEjbRemote testStatelessEjbRemote =
            (TestStatelessEjbRemote) context.lookup("ejb/TestStatelessEjbRemote");
        testStatelessEjbRemote.sayHello("Stackify");
    }
}

First, we created a Context with properties referring to the remote JVM. The initial context factory name and the provider URL used here are defaults for OpenEJB and will vary from server to server.

Then we performed a lookup of the EJB using the bean’s JNDI name and typecast it to the desired remote type. Once we got the remote EJB instance, we were able to invoke its method.

Note that you’ll need two JAR files in the classpath of your client:

  • One containing the initial context factory class. This will vary from server to server.
  • Another containing the remote interface of your EJB.

As it happens, the Maven EJB plugin will generate a client JAR file containing only the remote interfaces. You just need to configure the plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-ejb-plugin</artifactId>
    <version>3.0.0</version>
    <configuration>
        <!-- this is false by default -->
        <generateClient>true</generateClient>
    </configuration>
</plugin>

In the case of stateful beans, a new instance of the bean is returned every time a client performs a lookup. In the case of stateless beans, any bean from the pool is returned.

Concurrency in Singleton Beans

With both stateless and stateful enterprise beans, methods can be invoked concurrently by multiple clients or by multiple threads from the same client. In the case of singleton enterprise beans, however, the default mode is LockType.WRITE. This means that only one thread is allowed to invoke a method at a time.

That can be changed by adding the @Lock annotation over a method and setting the lock type to LockType.READ:

@Singleton
public class TestSingletonEjb {

    @Lock(LockType.READ)
    public String sayHello(String name) {
        return "Hello, " + name + "!";
    }
}

This fine-grained, method-level concurrency management allows developers to build robust multi-threaded applications without having to deal with actual threads.

Say we have a Map instance variable in a singleton EJB. Most clients read from the Map, but a few put elements into it. Marking the get method with lock type READ and the put method with lock type WRITE makes for a perfect implementation:

@Singleton
public class TestSingletonEjb {

    private Map<String, String> elements;

    public TestSingletonEjb() {
        this.elements = new HashMap<>();
    }

    @Lock(LockType.READ)
    public String getElement(String key) {
        return elements.get(key);
    }

    @Lock(LockType.WRITE)
    public void addElement(String key, String value) {
        elements.put(key, value);
    }
}

A write lock locks the whole class, so while the map is being updated in the addElement method, all threads trying to access getElement will also be blocked.
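If waiting indefinitely for the write lock is unacceptable, the wait can be bounded with javax.ejb.AccessTimeout; a minimal sketch, where the five-second value is just illustrative:

@Lock(LockType.READ)
@AccessTimeout(value = 5, unit = TimeUnit.SECONDS)
public String getElement(String key) {
    // A caller that cannot acquire the lock within five seconds
    // receives a javax.ejb.ConcurrentAccessTimeoutException
    return elements.get(key);
}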

EJB Timers

Running scheduled jobs in EJB is as simple as it gets: add the @Schedule annotation over the method that needs to be invoked. Parameters of this annotation configure when the timer will be executed:

@Singleton
public class TestScheduleBean {

    @Schedule(hour = "23", minute = "55")
    void scheduleMe() {
    }
}

Note here that the EJB is a Singleton. This is important because only singleton beans guarantee that a single instance of the bean will be created, and we don’t want our scheduler to be fired from multiple instances.
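The @Schedule annotation also supports full calendar-based expressions. As a sketch (the bean name and schedule values are illustrative), here is a job that runs every weekday at 9:00:

@Singleton
public class WeekdayReportBean {

    // persistent = false means the timer does not survive server restarts
    @Schedule(dayOfWeek = "Mon-Fri", hour = "9", minute = "0", persistent = false)
    void generateReport() {
        // business logic for the report goes here
    }
}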

Conclusion

Although Spring has gained a lot of traction in the enterprise development world, EJB is still very relevant and quite powerful. Out-of-the-box remoting capabilities and concurrency management are still exclusive to enterprise beans; JMS and JPA are part of the Java EE spec as well and hence treated as first-class citizens in EJB.

EJB has certainly evolved beyond its previous limitations and has re-invented itself into a modern and powerful tool in the rich Java ecosystem.

Original Link

The Relationship Between Jakarta EE, EE4J, and Java EE

The Jakarta EE name has been out for about a month, and even though Mike Milinkovich explained the names and concepts pretty well in his blog post And the Name Is…, there is still a bit of confusion about how it all relates, and I get questions about it whenever the topic comes up. I have tried to sum up some of it here. Hope it helps!

Java EE

Java EE (or Java Platform, Enterprise Edition) is the name of the current platform governed by the Java Community Process (JCP). The latest version is Java EE 8, which was released in September 2017.

Jakarta EE

Jakarta EE is the name of the platform governed by the Jakarta EE Working Group. The first version will be Jakarta EE 8, which will be based on the Java EE 8 technologies transferred from Oracle to the Eclipse Foundation.

EE4J

Eclipse Enterprise for Java (EE4J) is the top level project in the Eclipse Foundation for all the projects for creating the standards that will form the base for Jakarta EE. The EE4J Project Management Committee (PMC) is responsible for maintaining the overall vision for the top level project. It will set the standards and requirements for releases and help the projects communicate and cooperate.

Summing Up

Jakarta EE does not replace Java EE! It is the name for the platform evolving with Java EE 8 as a starting point. Java EE 8 will still exist, but there will not be any new versions of the platform.

Jakarta EE does not replace EE4J! It is the name of the platform based on the EE4J projects with Java EE 8 as a starting point.

Original Link

CodeTalk: Where Java EE Lost Its Pace, Reasons to Be Optimistic About Jakarta [Podcast]

Thanks for tuning in to another episode of DZone’s CodeTalk Podcast, where you’ll hear hosts Travis Van and Travis Carlson have early conversations with the creators of new developer technologies.

Check back every Wednesday for a fresh episode, and if you’re interested in being involved as a guest or have feedback for our hosts scroll down to the bottom for contact information.

CodeTalk Episode 5: Where Java EE Lost Its Pace, New Reasons to Be Optimistic 

Josh Juneau is a Java champion whose blog post observations last year drew a lot of attention as they framed the stagnation of innovation in Java EE. Under the JCP process, Java EE lost its way, and the grumblings of the community have increased in key areas of stagnation (around cloud deployment, in particular).

Key Discussion Points

In this episode of DZone’s CodeTalk, we sync up with Juneau to hear his take on:

  • When and why Java EE 8 progress stalled after work had begun
  • Reasons to be optimistic about the new governance by the Eclipse Foundation, which is expected this spring to unveil its strategy for Jakarta
  • Some misconceptions about Java EE, and positive breakthroughs in Java EE 8
  • Key technology areas where there is growing consensus on how Jakarta should evolve
  • Ways that Jakarta is expected to become more “community-driven” and less of a by-committee, black-box specification process

Want More CodeTalk?

We’re still in the early stages here with the relaunch. But soon we hope to give our CodeTalk landing page a facelift and are going to house all future and past episodes in this one place. 

For now, stay tuned to DZone for weekly episodes released each Wednesday. And, if you’d like to contact the showrunners to get involved as an interviewee or just simply share your feedback, feel free to message Travis and Travis here: codetalkpodcast@gmail.com.

Original Link

How Java EE Can Get Its Groove Back

One of the most intriguing developments in the Java landscape is the transition of governance of the Java EE platform from Oracle’s JCP process to Eclipse Foundation. We’re anticipating a reveal this summer of more details around the technical directions of Java EE from this new governance. To help DZone readers understand some of the key considerations ahead of how Java EE stays relevant in the new, distributed computing, cloud-native trends in enterprise computing, we caught up with Lightbend CTO, Akka creator and original author of the Reactive Manifesto, Jonas Bonér.

DZone: Other JVM languages like Scala — and the many frameworks that target distributed systems challenges, like Akka — saw an opportunity to tackle use cases in new ways beyond the classic Java / Java EE approach. What would you say are some of the key ways that the Java ecosystem needs to evolve to keep the Java / Java EE stack capabilities moving forward?

JB: When it comes to high-level abstractions for distributed computing and concurrency, Java EE has fallen off the pace, and Java programmers either have to resort to quite low level and primitive programming models, or they have to bring in a third-party library like Akka to tackle these challenges.

Java EE has also to a large extent missed the train on streaming data, the concept of data-in-motion. The good news is that there are new initiatives in front of the Eclipse Foundation—proposals trying to address these shortcomings, using the Reactive Streams specification—and it seems likely that some of these will make it through the Jakarta EE process.

There are also proposals in front of the JDK itself, for example, a Reactive Streams-based (available in the JDK as the java.util.concurrent.Flow API) version of java.util.stream. Having a native implementation of Reactive Streams in the JDK would make it easier to build reactive and stream-based JDK components on top, for example async HTTP, async JDBC, and support for streaming in WebSockets.

We also have proposals—currently discussed in the MicroProfile group — that are trying to push Java more into the event-driven space. The JMS and Message Driven Beans specs are very outdated and there’s a need for a new messaging standard that more fully understands this new world of event-driven systems, real-time data, and data-in-motion. More details about these proposals can be found in this article.

DZone: There are a huge number of systems out there today running on Java—especially when you think about arenas like financial services and other major systems built for huge scale, etc. where the JVM offers so much stability. How would you describe the sorts of modernization efforts that you are seeing at big enterprises in how they keep trying to extend the lives of these systems and what their modernization projects typically look like?

JB: Most people who want to modernize their applications hit the ceiling when it comes to the monolith. The monolith can strangle productivity, time to market, development time, and getting features out to customers. You can reach a threshold where you have to coordinate too many things across too many teams in lock-step in order to get features rolled out at all; when this happens, the whole development organization slows down to a halt. This forces many organizations to move to microservices, where they can have autonomous teams delivering features independently of each other.

There are many ways you can do microservices. But the naïve way of chopping up services, turning method calls into synchronous RPC calls, and trying to maintain strong consistency of data across services maintains the strong coupling that microservices can liberate you from. So, what’s wrong with that? What’s wrong with it is that you’ve now paid the technical cost associated with microservices—more expensive communication between components, higher chances and rates of failure—but you haven’t got any of the technical benefits. So from this perspective, you’re now worse off than you were with the monolith.

If you fully embrace the fact that you now have a distributed system, embrace eventual consistency and asynchronous communication/coordination, then you are in a position to take advantage of the benefits of moving to the cloud: loose coupling, system elasticity and scalability, and a higher degree of availability. Here, an event-driven and reactive design can really help, which is why I believe that Java EE needs to embrace event-driven messaging and reactive.

In cloud environments, it’s pay-as-you-go. The problem with operating a monolith is that the granularity of how you can operate your system is a system of “one”—which is too coarse-grained, making it very expensive and hard to scale. By splitting the system up into multiple independent and autonomous services, you can scale different services according to their different needs. For example, one service might need 10x of the memory of another service, or 20x of the CPU processing at peak times. A microservices-based design allows you to fine-tune each of these services to their specific resource needs, independently. Whereas with the monolith, you need to scale your infrastructure based on your highest demand piece (with limited options for scaling down during low-traffic times), which means that you have to pay for much more hardware resources than you actually need. So we’re seeing scale and costs as the two main imperatives for modernization to the cloud for big enterprise Java shops.

DZone: What do you think about the Eclipse Foundation taking over stewardship of Java EE from Oracle, and the opportunities that they have to rethink its governance and pace of innovation?

JB: I’m excited about it after talking to the Jakarta EE leads. It’s clear that what they want to do now is focus on innovation—which has been a problem in the JCP. And they want to do that by borrowing the great and proven ideas from Open Source models: focusing on code first—encouraging experimentation and working code that’s been tested and vetted by many contributors—and more of a focus on an open process—not “design by committee” like the JCP the last 20 years, which we all know doesn’t work. Closed processes, working in a closed room with limited connection to reality, don’t get in touch with the realities of usage until the very end. This new proposed open governance model has made it really exciting not only for the users of Java EE, but the vendors like us that want to contribute.

DZone: You are very focused on streaming data and the path to systems that are always-on, built for data in motion. How is this changing the game for both the developers and the operators supporting those types of data-driven systems? How does this trend from batch to streaming likely change the JVM ecosystem ongoing?

JB: Most of the APIs working with data in the JDK and Java EE are designed around a world where data is at rest. The problem is that the world we live in today is radically different: most systems today need a way to manage massive amounts of data that is in motion, and often need to do this in a close to real-time fashion. In order to do this, we need to fully embrace streaming as a first-class concept. The APIs in Java need to evolve to treat streams as values, have high-level DSLs for managing streams (transforming, joining, splitting, etc.), and have APIs for consuming and producing streams of data, with mechanisms for flow control/backpressure.

With streaming, there are also huge implications on DevOps/Operations for things like monitoring, visualization, and general transparency into the system—how events flow, where bottlenecks arise, how to fine-tune the data pipelines, and so on. As soon as you have continuous streams that might never end, these things become a real challenge, and it’s still an area where we as an industry—not just in Java—are lacking in standards and good tools.

Also, the move towards microservices and making Java applications more appealing for the microservices world—running JVMs in Docker containers, deployed by Kubernetes, and the likes—requires a focus on reducing memory footprint. In relation to alternatives like Node.js and Golang, Java consumes too much memory and that needs to be a target for improvement.

DZone: You had an interesting take that “streaming is the new integration”—what do you mean by that?

JB: Historically, approaches to enterprise integration all come from the legacy messaging tradition and products like Tibco and WebSphere MQ. It’s mainly cast in the direction that messages flow between different systems as one-offs, one message at a time—a view that was maintained by ESBs and SOA. But that approach doesn’t really work well with streams of data, in particular when working with streams that potentially never end.

There are endless amounts of data we need to get into systems these days. Mobile users alone produce massive amounts of streaming data, and we have the upcoming wave of IoT just around the corner. How do you deal with all these streams of data? You need better tools for ingesting, joining, splitting, transforming, mining knowledge from, and passing streams of data on to other systems and users—which calls for a new type of integration DSLs and Enterprise Integration Patterns (EIPs). Viktor Klang and I recently wrote an article on the subject, discussing these challenges and opportunities in more detail.

One of the issues is flow control, support for backpressure between different producers of streaming data and their consumers. Here, Reactive Streams is an excellent protocol to lean on, giving us a standardized way for realizing and controlling backpressure between different systems, products, and libraries, and has proved to be a great foundation for doing integration in a fully stream-oriented and asynchronous fashion. One great example of this new approach to enterprise integration is the Alpakka project.

Original Link

A Personal Opinion on the Future of Jakarta EE

The following is just my personal opinion of where Jakarta EE may be going. I am far removed from the decision makers and this opinion is based on what I have read and my experience as a teacher. Feel free to label me a crackpot.

I began to teach J2EE in 2002. It was a mess. Most of my time was spent explaining the purpose of the numerous XML files and the three strange bean types: stateful session, stateless session, and entity beans. Of course, this was in the days when it was thought that programs would be frequently reconfigured via their XML files and that pooling the strange beans was the state of the art. When I got a copy of Rod Johnson’s book J2EE Development without EJB, I thought that the concept was intriguing. However, I was required to teach what was most commonly used in industry, and at that time, it was J2EE.

Fast forward to today and much of what Rod rallied against and that led to the creation of Spring has changed. While some may disagree, I find Java EE 7 easier to use and easier to teach than Spring.

All hell broke loose last year with the much-delayed release of Java EE 8. Components that we were looking forward to were dropped. The IDEs were slow to upgrade to Java EE 8. Then came the bombshell that Oracle was going to drop their stewardship of Java EE.

The Guardians came into being just before the release of Java EE 8, when Oracle announced that it was dropping certain popular new components. While I could speculate why Oracle did this and complain bitterly, there is nothing to be gained. Oracle controls the destiny of Java and as a corporation, they will make decisions that have a positive effect on their bottom line. I can only assume that the expense of maintaining and promoting Java EE was no longer cost-effective. The money invested in Java EE could be spent in other areas that offered the company a higher rate of return.

Oracle then began to donate or fully open-source different parts of its Java portfolio. NetBeans went to Apache and Java EE went to Eclipse. Fortunately, the word Java is not in NetBeans, so it kept its name. Java EE needed to be rebranded for no discernible reason. There was much written on this, but in the end, it turned out to be nothing more than a distraction. It also ignored the real issue.

Java EE is a full stack development environment for enterprise level applications running on an application server. It was promoted as such and Oracle had evangelists traveling the world to get that message out. That ended abruptly when the evangelists were all let go. I was disappointed because I thought once I retired from teaching, I could become an evangelist. That was also the point in time when Java EE began its slow death march inside Oracle.

Apache, Eclipse, and other open source foundations do an amazing job at ensuring that the software that falls under their umbrellas is engineered to the highest standards. What they don’t do effectively is promote or evangelize their portfolio of software. Don’t get me wrong, I have seen Eclipse at conferences, and I will be submitting proposals to the ApacheCon coming up in Montreal soon. But that is not the level of promotion that Java EE needs.

Java EE is not without its cheerleaders. Red Hat, IBM, and Payara, as well as some others, are actively out there promoting Java EE, but none have the gravitas that Oracle, the home of Java, had. In my myopic view of the world, Spring has managed to convince the enterprise development community that Java EE is nothing more than the necessary support libraries for some aspects (sorry about the AOP humor) of Spring.

What does this all mean for Jakarta EE? On the plus side, now that it’s fully open source, many more interested developers will be able to contribute to its various components. On the minus side, I fear that a clear vision of what Jakarta EE is will be lost. Communities will coalesce around specific elements for their own self-interest. Self-interest is a good thing. What I believe will be lost is a vision of Jakarta EE as a complete and integrated system to be considered alongside Spring or Vaadin. I worry that Jakarta EE will become another Commons style library.

That was a depressing conclusion. What can be done? I am way out of my league on this. I can only speculate that we need one of the existing corporations, or the creation of a new one, to put Jakarta EE on the same playing field as the frequently mentioned Spring. Spring is an excellent framework, and I have taught Spring, but it’s a different product from Jakarta EE. Spring is also backed by the deep pockets of Pivotal, who are in turn backed by the enormous pockets of Dell. An obvious choice for a corporation to step up is Payara, who promote their working version of Glassfish. Whoever it might be, I wonder if there is the necessary investment available to return EE to its status as a true enterprise stack.

For the past three years, I have attended parties at Bain Capital while at JavaOne. They have a wall with the names of all the companies they have invested in. A company that leads the commercialization of Jakarta EE should be on that wall alongside Twitter and others. To the investors out there, I will happily provide you with my mailing address to send me the checks.

Anyone want to start a business or hire a 64-year-old evangelist?

One final word concerning the Java EE Guardians: the group came into being because it was felt that Oracle was not listening to the developer community. With the donation to Eclipse, I am uncertain what the group’s raison d’être is now. Reza Rahman has done an amazing job in rallying the EE community to get their voices heard. Now I think we should all join the Eclipse Foundation and directly influence Jakarta EE. We need to make noise inside Eclipse rather than outside.

Original Link

Java EE Has a New Name….

… and it is Jakarta EE!

Nearly 7,000 people voted in the community poll. Over 65% voted for Jakarta EE over Enterprise Profile.

So What Happens Now?

Permission is being sought to formally use the Jakarta EE trademark. This process is being undertaken by EE.next.

The process of migrating Java EE to the Eclipse Foundation has been in full swing for a few months and will continue until all projects are moved over.

What About EE4J?

EE4J was never intended to be the new brand name of Java EE. It’s the top-level project at the Eclipse Foundation under which the source code and Technology Compatibility Kit (TCK) for Glassfish and EclipseLink exist.

How Do I Talk About Java EE (or Whatever It’s Called Now)?

The Java EE name has not gone away. It will never go away — as it’s the certification name for implementations of the Java Enterprise Platform. To say that something is “Java EE”, it would need to have a license from Oracle and to have passed the TCK. There are multiple implementations of Java Enterprise Edition, such as Glassfish (the reference implementation), WebSphere, WebLogic, and JBoss.

Java EE 5, 6, 7, 8 are still referred to by their normal names, but there will be no Java EE 9. Instead, there will be Jakarta EE.

So when you talk about Java EE 8, you are talking about Java EE 8. That’s the version released at JavaOne 2017. If you want to talk about Java EE’s future incarnation, use Jakarta EE.

Name Changes

Here is a table that summarises the name changes:

  Java EE                        →  Jakarta EE
  Glassfish                      →  Eclipse Glassfish
  Java Community Process (JCP)   →  EE.next working group¹
  Oracle development management  →  EE4J Project Management Committee (PMC)

¹  The JCP will continue supporting the Java SE/ME communities. However, Jakarta EE specifications will not be developed under the JCP.

How Will Jakarta be Versioned?

This question and many others are yet unanswered.

Conclusion

There are still many questions to answer and unknowns to address, so caution is still advised. Whatever happens, I am already looking forward to the new dawn that awaits the greatest Java framework I have ever used.


Original Link

The State of Java in 2018: Faster Updates and New Features


2017 was a turbulent year in the Java world. The long-awaited release of Java 9 brought a lot of changes and interesting new features, and Oracle announced a new release schedule for the JDK.

And that was just the beginning. In the past, developers often complained that Java wasn’t developing fast enough. I don’t think you will hear these complaints in the near future. It might be quite the opposite.

In 2018, the JDK will follow a new release schedule. Instead of a huge release every few years, you will get a smaller one every six months. So, after the release of Java 9 in September 2017, Java 10 is already planned for March 2018. But more about that later.

Enterprise Stack Overview

Most enterprise projects don’t use the JDK alone. They also rely on a stack of enterprise libraries, like Spring Boot or Java EE, which will also evolve over the next several months. In this article, I will mostly focus on the JDK. But here is a quick overview of what you should expect from the two major enterprise stacks in the Java world.

The Spring development team is working hard on Spring Boot 2 and released the first release candidate in January. The team doesn’t expect any major API changes and doesn’t plan to add any new features until the final release. So, if you are using Spring Boot in your projects, it’s about time to take a closer look at the new version and to plan the updates of your existing Spring Boot applications.

At the end of 2017, Oracle started to hand over the Java EE specifications to the EE4J project managed by the Eclipse Foundation. As expected, such a transfer is a huge project which can’t be completed in a few days. There is a lot of organizational and technical work that still needs to be done. Java EE needs a new name and development process. And the transfer of the source code and all the artifacts stored in different bug trackers is still ongoing. We will have to wait a little bit longer before we can see the effects of the transfer and the stronger community participation.

Short JDK Release and Support Cycles

As announced last year, Oracle will release two new JDK versions in 2018. Instead of the slow release cycle where every few years produced a new major release with lots of changes, we will now get a smaller feature release every six months. This allows for faster innovation of the Java platform. It also reduces the associated risks of a Java update. For Java developers, these smaller releases will also make it a lot easier to get familiar with the latest changes and to apply them to our projects.

I expect this to be a very positive change for the Java world. It will add a new dynamic to the development of the Java language, and allows the JDK team to adapt and innovate a lot faster.

Changes and New Features in JDK 10

Due to the short release cycle, Java 10 only brings a small set of changes. You can get an overview of the currently included 12 JEPs (JDK Enhancement Proposal) on the OpenJDK’s JDK10 page.

The most notable change is probably the support for type inference of local variables (JEP 286). But you should also know about the new time-based release versioning (JEP 322), and parallel full GC (garbage collector) support added to G1, or Garbage First (JEP 307).

Type Inference

JDK 10 will finally introduce type inference to the Java language. Most other statically-typed languages have been supporting this feature for quite a while, and a lot of Java developers have been asking for it.

JEP 286 introduces the keyword var, which shortens the declaration of a local variable. It tells the compiler to infer the type of the variable from its initializer. So, instead of:

List<String> paramNames = List.of("host.name", "host.port");
Configuration config = initializeConfig(paramNames);

You will be able to write:

var paramNames = List.of("host.name", "host.port");
var config = initializeConfig(paramNames);

As you can see in the code snippets, the keyword var removes the redundancy from the variable declaration. This can make your code easier to read, especially if you use good variable names and if it’s a variable that you only use a few times directly after you declared it.
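Keep in mind that JEP 286 limits var to local variables with initializers; a few quick examples of what does and does not compile:

var count = 10;                      // ok: inferred as int
var names = new ArrayList<String>(); // ok: inferred as ArrayList<String>
// var x;                            // won't compile: no initializer to infer from
// var y = null;                     // won't compile: null carries no type
// var also cannot be used for fields or method parameters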

If you want to dive deeper into JEP 286 and when you should use it, I recommend you take a look at Nicolai Parlog’s very detailed article about type inference in Java 10.

Time-Based Release Versioning

Beginning with Java 10, the format of the Java version number changes to improve the support for a time-based release model.

The main challenge introduced by the new release model is that the content of a release is subject to change. The only thing that’s defined in the beginning is the point in time at which the new version will be released. If the development of a new feature takes longer than expected, it doesn’t make the cut for the next release and will not be included. So, you need a version number that represents the passage of time, instead of the nature of the included changes.

JEP 322 defines the format of the version number as $FEATURE.$INTERIM.$UPDATE.$PATCH, and plans to use it as follows:

  • Every six months, the development team will publish a new feature release, and increment the $FEATURE part of the version number.
  • The release published in March 2018 will be called JDK 10, and the one in September JDK 11. The development team states in JEP 223 that they expect to ship at least one to two significant features in each feature release.
  • The $INTERIM number is kept for flexibility and will not be used in the currently planned 6-month release model. So, for now, it will always be 0.
  • Updates will be released between the feature releases and shall not include any incompatible changes. One month after a feature release and after that every three months, the $UPDATE part of the version number will be incremented.
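The new format can also be inspected at runtime through the Runtime.Version API; the accessors below were added for JEP 322 in JDK 10, and the parsed version string is just an example:

Runtime.Version version = Runtime.Version.parse("10.0.1");
System.out.println(version.feature()); // 10 -- the $FEATURE counter
System.out.println(version.update());  // 1  -- the $UPDATE counter
System.out.println(Runtime.version()); // the version of the running JVM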

Parallel Full GC in G1

For most developers, this is one of the smaller changes. Depending on your application, you might not even recognize it.

G1 became the default garbage collector in JDK 9. Its design tries to avoid full garbage collections, but that doesn’t mean that they never happen. Unfortunately, G1 only uses a single-threaded mark-sweep-compact algorithm to perform a full collection. This might result in a performance decrease compared to the previously used parallel collector.

JEP 307 addresses this issue by providing a multi-threaded implementation of the algorithm. Beginning with JDK 10, G1 will use the same number of threads for full collections as it uses for young and mixed collections.

So, if your application forces the garbage collector to perform full collections, JDK 10 might improve its performance.
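To observe the effect, unified GC logging (available since JDK 9) shows whether full collections happen and how long they pause. A sketch of a launch command; the application name and thread count are illustrative, and both flags are standard HotSpot options:

java -Xlog:gc* -XX:ParallelGCThreads=4 -jar my-app.jar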

Plans for JDK 11

JDK 10 isn’t even released yet, and there are only seven months left until the release of JDK 11. So, it’s no surprise that there is already a small set of JEPs planned for the second feature release in 2018.

In addition to the removal of deprecated Java EE and CORBA modules (JEP 320) and a new garbage collector (JEP 318), JDK 11 will most likely introduce dynamic class-file constants (JEP 309), and support the keyword var for implicitly-typed lambda expressions (JEP 323).

The current scope of JDK 11 shows the benefits of shorter release cycles. The JEPs 309 and 318 introduce new functionality, while the other two JEPs use an iterative approach to evolve existing features.

With the release of JDK 9 in September 2017, the Java EE and CORBA modules became deprecated. One year later, with the release of JDK 11, JEP 320 removes them from the JDK. So, instead of keeping them for several years, they will be removed in a timely and predictable way.

And JEP 323 is a logical next step after JEP 286 introduced type inference for local variables in JDK 10. You should expect to see this approach more often in the future. The short release cycles make it a lot easier to ship a huge feature in multiple, logical steps distributed over one or more feature releases.

Short Support Cycles Require Fast Adoption

Together with the new release model, Oracle also changed their support model. The new model differentiates between short-term and long-term releases.

Short-term releases, like Java 9 and 10, will only receive public updates until the next feature release gets published. So support for Java 9 ends in March 2018, and Java 10 will not receive any public updates after September 2018.

Java 11 will be the first long-term release. Oracle wants to support these releases for a more extended period, but as of now, it has not announced how long it will provide public updates for Java 11.

As an application developer, you will need to decide if you want to update your Java version every six months, or if you prefer a long-term release every few years. In addition to that, Oracle encourages everyone to migrate to their Java SE Advanced product. It includes at least five years of support for every long-term release.

Summary

In the past, a lot of developers complained about the slow evolution of the Java language. That will no longer be the case in 2018. The new, 6-month release cycle and the adapted support model will enforce faster updates of existing applications and introduce new features on a regular basis. In combination with the evolution of existing frameworks, like Java EE or Spring, this will add a new dynamic to the Java world. And it will also require a mindset shift in all companies that are used to updating their applications every few years.

Original Link

What Can Reactive Streams Offer EE4J?

In my current role at Lightbend, I’m investigating and pursuing opportunities where Reactive Streams can make the lives of EE4J (the new Java EE) developers better. In this blog post, I’m going to share some of the ideas that we’ve had for Reactive Streams in EE4J, and how these ideas will benefit developers.

Reactive Streams was adopted by the JDK in the form of the java.util.concurrent.Flow API. It allows two different libraries that support asynchronous streaming to connect to each other, with well-specified semantics about how each should behave, so that backpressure, completion, cancellation and error handling are predictably propagated between the two libraries. There is a rich ecosystem of open source libraries that support Reactive Streams, and since its inclusion in JDK9, there are a few in-development implementations targeting the JDK, including the incubating JDK9 HTTP Client and the Asynchronous Database Adapter (ADBA) effort, which have also adopted it.

High-Level Use Case

Before I jump into the specific parts of EE4J where Reactive Streams would be useful, it’s worth talking about the overall picture. Reactive Streams is an integration API, or more specifically, a Service Provider Interface (SPI). It’s not intended that application developers implement the reactive streams interfaces directly themselves, rather, it is intended that the various streaming data sources and sinks provided by libraries, database connectors, clients and so on, implement reactive streams, so that application developers can then easily plumb those sources and sinks together.

Doing a quick, off-the-top-of-my-head count based on my knowledge of existing specs that do streaming in EE4J and the JDK, there exist no less than 10 different APIs that are offered by the various specs for streaming data either synchronously or asynchronously. These range from, of course, InputStream and OutputStream in the JDK, to NIO Channels, to the Servlet 3.1 ReadListener and WriteListener extensions, to the JDBC ResultSet, JSR 356 @OnMessage annotations, Message Driven Beans and JMS, CDI events using @Observes, Java collection Stream and Iterator based APIs, and finally, the new JDK9 Flow API. Each of these APIs streams data in some way, and offers varying levels of capability – some are synchronous, some are asynchronous, some offer backpressure and some don’t, some have well-defined error handling semantics and others don’t.

The problem arises when I want to connect two of these together. For example, if I want to emit CDI events on a WebSocket. Or if I want to plumb a stream of messages into a database. Or if I want to stream data from an HTTP client response to a servlet response. When the two APIs for streaming are different, then I have to write non-trivial boilerplate code to connect them together. This even applies to connecting an InputStream and OutputStream, I can’t simply pass an InputStream to an OutputStream and say “here’s your source of data, write it”, I have to allocate a buffer, read and write in a loop making sure that I get the semantics and argument ordering correct, I have to ensure that I properly wrap things in try-with-resources blocks to ensure that things are cleaned up correctly even when there are errors, and so on. And all this for what should be the simplest of all use cases.
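To make that concrete, even the blocking InputStream-to-OutputStream case needs something like the following sketch, assuming source and sink were obtained elsewhere (the buffer size is arbitrary):

try (InputStream in = source; OutputStream out = sink) {
    byte[] buffer = new byte[8192];
    int read;
    // copy until the source is exhausted
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
}

JDK 9’s InputStream.transferTo(OutputStream) finally absorbs this particular loop, but it does nothing for the asynchronous cases discussed next.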

It gets even more complex when we start integrating asynchronous APIs. What if the source is producing too much data for the sink? How do I tell the source to slow down? In some cases, the APIs don’t even offer a backpressure mechanism; in others, they do, but the mechanisms differ: we have interest-based event APIs like NIO, we have on-ready callback-based APIs like Servlet 3.1, and we have token-based backpressure APIs like JDK9 Flow. Connecting these together, especially given that these asynchronous APIs require concurrent programming, is unreasonable to expect application developers to implement and maintain for each permutation of APIs that need to interoperate.

And so the high-level reason for supporting Reactive Streams in EE4J is in two parts. Firstly, it facilitates integration of EE4J APIs with each other. Secondly, it facilitates integration of EE4J APIs with third-party libraries. To connect a Reactive Streams publisher to a Reactive Streams Subscriber is one line of code, publisher.subscribe(subscriber);, and in this single line of code, a developer can be confident that data flow, backpressure, completion handling and error handling are correctly handled.
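The JDK 9 Flow API makes this one-liner concrete. A minimal sketch using the built-in SubmissionPublisher (both types live in java.util.concurrent) with a hypothetical console-logging subscriber:

SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

publisher.subscribe(new Flow.Subscriber<String>() {
    private Flow.Subscription subscription;

    public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        subscription.request(1); // ask for the first element
    }

    public void onNext(String item) {
        System.out.println(item);
        subscription.request(1); // backpressure: one element at a time
    }

    public void onError(Throwable t) {
        t.printStackTrace();
    }

    public void onComplete() {
        System.out.println("done");
    }
});

publisher.submit("hello"); // flows to the subscriber, respecting demand
publisher.close();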

So as we consider the individual specs within EE4J and the use cases associated with them that Reactive Streams helps with, we should remember that each additional place where Reactive Streams is used in EE4J increases the usefulness of Reactive Streams in other parts of the spec. In my use cases below I’ve tried to focus on use cases that are interesting today in and of themselves, assuming no other changes are made to support Reactive Streams. For each use case that does get implemented, the usefulness of Reactive Streams will multiply.

Servlet IO

It is a common use case for an application to store or transfer files, sometimes large files, perhaps hundreds of megabytes in size. And a very common place to store these is in an object storage service, such as Amazon S3. Let’s imagine you have an expense reporting application, and each expense must have an associated scan or photo of a receipt, so your application offers the ability to upload and download these receipts, using Amazon S3 to store them.

A unique thing about a request to upload or download large files is that such a request is long running. It can take minutes to upload or download a single file. Consider someone trying to use the dodgy airport wifi to upload a receipt for a meal they purchased at the airport, uploading their 4 MB image at 20 kB/s: that single upload will take over 3 minutes. Using the existing blocking APIs offered by the Servlet API requires each upload to consume a thread from the servlet container’s thread pool for the duration of the upload, 3 minutes in this case. Threads are a very limited resource. Each thread requires up to a few megabytes of memory allocated for its stack (depending on configuration and use); you can’t just go create more threads when you need them because they are expensive. For this reason, servlet containers use pools of threads; a typical configuration is to have a pool of 200 threads. With 200 threads, that server would only be able to serve one upload of a receipt on dodgy wifi per second before the server exhausts its thread pool and starts rejecting requests. And that’s not including any other requests the server has to handle. This is dismal performance and a very inefficient use of resources.

The solution to the problem is to use asynchronous IO to handle these file uploads and downloads. Asynchronous IO allows the server to only assign a thread to a connection when it actually needs it — when there’s data available to read and write. There are a number of asynchronous HTTP clients out there, for this use case we’ll choose the JDK9 HTTP client. Servlet 3.1 introduced asynchronous IO, so we could use that to receive the uploaded data, however, connecting the JDK9 HTTP client to the Servlet 3.1 asynchronous IO APIs is not at all trivial. These 180 lines of concurrent code are what it takes to write code that adapts Servlet IO to the Reactive Streams API offered by the JDK9 HTTP client. However, if HttpServletRequest offered a method that allowed getting a Publisher<ByteBuffer> for consuming a request, this is what a servlet that handled file uploads to S3 would look like:

public class S3UploadServlet extends HttpServlet {

    private final HttpClient client = HttpClient.newHttpClient();
    private final String S3_UPLOAD_URL = ...;

    public void doPost(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        client.sendAsync(HttpRequest.newBuilder(S3_UPLOAD_URL)
            // pipe the incoming request bytes directly to S3
            .POST(BodyPublisher.fromPublisher(req.getPublisher()))
            .build()
        ).thenAccept(s3Response -> {
            resp.setStatus(s3Response.statusCode());
            ctx.complete();
        });
    }
}

As you can see, the line of code needed to connect the servlet request body stream to the HTTP client request body stream was .POST(BodyPublisher.fromPublisher(req.getPublisher())). That’s 180 lines of non-trivial concurrent code in the example I linked to above down to one line of trivial code, with built-in handling of backpressure and completion/error propagation. Furthermore, the code reads like the high-level task that the developer is trying to achieve, literally “Post the body published from the request.”

Multipart Request Handling

Servlet 3.0 introduced support for handling multipart/form-data requests, however, this support involves buffering to disk, and does not offer any asynchronous streaming capabilities. As an extension of the previous use case, a developer might like to use multipart/form-data to upload files. A reactive streams-based API to handle this might expose the parts of the form as a stream of parts, and each part might be a ByteBuffer substream itself. This is what the code might look like, using Akka Streams to handle the outer stream:

public class S3UploadServlet extends HttpServlet {

    private final HttpClient client = HttpClient.newHttpClient();
    private final ActorSystem system = ActorSystem.create();
    private final Materializer materializer = ActorMaterializer.create(system);
    private final String S3_UPLOAD_URL = ...;

    public void doPost(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        Source.fromPublisher(req.getPartPublisher())
            // Filter to only get the file part
            .filter(part -> part.getName().equals("file"))
            // Handle one part at a time asynchronously
            .mapAsync(1, part ->
                // Post the file part to S3
                client.sendAsync(HttpRequest.newBuilder(S3_UPLOAD_URL)
                    // Here we plumb the publisher for the part (containing the
                    // bytes) to the HTTP client request body to S3.
                    .POST(BodyPublisher.fromPublisher(part.getPublisher()))
                    .build())
            )
            // Collect the result of the upload
            .runWith(Sink.head(), materializer)
            // Attach a callback to the resulting completion stage
            .thenAcceptAsync(s3Response -> {
                resp.setStatus(s3Response.statusCode());
                ctx.complete();
            });
    }
}

Similarly, other implementations of Reactive Streams can easily be used to handle the multipart request in the same way. The important thing that Reactive Streams allows here is that a developer can select whatever tool they want to handle whichever part of the stream they want, and be assured that end to end, all data, backpressure, completion, and errors are consistently propagated.

Messaging/JMS

So far we’ve only looked at use cases that deal with streaming bytes for IO. Reactive Streams is also very useful for streaming high-level messages, such as those produced and consumed by message brokers.

Here’s an example of what it might look like to subscribe to a queue using a Reactive Streams compatible messaging API in EE4J, using Akka Streams to handle the stream and save message content using an asynchronous database API:

@MessageSubscriber(topic = "mytopic")
public Subscriber<MessageEnvelope<MyEvent>> handleMyTopic() {
    // Create an Akka stream that will materialize into a Subscriber
    // that can subscribe to the events
    Subscriber<MessageEnvelope<MyEvent>> handler =
        Source.<MessageEnvelope<MyEvent>>asSubscriber()
            // Handle each message by saving it to the database
            .mapAsync(1, msg -> saveToDatabase(msg.data())
                // return the message rather than the result of the database op
                .thenApply(result -> msg)
            // Commit the message once handled
            ).map(msg -> msg.commit())
            // Feed into an ignoring sink, since everything is now handled
            .to(Sink.ignore())
            // And run it to get the subscriber
            .run(materializer);
    return handler;
}

This is a fairly trivial example, but using Akka Streams, we could have the messages fanned out to multiple other consumers, we could have cycles in the processing that aggregate state, etc.

WebSockets

WebSockets is another type of long-lived connection where synchronous IO is not appropriate. The current EE4J spec for WebSockets, JSR-356, does offer asynchronous handling of messages, however, it does not support backpressure on receiving (so if the other end is sending too much data, there is no way to tell it to slow down, you must buffer or fail). It is also a purpose-built API just for WebSockets, so any integration with other asynchronous data sources or sinks must be implemented manually by the end user.

Now imagine we wanted to implement a chat room, perhaps we’re going to use Apache Kafka as our chatroom backend, with one single partition topic per room. To implement this today, a reasonable amount of boilerplate is required, not just to transfer messages from the client to Apache Kafka and back, but also to propagate errors. Furthermore, if a client was producing messages at a high rate — faster than Apache Kafka is willing to consume, these messages are going to buffer on the server, causing it to run out of memory.

There exist however a number of Reactive Streams implementations for Apache Kafka. If JSR-356 were to support Reactive Streams, perhaps in the form of an @OnStream annotation that can be used as an alternative to @OnMessage, this is what it might look like to implement such a chat room:

@OnStream
public Publisher<ChatMessage> joinRoom(
        @PathParam("room") String room,
        Publisher<ChatMessage> incomingMessages
) {
    // The Kafka subscriber will send any messages it receives to Kafka
    Subscriber<ChatMessage> kafkaSubscriber = createKafkaConsumerForRoom(room);
    // The Kafka publisher will emit any messages it receives from Kafka
    Publisher<ChatMessage> kafkaPublisher = createKafkaProducerForRoom(room);
    // We now connect the incoming chat messages to the subscriber
    incomingMessages.subscribe(kafkaSubscriber);
    // And return the room's publisher to be published to the WebSocket
    return kafkaPublisher;
}

The createKafka* methods would be code specific for connecting to Kafka, the only JSR-356 specific code would be the above method itself.

CDI

Contexts and Dependency Injection (CDI) could take advantage of Reactive Streams for event publishing and subscribing, with many use cases similar to the ones described above, but another opportunity for CDI that we think is important is rather something that facilitates not just CDI, but all asynchronous processing in general.

The problem comes with CDI implementations using thread locals to propagate context, for example, the context for the current request, which might include the current authenticated user, cached authorization information for the user, etc. The assumption in CDI is that the container will always be in control of the threads that operate within this context. However, with asynchronous processing, that is not the case.

A good example of this is in a chat room. Each user connected to the chat room has an active WebSocket request, which has a particular CDI context that goes with it. Processing of messages received from these users might happen in the right CDI context, but what happens when User A sends a message to the chat room, and then it has to be sent to User B? The thread that is propagating the message to User B now needs the request context for User B’s request, in order to correctly handle the message, but that thread will have User A’s request context, because it was the receipt of a message from User A that initiated the current processing.

The only answer to this in CDI currently is to not use CDI contexts; rather, to capture all context at the start of the request, and from then on use a non-CDI mechanism (such as simply passing the context everywhere manually) to use that context when it’s needed. We think CDI can offer a better solution to this, by allowing context to be captured, and set up/torn down in another thread.

JPA

Before JPA can support any asynchronous operations, a standard for asynchronous database drivers is needed. Fortunately, there is currently a lot of active development going on towards this, so we can expect to see progress here I think in the next 12 months. Once that support exists, here are some examples of how JPA could be modified to support asynchronous streaming:

Streamed Queries

Database exports are one obvious example for query streaming where you don’t want to load the entire result set into memory, but I think more interesting use cases will arrive in the not too distant future, particularly around event logging, as different services may want to stream events from a database to be brought up to date on demand. Here’s an example of serving one such event stream through a Reactive Streams WebSocket API, using an imagined getResultPublisher method on TypedQuery:

@OnStream
public Publisher<Event> streamEvents(
        @QueryParam("since") Date since
) {
    Publisher<Event> events = entityManager.createQuery(
            "select e from events where e.timestamp >= :since", Event.class)
        .setParameter("since", since)
        .getResultPublisher();
    return events;
}

Once again, backpressure, completion and error handling is all done for you.

Streamed Ingestion

An example of streamed ingestion might be persisting logs. Here’s an example where logs are pushed in via WebSockets, using an imagined persistSubscriber method on EntityManager:

@OnStream
public void ingestLogs(
        Publisher<Log> logs
) {
    Subscriber<Log> ingester = entityManager.persistSubscriber(Log.class);
    logs.subscribe(ingester);
}

Streaming Combinators

In the use cases above, when plumbing streams, there’s often a need to transform the stream in some way, whether it’s a simple map or filter, or perhaps collecting a stream into a single value, or maybe concatenating, broadcasting or merging streams. For Reactive Streams, this functionality is provided by a number of implementations such as Akka Streams, Reactor, and RxJava. While these do the job well, it may be desirable for EE4J developers to have common functionality available to them out of the box.

Furthermore, a standard library for streaming combinators allows other APIs to use this to provide more fluent APIs to developers than what the bare Reactive Streams Publisher and Subscriber interfaces offer, without having to depend on a third party library.

Finally, it would allow EE4J libraries to assume a basic set of streaming operators that they can use in their own implementations, since they could not and should not depend on 3rd party libraries. This is very important, as otherwise, the burden of reimplementing many operators would fall into the hands of each and every library wanting to provide streaming capabilities, sidetracking them from the core issue they’re attempting to solve, be it database access, WebSockets or anything else.

New Possibilities

All of the use cases above only cover changes to existing streaming usages in EE4J, however, adoption of a robust streaming standard like Reactive Streams facilitates new possibilities for features and technologies that EE4J could support.

Here are some examples.

Event Sourcing

Event sourcing itself may not be that interesting from a streaming perspective, but the ability to consume the event log as a stream of messages is incredibly interesting. Basing the event log stream on Reactive Streams would allow all sorts of consumers to be plugged in, such as message brokers, WebSockets, CDI event publishers, and databases.

Real-Time Message Distribution

Applications are tending to provide more responsive user interfaces, with updates being pushed to users as soon as they happen. While WebSockets may offer the mechanism to communicate with the client, EE4J doesn’t yet offer any backend solution for communicating pushes between clusters of machines in the backend. Message brokers can often be too heavyweight to achieve this, and instead, a lighter-weight, inter-node communication mechanism would be better. Reactive Streams is the perfect API for building this on, as it then allows seamless plumbing to WebSockets and other stream sinks/sources.

Data Processing Pipelines

Current EE4J features tend to be focused on short-lived transactional processes, which have a definite start and a definite end. There is a shift in the industry toward more stream-based approaches, as seen in stream processing projects like Spark, Flink, and Kafka Streams. EE4J could provide a mechanism for integrating EE4J applications into these pipelines, and Reactive Streams would be the perfect API to offer this integration.

Summary

In this article, we’ve looked at a broad array of possibilities of what Reactive Streams brings to the table for EE4J. Each possibility brings value in its own right, but the more possibilities that are implemented overall, the more value each brings. It’s our hope at Lightbend that we can now start collaborating with the EE4J community in making all of these possibilities a reality.

Original Link

Configuring HTTPS for Use With Servlets [Snippet]

Configuring your Java EE application to communicate over HTTPS requires a few lines of XML in the web.xml file.

The web.xml file is located in the WEB-INF directory of your project and is usually created automatically when your IDE generates a Java EE web application. If it is not, you can create it yourself.

Motivation for HTTPS

The reason for configuring a secure connection for your web application is to allow secure communication between your application and its users. Beyond this consideration, if you want your application to communicate with the client using the HTTP/2 protocol, then a secure connection over HTTPS is required.

Configure a Secure Connection

A secure connection is configured in the web.xml file within the <security-constraint> element. The following code snippet shows a simple example of how to do this.

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Servlet4Push</web-resource-name>
        <url-pattern>/*</url-pattern>
        <http-method>GET</http-method>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

Let’s look at each element in turn:

  • <web-resource-name> is the name of the web resource you want to secure. This is likely to match the context root of your application.
  • <url-pattern>/*</url-pattern> is the URL to be protected.
  • <http-method> is the HTTP method to protect. If you omit this line, then all HTTP method calls are protected.
  • <transport-guarantee> specifies the security constraint to use. CONFIDENTIAL means that HTTPS must be used. NONE means that no transport guarantee is required, so plain HTTP is acceptable.

This is the simplest example of how to implement HTTPS in a Java EE application.
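As an aside, since Servlet 3.0 the same guarantee can also be declared per servlet with annotations instead of web.xml. Here is a minimal sketch (the servlet class and mapping are illustrative):

import javax.servlet.annotation.HttpConstraint;
import javax.servlet.annotation.ServletSecurity;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;

// Requires HTTPS for every request mapped to this servlet.
@WebServlet("/secure")
@ServletSecurity(@HttpConstraint(transportGuarantee = ServletSecurity.TransportGuarantee.CONFIDENTIAL))
public class SecureServlet extends HttpServlet {
}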

Source Code

The source code for this example can be found in the ReadLearnCode GitHub repository.

Original Link

The Final Name for Java EE: A Vote

As you’re all likely aware, Java EE is migrating to the Eclipse Foundation. Formerly known as Java EE, the specification will no longer be governed by the JCP. Instead, a truly open-source initiative named Eclipse Enterprise for Java was founded — abbreviated as EE4J. The EE4J project’s aim is to drive standardization while being completely open for competing vendors to add value to it.

The process of transitioning from Oracle and the JCP to EE4J has been smooth so far, with one exception: naming. Oracle refused to let the community use the word “Java.” The Java EE Guardians sent an open letter to Oracle asking for permission not only to use “Java” in the new name for the old Java EE, but also to use it in package names. Oracle declined both requests.

Well, the whole community kept suggesting new names for the modern successor of Java EE, and the list has been whittled down to two:

  • Enterprise Profile
  • Jakarta EE

Have your say and vote for the name of the next generation of Java EE.

Everyone from the community is welcome to vote. Voting closes on 23 February 2018.

Original Link

CodeTalk 2.0: Containers, the Java EE Way [Podcast]

Introduction

Intro by DZone Content and Community Manager, Michael Tharrington

Welcome to the first episode in DZone’s relaunch of our official CodeTalk podcast!

Picking up the podcast where our previous host, John Esposito, left off, our new joint hosts, Travis Van and Travis Carlson, are adapting the show slightly to fit the theme of early conversations with the creators of new developer technologies.

While I’m sure you’ll get to know them better through the show, I’d like to take a brief moment to introduce our hosts before we dive into the first episode.

Travis Van comes from a decade-long background in tech PR and marketing, having worked at a variety of places including MuleSoft where he was “Employee #3 (first marketing hire).” In 2007, underwhelmed by the range of PR and marketing research tools available to tech companies, Travis decided to build something better and thus founded TechNews.io, a platform for promoting tech companies.

Travis Carlson is a veteran software dev focused on back-end enterprise systems. He’s been working with the JVM since Java 1.2 and distributed systems since the advent of the Internet. As a systems architect, he has managed the entire stack—from DevOps in AWS all the way to front-end development with AngularJS. His passion is creating systems which are agile, maintainable, scalable, and robust.

So, now that you know a little bit about our hosts, let’s get to the meat—our first new episode in the CodeTalk relaunch. Take it away Travis and Travis!

CodeTalk: Containers, the Java EE Way

Sebastian Daschner joins CodeTalk this week to preview a talk he’ll be giving at the Index Developer Conference organized by IBM (Feb. 20-22 in San Francisco): “Containers and Container Orchestration — The Java EE Way.”

DZone readers who have been following containers may have noticed how conspicuously absent the Java stack is from the whole Docker/Kubernetes/Container discussion. We were really interested to jump in and understand how Java EE fits the world of Docker.

If you’re at an enterprise Java shop, you’re going to find Daschner’s talk an interesting reference point in understanding:

  • When it makes sense to containerize Java applications
  • How Kubernetes and Istio make it possible to orchestrate Java EE microservices in a modern enterprise system
  • How this convergence fits into the overall Java developer priority to build and scale faster

Want More CodeTalk?

We’re still in the early stages of the relaunch, but soon we hope to give our CodeTalk landing page a facelift and house all future and past episodes in that one place.

For now, stay tuned to DZone for weekly episodes released each Friday. And, if you’d like to contact the showrunners to get involved as an interviewee or just simply share your feedback, feel free to message Travis and Travis here: codetalkpodcast@gmail.com.

Original Link

EE4J: An Update

Mike Milinkovich of the Eclipse Foundation has recently posted a blog providing an overall update on the status of the project. To summarize:

  • We are working on defining a new brand name using the community process described here.
  • We have begun the process of moving Oracle GlassFish sources to the EE4J project. So far, Oracle has contributed sources for the following projects:
    • Eclipse Grizzly
    • Eclipse OpenMQ
    • Eclipse Project for JAX-RS
    • Eclipse Project for JMS
    • Eclipse Tyrus
    • Eclipse Project for WebSocket
    • Eclipse Project for JSON Processing
  • In addition to the above:
    • The Eclipse Yasson and EclipseLink projects have been transferred to EE4J, and are now part of the overall EE4J project.
    • We have created Eclipse Jersey and Eclipse Mojarra projects and are working on contributing sources for these.
  • You can watch (and star!) EE4J project repositories as they are being created in the EE4J GitHub organization.
  • Oracle is working on Eclipse project proposals for all of the technologies Mike mentions in his blog: JSON-B API, Concurrency, Security, JTA, JavaMail, JAXB, JAX-WS, JSTL, UEL, JAF, Enterprise Management, and Common Annotations. We intend to formally propose these projects to the EE4J Project Management Committee (PMC) very soon. One of our major near-term goals is to transfer all of the Oracle-owned GlassFish technologies to EE4J such that we can build “Eclipse GlassFish” from EE4J sources and demonstrate Java EE 8 compliance.
  • We are working on establishing an Eclipse Foundation working group to provide a member-driven governance model for EE4J.

In short, there is a lot of positive progress being driven in the EE4J project. For further updates refer to this blog and the links provided above, or subscribe to the ee4j-community mailing list.

Original Link

Prometheus With Java EE and MicroProfile Metrics [Video]

I have recorded a video in which I show how to realize business metrics by integrating Prometheus using Java EE and MicroProfile Metrics on OpenLiberty.

Similar to a previous video that used the Prometheus Java API, we’ll expose “coffee business metrics” from a Java Enterprise application.

MicroProfile Metrics aims to provide a unified way for exporting monitoring data.
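As a rough sketch of how such a business metric looks in code (assuming MicroProfile Metrics 1.x; the CoffeeShop bean and metric name are illustrative, in the spirit of the example in the video):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.metrics.annotation.Counted;

@ApplicationScoped
public class CoffeeShop {

    // Exposed by the server's /metrics endpoint in Prometheus format,
    // e.g. as application:coffees_ordered.
    @Counted(name = "coffees_ordered", absolute = true, monotonic = true)
    public void orderCoffee() {
        // business logic ...
    }
}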

The microprofile branch of the hello-prometheus project can be found on GitHub.

Original Link

An Early Look at Features Targeted for Java 11

With JDK 10 about to enter its release candidate phase, it’s interesting to start looking at what will come after that via JDK 11. As of this writing, four JEPs (JDK Enhancement Proposals) have been officially targeted for JDK 11 (with more likely to come). This post summarizes some details about each of the four JEPs currently targeted for JDK 11.

JEP 309: Dynamic Class-File Constants

JEP 309 (“Dynamic Class-File Constants”) “seek[s] to reduce the cost and disruption of creating new forms of materializable class-file constants, which in turn offers language designers and compiler implementors broader options for expressivity and performance.” JDK bug JDK-8189199 (“Minimal ConstantDynamic support”) “implement[s] JEP 309 by properly parsing and resolving new CONSTANT_Dynamic constants in JVM class files used by Hotspot” and was resolved four days ago. JEP 309 was officially targeted for JDK 11 on 14 December 2017.

JEP 318: Epsilon: An Arbitrarily Low-Overhead Garbage Collector

The currently stated goal of JEP 318 (“Epsilon: An Arbitrarily Low-Overhead Garbage Collector”) is to “provide a completely passive GC implementation with a bounded allocation limit and the lowest latency overhead possible, at the expense of memory footprint and memory throughput.” The JEP’s summary currently states, “Develop a GC that handles memory allocation but does not implement any actual memory reclamation mechanism. Once the available Java heap is exhausted, the JVM will shut down.” JEP 318 is associated with issue JDK-8174901 (“JEP 318: Epsilon: An Arbitrarily Low-Overhead Garbage Collector”) and was officially targeted for JDK 11 on 18 January 2018. Additional details regarding JEP 318 can be found in online resources such as The Last Frontier in Java Performance: Remove the Garbage Collector and Java garbage collector proposal aimed at performance testing.
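Once JEP 318 lands, the collector is expected to be switched on with experimental flags along these lines (subject to change until the release ships):

java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -jar app.jar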

JEP 320: Remove the Java EE and CORBA Modules

JEP 320 (“Remove the Java EE and CORBA Modules”) has a current “Summary” stating, “Remove the Java EE and CORBA modules from the Java SE Platform and the JDK. These modules were deprecated in Java SE 9 with the declared intent to remove them in a future release.” This JEP is not terribly surprising given that CORBA and Java EE modules did not have default visibility in Java SE when JDK 9 introduced modularity. The “Motivation” section of this JEP provides insightful historical background on why Java EE and CORBA modules were included in Java SE in the first place. Among many other interesting tidbits in this “Motivation” section, these two conclusions stand out to me:

  • “Since standalone versions of the Java EE technologies are readily available from third-party sites, such as Maven Central, there is no need for the Java SE Platform or the JDK to include them.”
  • “Since the costs of maintaining CORBA support outweigh the benefits, there is no case for the Java SE Platform or the JDK to include it.”

JEP 320 lists several modules and tools that it will remove. The to-be-removed modules include java.xml.ws, java.xml.ws.annotation, jdk.xml.ws, java.xml.bind, and jdk.xml.bind. The to-be-removed tools include wsgen, wsimport, schemagen, xjc, and servertool.

The JEP 320 “Risks and Assumptions” section illustrates the impact of these removals. It states that developers using --add-modules java.xml.bind currently to include JAXB classes in their Java 9 applications will need to change this for JDK 11. Specifically, the JEP text states, “This proposal assumes that developers who wish to compile or run applications on the latest JDK can find and deploy alternate versions of the Java EE technologies.” Fortunately, the text in JEP 320 does a nice job of providing details on current alternate implementations of many of the libraries and tools that will be removed with JDK 11 and JEP 320.

JEP 320 also mentions that most modules it will be removing are “upgradeable,” meaning that “developers on JDK 9 who use --add-modules java.xml.bind, etc. have the choice of either relying on the Java EE modules in the JDK runtime image, or overriding them by deploying API JAR files on the upgrade module path.” The JEP further explains why this is significant in terms of making it easier to move to JDK 11 when the modules are removed from the JDK runtime image.
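As a rough illustration of the second option (the JAR and application names here are made up), an application on JDK 9 or 10 could override the deprecated module from the command line:

# Use a standalone JAXB API JAR in place of the JDK's java.xml.bind module.
java --upgrade-module-path lib/jaxb-api.jar -jar myapp.jar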

JEP 320 is associated with issue JDK-8189188 (“JEP 320: Remove the Java EE and CORBA Modules”) and was officially targeted for JDK 11 on 26 January 2018.

JEP 323: Local-Variable Syntax for Lambda Parameters

JEP 323 (“Local-Variable Syntax for Lambda Parameters”) is intended to “allow var to be used when declaring the formal parameters of implicitly typed lambda expressions.”
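As a small illustrative snippet, the new syntax looks like this; the chief motivation is that modifiers and annotations can then be applied to lambda parameters:

import java.util.function.BinaryOperator;

// JEP 323: 'var' may be used for implicitly typed lambda parameters.
// This in turn allows annotations, e.g. (@Nonnull var x, var y) -> x + y.
BinaryOperator<Integer> add = (var x, var y) -> x + y;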

JEP 323 is associated with issue JDK-8193259 (“JEP 323: Local-Variable Syntax for Lambda Parameters”) and was officially targeted for JDK 11 yesterday (2 February 2018).

Conclusion

I mostly like to look forward to what’s coming soon to a JDK near me because I find it interesting to think about. However, there are also practical advantages in some cases to knowing what’s coming. For example, JEP 320 provides details regarding alternatives to the modules and tools that will be removed in JDK 11. Developers can start moving to those alternatives now or before migrating to JDK 11 to make that future transition easier.

Original Link

Java in 2018

So what’s ahead in 2018? How will the community deal with the transition? How will Java evolve to meet the new needs of organizations big and small?

Thanks to John Duimovich, IBM Distinguished Engineer and Java CTO, for sharing his predictions on what to expect from Java 2018.

2018 Will Be the Year of Eclipse

With key projects like EE4J and MicroProfile now under its stewardship, the Eclipse Foundation will become even more important in 2018, as much more than just an IDE. This will be where cloud-native Java is defined for the next 10 years. More innovation will take place here as well: the Eclipse OpenJ9 JVM, for instance, claims a 60% smaller footprint with the same throughput (so companies will pay less for memory) and start-up that is two times faster than open JDKs and JVMs. Developers will want to keep an eye on the Eclipse Foundation next year.

Convergence With Containers Will Accelerate

As part of the broader effort to simplify development and management, containers and runtimes like Java will become more tightly coupled. They’ll be optimized together to enable seamless management and configuration of Java applications. Consistent memory management and easier wiring between Java constructs and containers will take hold so developers can leverage the benefits of containers and Java runtimes, which are essentially another form of containers.

Kotlin Will Become the Next Hot Language

Kotlin is poised to become a major force in the programming world. Kotlin’s concise coding syntax and interoperability with Java have already made it popular for many developers. Now, it has first-class support on Android, which is bound to boost its use for mobile. Look for it to gain even more ground in 2018.

New Release Model Will Drive Faster Innovation

Developers rejoice. The new six-month release interval for Java will mean more frequent changes and faster introduction of features. Look for enterprising Java shops to take advantage of these features and use Java to solve new problems and enter new areas. Large organizations will likely wait for the support of the long-term releases, but they’ll now have a clearer roadmap.  Community support also has the potential to rally around popular changes in interim releases.

Serverless Will Begin a Major Reshaping of Java

Demand is growing for serverless platforms, initially driven by the consumption-based pricing model but now expanding from simple event-driven programming models to composite flow-based systems. This innovation will continue as cloud developers want to focus on the application and not worry about servers. This means Java runtimes will need to be optimized and re-architected for a serverless world where fast start-ups and smaller footprints matter even more.

Original Link

Would You Use JSF for Your Next Project?

There was an excellent StackOverflow blog post last week about the “Brutal Lifecycle of JavaScript Frameworks.” The article was about the speed at which JavaScript UI frameworks (Angular, jQuery, and React) come into and fall out of fashion. The key metric for the post is questions per month on each framework, which is a reasonable way to demonstrate these trends. Downloads would have been interesting too.

It got me thinking: Where are we with JSF? And my starting point was to superimpose JSF on top of the JavaScript data.

It’s hard to see clearly, but JSF is in decline based on questions asked on StackOverflow. If we remove JavaScript, we can see the decline started around 2013:

That said, the volume of questions is fairly small and relatively stable.

This post tries to understand the current state of JSF, and whether there is still a place for JSF in modern development.

What Is JSF?

JSF is a component-based web framework that is part of Java EE. For a long time it was the only frontend web framework in the platform; a new MVC framework (JSR 371) was proposed alongside it but was ultimately dropped from the Java EE 8 release and spun off as a standalone specification.

What’s Good about JSF?

For me, the key strength of JSF lies in the component frameworks in the JSF ecosystem, in particular PrimeFaces, and in utility libraries like OmniFaces. They let you get started on projects quickly, have plenty of examples, and are especially suited to teams or projects where developers lack frontend skills. The deployment model is often simple, with a single WAR or EAR file per server.
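For example, a typical PrimeFaces view stays this compact; the sketch below assumes a CDI bean named bookView that exposes a list of books:

<p:dataTable var="book" value="#{bookView.books}">
    <p:column headerText="Title">
        <h:outputText value="#{book.title}"/>
    </p:column>
    <p:column headerText="Author">
        <h:outputText value="#{book.author}"/>
    </p:column>
</p:dataTable>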

The current release of JSF is 2.3, with the specification for 2.4 currently in progress.

What’s Bad about JSF?

In 2014, JSF received criticism from the ThoughtWorks TechRadar, which placed it on Hold.

The main part of the criticism was that the JSF model is flawed as it…

“encourages use of its own abstractions rather than fully embracing the underlying web model”

They do make the concession that the web model is getting more prominence in later versions of JSF.

There were rebuttals against this post, particularly relating to more recent JSF versions, but it has contributed to JSF being regarded as a difficult framework to use.

JSF Is Marmite

JSF is the marmite of frontend development.

What’s marmite? It’s a yeast extract that you spread on toast. Some people love it, some hate it, but there is no middle ground. For the record, I hate marmite, but I like JSF.

The reason I like JSF is that you can access good-quality components that are mature and well-documented. It also has the advantage of allowing teams that are weak on frontend skills to develop professional-looking websites. There is a downside in that it can be hard to deliver more complex requirements, as the Request/Response model is more abstract under JSF.

Should You Use JSF for New Projects?

The JSF model has fallen out of favor. It is viewed as a legacy framework compared with today’s JavaScript frameworks backed by RESTful APIs. This shift has moved Java toward implementing RESTful microservices, an approach that can often scale better than JSF.

The StackOverflow blog post shows it’s not all plain sailing in the frontend JavaScript world. The frameworks suffer from relatively short lifespans, and although there are migration strategies, you do run the risk of your JavaScript framework becoming obsolete.

JSF has the advantage of being a mature model in this respect. It’s also worth remembering that if your team is lacking in frontend skills, then JSF will help you quickly deliver a professional-looking website.

Question

I’d be interested in hearing other people’s experiences and whether they will be using JSF in future projects.

Original Link

An Introduction to Hollow JARs

I have written in the past about the difference between an application server and an UberJAR. The short version of that story is that an application server is an environment that hosts multiple Java EE applications side by side, while an UberJAR is a self-contained executable JAR file that launches and hosts a single application.

There is another style of JAR file that sits in-between these two styles called a Hollow JAR.

What Are Hollow JARs?

A Hollow JAR is a single executable JAR file that, like an UberJAR, contains the code required to launch a single application. Unlike an UberJAR though, a Hollow JAR does not contain the application code.

A typical Hollow JAR deployment then will consist of two files: the Hollow JAR itself and the WAR file that holds the application code. The Hollow JAR is then executed referencing the WAR file, and from that point on the two files run much as an UberJAR would.

At first blush, it might seem unproductive to have the two files that make up a Hollow JAR deployment instead of the single file that makes up the UberJAR, but there are some benefits.

The main benefit comes from the fact that the JAR component of the Hollow JAR deployment pair won’t change all that frequently. While you could expect to deploy new versions of the WAR half of the deployment multiple times per day, the JAR half will remain static for weeks or months. This is particularly useful when building up layered container images, as only the modified WAR file needs to be added as a container image layer.
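To sketch what this looks like in practice, a container image for a Hollow JAR deployment might be layered like the following Dockerfile (the base image, paths, and file names are illustrative):

FROM openjdk:8-jre
# The Hollow JAR rarely changes, so it sits in a lower, well-cached layer.
COPY app-swarm.jar /opt/
# The WAR changes with every release, so it gets its own thin top layer.
COPY app.war /opt/
CMD ["java", "-jar", "/opt/app-swarm.jar", "/opt/app.war"]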

Likewise, you may also reduce image build times with technologies like AWS autoscaling groups: because the JAR file doesn’t change often, it can be baked into an AMI, while the WAR file is downloaded as an EC2 instance boots, via scripting placed in the EC2 user data field.

Building a Hollow JAR

To see Hollow JARs in action, let’s take a look at how you can create one with WildFly Swarm.

For this demo, we will be building the pair of Hollow JAR deployment files required to run a copy of the Ticket Monster demo application. Ticket Monster is a sample application created to demonstrate a range of Java EE technologies, and is designed to build a WAR file to run on a traditional application server.

To build the JAR half of the Hollow JAR, we will make use of SwarmTool. Unlike WildFly Swarm, which usually requires special configuration in the Maven project to build an UberJAR or Hollow JAR, SwarmTool works by inspecting an existing WAR file and building a Hollow JAR to accommodate it. It is a neat way of migrating existing applications to the Swarm platform, without modifying the existing build process.

First, clone the Ticket Monster source code from https://github.com/jboss-developer/ticket-monster. The code we are interested in is under the demo subfolder.

There are two changes we need to make to the pom.xml file under the demo subfolder to accommodate Java 9 and SwarmTool.

First, we need to add a dependency on javax.xml.bind:jaxb-api. This is because the java.xml.bind module that contains the JAXB classes is deprecated and no longer resolved by default in Java 9. If you try to compile the application under Java 9 without this additional dependency, you will receive the error:

java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException

The following XML adds the required dependency:

<dependencies>
    <dependency>
        <groupId>javax.xml.bind</groupId>
        <artifactId>jaxb-api</artifactId>
        <version>2.3.0</version>
    </dependency>
</dependencies>

The second change is to embed the Jackson libraries used by Ticket Monster into the WAR file. In the original source code, the Jackson library has a scope of provided, which means that we expect the application server (or Hollow JAR in our case) to provide the library.

<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-jackson-provider</artifactId>
    <scope>provided</scope>
</dependency>

However, the version of Swarm we will be using has a different version of the Jackson library to the one used by the Ticket Monster application. This mismatch means that the @JsonIgnoreProperties annotation used by Ticket Monster is not recognized by the version of the Jackson library provided by Swarm, resulting in some serialization errors.

Fortunately, all that is required is to use the default scope, which will embed the correct version of the Jackson library into the WAR file. Dependencies embedded in the WAR file take precedence, and so the application will function as expected.

<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-jackson-provider</artifactId>
</dependency>

We can now build the Ticket Monster application like any other WAR project. The following command will build the WAR file.

mvn package

Now we need to use SwarmTool to build the Hollow JAR. Download the SwarmTool JAR file locally.

wget https://repo1.maven.org/maven2/org/wildfly/swarm/swarmtool/2017.12.1/swarmtool-2017.12.1-standalone.jar

Then build a Hollow JAR.

java -jar swarmtool-2017.12.1-standalone.jar -d com.h2database:h2:1.4.196 --hollow target/ticket-monster.war

The -d com.h2database:h2:1.4.196 arguments instruct SwarmTool to add the H2 in-memory database dependencies to the Hollow JAR. SwarmTool can detect most of the dependencies required to boot the WAR file by scanning the classes referenced by the application code. However, it cannot detect dependencies that are only loaded reflectively at runtime, like database drivers, so we need to tell SwarmTool explicitly to include this dependency.

The --hollow argument instructs SwarmTool to build a Hollow JAR that does not embed the WAR file. If we left this argument off, the WAR file would be embedded in the resulting JAR file, creating an UberJAR instead of a Hollow JAR.

At this point we have the two files that make up our Hollow JAR deployment. The WAR file at target/ticket-monster.war contains our application, while the ticket-monster-swarm.jar file is our Hollow JAR.

Executing a Hollow JAR

To run the application, use the following command.

java -jar ticket-monster-swarm.jar target/ticket-monster.war

You can then open http://localhost:8080 to view the application.

Conclusion

Hollow JARs are a neat solution that provide a lot of flexibility in deployment strategies while retaining the convenience of an UberJAR. You can find more information on the different strategies from the blog post The Skinny on Fat, Thin, Hollow, and Uber.

Original Link

Staring Into My Java Crystal Ball for 2018

Last year, I wrote a blog with my predictions for what would happen in the world of Java in 2017. Now that 2017 has ended and we’re starting 2018, I thought it would be interesting to look back at what I had predicted and make some new predictions for this year.

I’ll use the same format as last year, looking at the major areas of Java individually.

The Core Java Platform (JDK)

My prediction in this area was that JDK 9 would be released (that didn’t exactly require a sixth sense), although I did feel that the scheduled release date might slip (which it did). What I didn’t see was the rejection by the JCP EC of the initial Public Review of the Java Platform Module System (Jigsaw) component JSR. That demonstrated that the JCP EC could make Oracle aware that changes to the spec were required and that Oracle was responsive in making changes necessary to get the JSR approved at the reconsideration ballot.

What I don’t think anyone could have seen, including those making the decisions at Oracle, were the changes to the way the OpenJDK will be released. These announcements came shortly before JavaOne and will have a significant impact on what happens to the JDK in 2018. I don’t think I need to class this as a prediction, but we will have two releases of the JDK this year: JDK 10 and JDK 11. The contents of JDK 10 have now been finalized (I’ll be writing another blog on this shortly), but the contents of JDK 11 still need to be discussed and decided. Looking at the JEPs, we already have the Epsilon GC and dynamic class-file constants, but I predict we’ll also get JEP 323, Local-Variable Syntax for Lambda Parameters. I also predict that the Java EE and CORBA modules will be removed from the JDK; these have already been deprecated, so removing them won’t require any real engineering effort. I’ll even go further and predict that some other things will be removed in JDK 11: the CMS collector has already been suggested for removal, and dropping the browser plugin and even Web Start could spell the end of applets entirely. If that does happen, then the previously deprecated applet package and its contents will also likely disappear from the java.desktop module.

Java EE

Again, my prediction, if you can call it that, was the release of Java EE 8, which duly happened in September. I also don’t think anyone would have predicted the move of the specifications from Oracle to the Eclipse Foundation.

I know that there is a lot of work going on behind the scenes to get all the required administrative work finished. My prediction is that we won’t see a new version of the umbrella EE4J (or whatever the new name is) specification this year. Given enterprise Java’s maturity and the speed at which most users like to upgrade, I don’t think this will be an issue.

Java ME

I correctly predicted that there would be no Java ME 9 specification to accompany the launch of Java SE 9. There is some activity currently going on in the JCP to take the most useful parts of Java ME and make them available as separate libraries for use with Java SE. However, I don’t see much demand for this and suspect we won’t see anything substantial in this area again this year. The realities and economics of Moore’s law combined with the Java Platform Module System have made Java ME somewhat irrelevant in this day and age.

Embedded Java

I didn’t predict anything here, other than the continued popularity of Java for developing these types of applications.

It, therefore, came as a bit of a surprise that Oracle decided to discontinue distributing binaries for ARM-based processors, starting with JDK 9.

For developers working in this space, Azul will continue to produce and make available JDK binaries for embedded platforms. In addition to ARM, we can also provide PowerPC-based ones. Just let us know if this is something you’re interested in.

Performance

One last thing that’s not a specific Java platform but very important.

Having seen the recent news about potential security issues related to speculative execution in Intel CPUs, it seems that the fixes in the Linux and Windows operating systems are likely to have a significant adverse impact on performance. This will mean people will be looking to extract as much performance as possible from their JVMs to offset reductions in performance at the operating system level.

Azul’s Zing JVM is designed to do just that. If you need more performance from your JVM, why not try it free for 30 days?

Original Link

Morning Java: The Blunt Fundamentals

With the holidays behind us and a new year freshly being rung in, it’s time to see what’s been going on in the world of Java! This ended up being a very blunt compilation of articles and news (although the headlines were pretty enjoyable). “Java 8: The Bad Parts” still makes me chuckle, and we’ll even cover something as fundamental as a name in the news section. But this compilation also dives into some fundamental aspects of programming and considers a higher-level view of what’s coming ahead.


It’s Java’clock

By the way, if you’re interested in writing for your fellow DZoners, feel free to check out our Writers’ Zone, where you can also find some current hot topics and our Bounty Board, which has writing prompts coupled with prizes.


Coffee and the News

What’s in a Name?

The Java EE Guardians have published an open letter about how Java EE will be named and packaged moving forward, now that the standard is moving to the Eclipse Foundation. The letter details concerns the group has with Oracle’s desire to, in their own words, “restrict the use of the word ‘Java’ and the use of the ‘javax’ packages for EE4J due to corporate branding concerns.” The Guardians propose a set of solutions as well. Check it out and see if you agree with their points.

What’s New for Groovy?

Last month, InfoWorld compiled a list of items on track for Groovy 2.5 and 3.0. The big player detailed in this roadmap? Modularity. Among the enhancements: support for Java 9 modules and Java 8 lambda expressions.

Java 10 and 11

At the tail end of last year, Ben Evans of InfoQ put out a nice compilation of what to expect in the next two releases of Java, since the plan is to move to a 6-month release plan. See what’s on track for 2018, including the few confirmed parts so far for Java 11.


Diving Deeper Into Java

Original Link

Transactional Exception Handling in CDI [Snippet]

In Java EE, exceptions that are raised during the execution of a transactional business method cause the transaction to roll back. However, this is only the case for system exceptions, that is, runtime exceptions, which are not declared in the method signature.

For application exceptions, that is, checked exceptions, or any exception annotated with @ApplicationException, the transaction is not automatically rolled back. This sometimes causes confusion among enterprise developers.

For EJB business methods, transactions can be forced to roll back on application exceptions as well by specifying @ApplicationException(rollback = true). However, this annotation is only considered if the managed bean is an EJB.
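For instance, here is a minimal sketch of such an exception (the type name is illustrative):

import javax.ejb.ApplicationException;

// A checked exception that still forces the container to roll back the
// active transaction when thrown from an EJB business method.
@ApplicationException(rollback = true)
public class CarCreationException extends Exception {

    public CarCreationException(String message) {
        super(message);
    }
}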

CDI also makes it possible to execute business methods transactionally using @Transactional. This annotation gives us even more control. With @Transactional, we can not only define the transaction type, such as REQUIRED or REQUIRES_NEW, but also on which exception types we do or do not want to roll back:

public class CarManufacturer {

    @Inject
    CarFactory carFactory;

    @Inject
    Event<CarCreated> createdCars;

    @PersistenceContext
    EntityManager entityManager;

    @Transactional(rollbackOn = CarCreationException.class,
            dontRollbackOn = NotificationException.class)
    public Car manufactureCar(Specification specification) {
        Car car = carFactory.createCar(specification);
        entityManager.persist(car);
        createdCars.fire(new CarCreated(car.getIdentification()));
        return car;
    }
}

The transaction will be rolled back in case a CarCreationException occurs, but not for NotificationExceptions.

This post was reposted from my newsletter issue 016.

Original Link

Caching Method Results With JCache

In JCache, there is a handy functionality that transparently caches the result of methods. You can annotate methods of managed beans with @CacheResult, and the result of the first call will be returned again without calling the actual method a second time.

import javax.cache.annotation.CacheResult;
// ...

public class Calculator {

    @CacheResult
    public String calculate() {
        // do some heavy lifting...
        LockSupport.parkNanos(2_000_000_000L);
        return "Hi Duke, it's " + Instant.now();
    }
}

If the bean is injected and the method calculate is called, the result will be cached after the first call. By default, this mechanism doesn’t cache or replay exceptions.
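If exceptions should be cached as well, the JCache annotations support this explicitly via an exception cache. A minimal sketch (the cache name and exception type are illustrative):

// Exceptions listed in cachedExceptions are stored in the named exception
// cache and re-thrown on subsequent calls instead of re-invoking the method.
@CacheResult(exceptionCacheName = "calculationExceptions",
        cachedExceptions = CalculationException.class)
public String calculateOrFail() throws CalculationException {
    throw new CalculationException("calculation currently unavailable");
}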

We can include the calculator in a JAX-RS resource as follows:

@Path("calculation")
public class CalculationResource {

    @Inject
    Calculator calculator;

    @GET
    public String calculation() {
        return calculator.calculate();
    }
}

Calling that HTTP resource will return the same value for all subsequent invocations.

For this example to run on Java EE application servers, we have to declare the interceptor that is responsible for caching the result. This is because JCache is not included in the EE umbrella, so this small configuration overhead is needed for now.

If you want to run this example in WildFly, specify the interceptor in the beans.xml:

<interceptors>
    <class>org.infinispan.jcache.annotation.CacheResultInterceptor</class>
</interceptors>

WildFly uses Infinispan, so the JCache API and the Infinispan JCache integration need to be added to the pom.xml:

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-jcache</artifactId>
    <version>8.2.4.Final</version>
</dependency>

This post was reposted from my newsletter issue 011.

Original Link

JAX-RS List Generic Type Erasure

Want to maintain a List’s generic type? The problem is that when you want to return a List from a JAX-RS resource method, the generic type is lost.

The following code results in type loss:

@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getAllBooks() {
    List<Book> books = BookRepository.getAllBooks(); // queries database for all books
    return Response.ok(books).build();
}

And the following exception:

MessageBodyWriter not found for media type=application/json, type=class java.util.Arrays$ArrayList, genericType=class java.util.Arrays$ArrayList

Luckily, JAX-RS is packaged with a solution in the form of the GenericEntity class, which is designed to maintain the generic type. To use this class, just wrap the Collection in a GenericEntity, as shown in this code:

import javax.ws.rs.core.GenericEntity;
// ...

@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getAllBooks() {
    List<Book> books = BookRepository.getAllBooks(); // queries database for all books
    GenericEntity<List<Book>> list = new GenericEntity<List<Book>>(books) {};
    return Response.ok(list).build();
}

I was banging my head against the wall trying to figure this out, but thanks to this post by Adam Bien, I was saved. Hopefully, this post finds you and stops any headaches before they begin. For reference, here is the Book entity used in these examples:

@XmlRootElement
public class Book {

    private String isbn;
    private String title;
    private String author;
    private Float price;

    public Book() {
    }

    public Book(String isbn, String title, String author, Float price) {
        this.isbn = isbn;
        this.title = title;
        this.author = author;
        this.price = price;
    }

    // Getters and setters removed for brevity
}

Important Update

The behavior described above is demonstrated on GlassFish 4.1 and produces the following exception in the server.log files:

[2017-08-30T20:29:56.489+0100] [glassfish 4.1] [SEVERE] [] [org.glassfish.jersey.message.internal.WriterInterceptorExecutor] [tid: _ThreadID=70 _ThreadName=http-listener-1(2)] [timeMillis: 1504121396489] [levelValue: 1000] [[MessageBodyWriter not found for media type=application/json, type=class java.util.ArrayList, genericType=class java.util.ArrayList.]]

However, the same behavior is not seen on IBM WebSphere Liberty Profile. In fact, no error is thrown, and the List of books is successfully serialized to a JSON representation. Further investigation shows that Liberty Profile is far more forgiving of deviations from the specification. More detailed research is needed to fully document the differences between server implementations; I have only looked at GlassFish and Liberty Profile. However, as GlassFish is the reference implementation for Java EE, my advice is to develop with reference to its expectations, because all other server implementations should at least conform to its requirements.

The source code for this article is in the readlearncode_articles GitHub repository.

Further Reading

I regularly blog about Java EE on my blog readlearncode.com where I have recently published a mini-series of articles on the JAX-RS API.

Among the articles, there are discussions on bean validation failure in REST endpoints, how to work with Consumers and Producers, and how to create JAX-RS Resource Entities.

Do you want to know all the ways the @Context (javax.ws.rs.core.Context) annotation can be used within your JAX-RS application? If so, take a look at this five-part series:

Original Link
