Mono-Repo Build With Gradle

Sometimes, we are faced with a project that is part open source and part proprietary. Here’s how I address the challenge of keeping the builds in sync.

When faced with a partly open-source and partly closed-source project, it is common to use a Git sub-tree to synchronize them. The open-source project is added as a folder of the closed-source project, and all development happens in that root project (a.k.a. the mono-repo). On top of that, we want our build tool to treat the sub-tree project as if it were part of the mono-repo.
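
One way Gradle can do this is with a composite build; a minimal sketch, assuming the open-source project lives in a folder named open-source-lib and has its own Gradle build:

// settings.gradle of the closed-source root project (the mono-repo)
rootProject.name = 'closed-source-app'

// Treat the Git sub-tree folder as part of this build; Gradle substitutes
// the library's published coordinates with the locally built project.
includeBuild 'open-source-lib'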

Original Link

Gradle: Modernization of Build Process

Many of you are stuck on the question: which build system is best for your project?

It depends on many factors, including the size of your project, your need for customization, dependency handling, external dependencies, and a few other variables that can help you choose. Let’s take a look.

Original Link

Introducing Source Dependencies in Gradle

This post introduces a new Gradle dependency management feature called “source dependencies.”

Normally, when you declare a dependency on a library, Gradle looks for the library’s binaries in a binary repository, such as JCenter or Maven Central, and downloads the binaries for use in the build.
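
As a rough illustration of the feature (a sketch based on the Gradle announcement; the repository URL, module coordinates, and version are placeholders), a source dependency maps a module to a Git repository in settings.gradle, and the module is then consumed like any other dependency:

// settings.gradle
sourceControl {
    gitRepository("https://github.com/example/utilities.git") {
        producesModule("org.example:utilities")
    }
}

// build.gradle
dependencies {
    implementation 'org.example:utilities:1.0'
}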

Original Link

Setting Up Static Code Analysis for Java

Introduction

In this article, we will go through the basic setup of static code analysis for your Java project.

Why is static code analysis important? It helps us to ensure the overall code quality, fix bugs in the early stage of development, and ensure that each developer is using the same coding standards when writing the code.
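
For a Gradle project, a minimal starting point (a sketch; the tool versions and config path are arbitrary) could be the built-in Checkstyle and PMD plugins:

// build.gradle
apply plugin: 'java'
apply plugin: 'checkstyle'   // adds checkstyleMain / checkstyleTest tasks
apply plugin: 'pmd'          // adds pmdMain / pmdTest tasks

checkstyle {
    toolVersion = '8.12'
    configFile = file('config/checkstyle/checkstyle.xml')
}

pmd {
    toolVersion = '6.7.0'
}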

Original Link

Custom Grammar to Query JSON With Antlr

Antlr is a powerful tool that can be used to create formal languages. Vital to the formalization of a language are symbols and rules, also known as grammar. Defining custom grammar and generating the associated parsers and lexers is a straightforward process with Antlr. Antlr’s runtime enables tokenization of a given character stream and parsing of those tokens. It provides mechanisms to walk through the generated parse tree and apply custom logic. Let’s take this tool for a spin and create a custom grammar to query JSON. Our end goal is to be able to write queries like the one shown below:

bpi.current.code eq "USD" and bpi.current.rate gt 650.60
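
Once Antlr has generated a lexer and parser from such a grammar, wiring them up with the runtime looks roughly like this (a sketch; JsonQueryLexer, JsonQueryParser, the query start rule, and the JsonQueryEvaluator listener are hypothetical names that would come from the grammar):

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;

public class QueryRunner {
    public static void main(String[] args) {
        String query = "bpi.current.code eq \"USD\" and bpi.current.rate gt 650.60";

        // Tokenize the character stream and parse the resulting tokens.
        JsonQueryLexer lexer = new JsonQueryLexer(CharStreams.fromString(query));
        JsonQueryParser parser = new JsonQueryParser(new CommonTokenStream(lexer));
        ParseTree tree = parser.query();   // 'query' would be the grammar's start rule

        // Walk the parse tree with a listener that applies the query to a JSON document.
        ParseTreeWalker.DEFAULT.walk(new JsonQueryEvaluator(), tree);
    }
}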

Original Link

Stop Rerunning Your Tests

Tests are usually the longest running operation in your development process. Running them unnecessarily is the ultimate time waster. Gradle helps you avoid this cost with its build cache and incremental build features. It knows when any of your test inputs, like your code, your dependencies or system properties, have changed. If everything stays the same, Gradle will skip the test run, saving you a lot of time.

So you can imagine my desperation when I see snippets like this on StackOverflow:

Original Link

Get Ready for Kotlin DSL 1.0

The Gradle Kotlin DSL 1.0 release candidate is generally available and is included in Gradle 4.10. The Kotlin DSL is nearly ready for widespread use.

We want you to enjoy a build authoring experience with the benefits provided by Kotlin’s static type system: context-aware refactoring, smart content assist, debuggable build scripts, and quick access to documentation. In case you haven’t seen it, you can watch Rodrigo B. de Oliveira demonstrate these benefits in this KotlinConf 2017 video.
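
To give a flavor of the authoring experience, here is a minimal build.gradle.kts (a sketch for a plain Java project, assuming the Kotlin DSL bundled with Gradle 4.10):

// build.gradle.kts
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation("junit:junit:4.12")
}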

Original Link

Add Code Snippets to Your Technical Presentations

Most of my technical presentations contain code snippets, but putting nice-looking code into my slides was always a tedious task. After years of using PowerPoint, I started looking for alternatives. There are many good ones, and I was able to find a solution that matches both my personal preferences and my typical use cases.

My source code is usually Java, Groovy, or Kotlin. For each presentation, I set up a project containing the code examples I’m going to use in my slides. I want the presentation files to integrate seamlessly into this project. Below are a few aspects that I considered in order to tailor my solution.

Original Link

Java Annotated Monthly: August 2018

July was a very quiet month, news-wise, so August is a leisurely perusal of the bits and bobs that are surfacing this summer.

Java

Here is the usual mixed bag of Java-related news, tutorials, and interesting snippets.

Original Link

JUnit Tutorial: Setting Up, Writing, and Running Java Unit Tests

Today, we’re going back to unit testing basics with a short JUnit tutorial about how to set up, write, and run your JUnit tests. What is JUnit, and how do you get started? You can get started by understanding the basics and then scaling your unit testing practice like a pro.

What Is Unit Testing?

But, before we go much further into JUnit, let’s talk a little bit about unit testing and regression testing and why they matter in general. We’ll get into some good examples as we go on.
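
As a small taste of what the tutorial covers, a bare-bones JUnit 4 test looks like this (a sketch; Calculator is a hypothetical class under test):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    @Test
    public void addReturnsSumOfItsArguments() {
        Calculator calculator = new Calculator();   // hypothetical class under test
        assertEquals(5, calculator.add(2, 3));
    }
}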

Original Link

Gradle Multi-Project Builds: Parent Pom Structure

If you come from a Maven background, you are most likely used to the parent POM structure. When it comes to Gradle, things are a little bit different.

Imagine the scenario of having a project with interfaces and various implementations of them. This is going to be our project structure:

multi-project-gradle
-- specification
-- core
-- implementation-a
-- implementation-b

The specification project contains the interfaces upon which the implementations will be based. The core project contains functionality that needs to be shared among the implementations.

The next step is to create each project inside multi-project-gradle. Each project is actually a directory with a build.gradle file.

plugins {
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
}

Once this is done, you need to create a link between the parent project and the child project.
To do so, you create the multi-project-gradle/settings.gradle and include the other projects.

rootProject.name = 'com.gkatzioura'
include 'specification'
include 'core'
include 'implementation-a'
include 'implementation-b'

Now, if you set up a build.gradle file like this for every sub-project, you have just duplicated the JUnit dependency and the Maven Central repository declaration everywhere.

One of the main benefits of using multi-project builds is removing duplication. To do so, we create the multi-project-gradle/build.gradle file and add the JUnit dependency and the Maven Central reference there.

subprojects {
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }

    dependencies {
        testCompile group: 'junit', name: 'junit', version: '4.12'
    }
}

Now, we can add project-specific dependencies to each sub-project and even declare dependencies between the sub-projects.

For example, the core project uses the specification project.

dependencies {
    compile project(':specification')
}

And, each implementation project uses the core project.

dependencies {
    compile project(':core')
}

You can find the project on GitHub.

Original Link

Development of a New Static Analyzer: PVS-Studio Java


The PVS-Studio static analyzer is known in the C, C++, and C# worlds as a tool for detecting errors and potential vulnerabilities. However, we have a few clients from the financial sector, and it turned out that Java and IBM RPG are now in high demand there. We would like to be closer to the enterprise world, so, after some consideration, we decided to start creating a Java analyzer.

Introduction

Sure, we had some concerns. It is quite simple to carve out a niche in the world of IBM RPG analyzers; I am not even sure that there are decent static analysis tools for this language at all. In the Java world, things are completely different: there is already a range of static analysis tools, and to get ahead you need to create a really powerful one.

Nevertheless, our company has experience using several static analysis tools for Java, and we are convinced that many things can be implemented better.

In addition, we had an idea of how to bring the full power of our C++ analyzer into the Java analyzer. But, first things first.

Tree


First, it was necessary to decide how we would get a syntax tree and semantic model.

The syntax tree is the base element around which the analyzer is built. When running checks, the analyzer traverses the tree and examines individual nodes. It is practically impossible to perform serious static analysis without such a tree; for example, searching for bugs with regular expressions is futile.

It also should be noted that the syntax tree alone is not enough. The analyzer requires semantic information as well. For example, we need to know the types of all elements in the tree, be able to jump to a declaration of a variable, etc.

We reviewed several options for obtaining the syntax tree and semantic model.

We gave up on the idea of using ANTLR almost at once, since it would unreasonably complicate the development of the analyzer (we would have had to implement semantic analysis on our own). Eventually, we decided to settle on the Spoon library:

  • It is not just a parser but a whole ecosystem that provides not only a parse tree but also capabilities for semantic analysis. For example, it allows getting information about variable types, jumping to a variable’s declaration, obtaining information about a parent class, and so on.
  • It is based on Eclipse JDT and can compile the code.
  • It supports the latest Java version and is constantly updated.
  • It has presentable documentation and an intuitive API.

Here is an example of the metamodel that Spoon provides, which we use when creating diagnostic rules.


This metamodel corresponds to the following code:

class TestClass
{
    void test(int a, int b)
    {
        int x = (a + b) * 4;
        System.out.println(x);
    }
}

One of the nice features of Spoon is that it simplifies the syntax tree by removing and adding nodes to make it easier to work with. The semantic equivalence of the simplified metamodel to the source metamodel is guaranteed.

For us, this means, for example, that we do not need to worry about skipping redundant parentheses when traversing the tree. In addition, each expression is placed in a block, imports are expanded, and some other similar simplifications are performed.

For example, the following code:

for (int i = ((0)); (i < 10); i++) if (cond) return (((42)));

will be transformed into the code shown below:

for (int i = 0; i < 10; i++)
{
    if (cond)
    {
        return 42;
    }
}

Based on the syntax tree, pattern-based analysis is performed. This is a search for errors in a program’s source code using known error-prone code patterns. In the simplest case, the analyzer searches the tree for places that look like an error, according to the rules described in the corresponding diagnostic. The number of such patterns is large, and their complexity can vary greatly.

The simplest example of an error detectable using pattern-based analysis is the following code from the jMonkeyEngine project:

if (p.isConnected()) {
    log.log(Level.FINE, "Connection closed:{0}.", p);
}
else {
    log.log(Level.FINE, "Connection closed:{0}.", p);
}

When the then and else blocks of an if statement fully coincide, there is most likely a logic error.

Here is another similar example from the Hive project:

if (obj instanceof Number) {
    // widening conversion
    return ((Number) obj).doubleValue();
} else if (obj instanceof HiveDecimal) {   // <=
    return ((HiveDecimal) obj).doubleValue();
} else if (obj instanceof String) {
    return Double.valueOf(obj.toString());
} else if (obj instanceof Timestamp) {
    return new TimestampWritable((Timestamp) obj).getDouble();
} else if (obj instanceof HiveDecimal) {   // <=
    return ((HiveDecimal) obj).doubleValue();
} else if (obj instanceof BigDecimal) {
    return ((BigDecimal) obj).doubleValue();
}

In this code, there are two identical conditions in a sequence of type if (….) else if (….) else if (….). This code fragment is worth checking for a logical error, or the duplicated code should be removed.

Data-flow Analysis

In addition to the syntax tree and semantic model, the analyzer requires a mechanism for data flow analysis.

Data flow analysis enables you to calculate the possible values of variables and expressions at each point of the program and, thanks to that, find errors. We call these possible values ‘virtual values’.

Virtual values are created for variables, class fields, method parameters, and other elements at their first reference. If this reference is an assignment, the data flow mechanism computes the virtual value by analyzing the expression on the right. Otherwise, the whole valid range of values for the variable’s type is taken as the virtual value. For example:

void func(byte x)   // x: [-128..127]
{
    int y = 5;      // y: [5]
    ...
}

At each change of a variable value, the data flow mechanism recalculates the virtual value. For example:

void func()
{
    int x = 5;   // x: [5]
    x += 7;      // x: [12]
    ...
}

The data flow mechanism also handles control statements:

void func(int x)          // x: [-2147483648..2147483647]
{
    if (x > 3)
    {                     // x: [4..2147483647]
        if (x < 10)
        {                 // x: [4..9]
        }
    }
    else
    {                     // x: [-2147483648..3]
    }
    ...
}

In this example, when entering the function, there is no information about the range of values of the variable x, so the range is set according to the variable’s type (from -2147483648 to 2147483647). Then, the first conditional block places the restriction x > 3, and the ranges are merged. As a result, the range of values for x in the then block is from 4 to 2147483647, and in the else block it is from -2147483648 to 3. The second condition, x < 10, is handled similarly.

Besides this, the analyzer has to be able to perform purely symbolic computations. The simplest example:

void f1(int a, int b, int c)
{
    a = c;
    b = c;
    if (a == b)   // <= always true
        ....
}

Here, the variable a is assigned the value of c, the variable b is also assigned the value of c, and then a and b are compared. In this case, to find the error, it is enough to remember the fragment of the tree that corresponds to the right-hand side of each assignment.

Here is a slightly more complicated example with symbolic computations:

void f2(int a, int b, int c)
{
    if (a < b) {
        if (b < c) {
            if (c < a)   // <= always false
                ....
        }
    }
}

In such cases, we are dealing with solving a system of inequalities in symbolic form.

The data flow mechanism helps the analyzer to find errors that are quite difficult to detect using a pattern-based analysis.

Such errors include:

  • Overflows;
  • Array index out of bounds;
  • Access by null or potentially null reference;
  • Pointless conditions (always true/false);
  • Memory and resource leaks;
  • Division by zero;
  • And some others.

Data flow analysis is especially important when searching for vulnerabilities. For example, if a certain program receives input from a user, there is a chance that the input will be used to cause a denial of service or to gain control over the system. Examples include errors leading to buffer overflows on certain data inputs or, say, SQL injections. In both cases, you need to track data flow and the possible values of variables so that the static analyzer is able to detect such errors and vulnerabilities.

I should say that data flow analysis is a complex and extensive mechanism, and in this article I have only briefly touched on its basics.

Let’s see some examples of errors that can be detected using the data flow mechanism.

Here, you can see it demonstrated in the Hive project:

public static boolean equal(byte[] arg1, final int start1, final int len1,
                            byte[] arg2, final int start2, final int len2) {
    if (len1 != len2) {   // <=
        return false;
    }
    if (len1 == 0) {
        return true;
    }
    ....
    if (len1 == len2) {   // <=
        ....
    }
}

The condition len1 == len2 is always true because the opposite check has already been performed above.

Another example from the same project:

if (instances != null) {   // <=
    Set<String> oldKeys = new HashSet<>(instances.keySet());
    if (oldKeys.removeAll(latestKeys)) {
        ....
    }
    this.instances.keySet().removeAll(oldKeys);
    this.instances.putAll(freshInstances);
} else {
    this.instances.putAll(freshInstances);   // <=
}

Here, in the else block, a null pointer dereference occurs. Note: instances is the same thing as this.instances.

This can be seen in the following example from the JMonkeyEngine project:

public static int convertNewtKey(short key) {
    ....
    if (key >= 0x10000) {
        return key - 0x10000;
    }
    return 0;
}

Here, the variable key is compared with the number 65536. However, it is of type short, and the maximum possible value of a short is 32767. Accordingly, the condition is never true.

Here is one more example from the Jenkins project:

public final R getSomeBuildWithWorkspace() {
    int cnt = 0;
    for (R b = getLastBuild();
         cnt < 5 && b != null;
         b = b.getPreviousBuild()) {
        FilePath ws = b.getWorkspace();
        if (ws != null)
            return b;
    }
    return null;
}

In this code, the variable cnt was introduced to limit the number of steps to five, but a developer forgot to increment it, which resulted in a useless check.

Annotations Mechanism

In addition, the analyzer needs a mechanism of annotations. Annotations are a markup system that provides the analyzer with extra information on the used methods and classes, in addition to the data that can be obtained by the analysis of their signatures. Markup is done manually; this is a long and time-consuming process, because, to achieve the best results, one has to annotate a large number of standard Java classes and methods. It also makes sense to perform the annotation of popular libraries. Overall, annotations can be regarded as a knowledge base of the analyzer about contracts of standard methods and classes.

Here’s a small sample of an error that can be detected using annotations:

int test(int a, int b) {
    ...
    return Math.max(a, a);
}

In this example, because of a typo, the same variable was passed as both the first and the second argument of the method Math.max. Such an expression is meaningless and suspicious.

The static analyzer may issue a warning for such code, as it is aware that the arguments of the method Math.max should always be different.

Going forward, here are a few examples of our markup of built-in classes and methods:

Class("java.lang.Math")
  - Function("abs", Type::Int32)
      .Pure()
      .Set(FunctionClassification::NoDiscard)
      .Returns(Arg1, [](const Int &v) { return v.Abs(); })

  - Function("max", Type::Int32, Type::Int32)
      .Pure()
      .Set(FunctionClassification::NoDiscard)
      .Requires(NotEquals(Arg1, Arg2))
      .Returns(Arg1, Arg2, [](const Int &v1, const Int &v2) { return v1.Max(v2); })

Class("java.lang.String", TypeClassification::String)
  - Function("split", Type::Pointer)
      .Pure()
      .Set(FunctionClassification::NoDiscard)
      .Requires(NotNull(Arg1))
      .Returns(Ptr(NotNullPointer))

Class("java.lang.Object")
  - Function("equals", Type::Pointer)
      .Pure()
      .Set(FunctionClassification::NoDiscard)
      .Requires(NotEquals(This, Arg1))

Class("java.lang.System")
  - Function("exit", Type::Int32)
      .Set(FunctionClassification::NoReturn)

Explanations:

  • Class is a class being annotated;
  • Function is a method of the annotated class;
  • Pure is the annotation, indicating that a method is pure, i.e. deterministic and does not have side effects;
  • Set assigns an arbitrary flag to the method;
  • FunctionClassification::NoDiscard is a flag indicating that the return value of the method must be used;
  • FunctionClassification::NoReturn is a flag that indicates that the method does not return control;
  • Arg1, Arg2, ..., ArgN are the method arguments;
  • Returns is the return value of the method;
  • Requires is a contract for a method.

It is worth noting that, in addition to manual markup, there is another approach to annotating: automatic inference of contracts based on bytecode. Clearly, such an approach yields only certain types of contracts, but it allows the analyzer to receive additional information from all dependencies, not just from those that were annotated manually.

By the way, there is already a tool that is able to infer contracts like @Nullable and @NotNull from bytecode: FABA. As far as I understand, a derivative of FABA is used in IntelliJ IDEA.

At the moment, we are also considering adding bytecode analysis to obtain the contracts of all methods, as these contracts could well complement our manual annotations.

Diagnostic rules often refer to the annotations. In addition to diagnostics, the annotations are used by the data flow mechanism. For example, using the annotation of the method java.lang.Math.abs, it can accurately calculate the absolute value of a number. We don’t have to write any additional code for that; we only need to annotate the method correctly.

Let’s consider an example of an error from the Hibernate project that can be found thanks to annotations:

public boolean equals(Object other) {
    if (other instanceof Id) {
        Id that = (Id) other;
        return purchaseSequence.equals(this.purchaseSequence) &&
               that.purchaseNumber == this.purchaseNumber;
    } else {
        return false;
    }
}

In this code, the method equals() compares the object purchaseSequence with itself. Most likely, this is a typo, and that.purchaseSequence, not purchaseSequence, should be written on the right.

How Dr. Frankenstein Assembled the Analyzer From Pieces


Since the data flow and annotation mechanisms themselves are not strongly tied to a specific language, it was decided to re-use them from our C++ analyzer. This let us obtain the whole power of the C++ analyzer in our Java analyzer within a short time. The decision was also influenced by the fact that these mechanisms were written in modern C++ with a lot of metaprogramming and template magic, and are therefore not very suitable for porting to another language.

In order to connect the Java part with the C++ kernel, we decided to use SWIG (Simplified Wrapper and Interface Generator), a tool for the automatic generation of wrappers and interfaces for binding C and C++ programs with programs written in other languages. For Java, SWIG generates JNI (Java Native Interface) code.

SWIG is great for cases when there is already a large amount of C++ code that needs to be integrated into a Java project.

Let me give you a small example of working with SWIG. Let’s suppose we have a C++ class that we want to use in a Java project:

CoolClass.h:

class CoolClass
{
public:
    int val;
    CoolClass(int val);
    void printMe();
};

CoolClass.cpp:

#include <iostream>
#include "CoolClass.h"

CoolClass::CoolClass(int v) : val(v) {}

void CoolClass::printMe()
{
    std::cout << "val: " << val << '\n';
}

First, you must create a SWIG interface file with a description of all the exported functions and classes. If necessary, you will also need to perform additional settings in this file.

Example:

%module MyModule
%{
#include "CoolClass.h"
%}
%include "CoolClass.h"

After that, you can run SWIG:

$ swig -c++ -java Example.i

It will generate the following files:

  • CoolClass.java is the class that we will work with directly in the Java project;
  • MyModule.java is a module class that contains all free functions and variables;
  • MyModuleJNI.java contains the Java wrappers;
  • Example_wrap.cxx contains the C++ wrappers.

Now, you just need to add the resulting .java files to the Java project and the .cxx file to the C++ project.

Finally, you need to compile the C++ project as a DLL and load it in the Java project using System.loadLibrary():

App.java:

class App {
    static {
        System.loadLibrary("example");
    }

    public static void main(String[] args) {
        CoolClass obj = new CoolClass(42);
        obj.printMe();
    }
}


Sure, in a real project things are not that simple, and you have to put in additional effort:

  • In order to use template classes and methods from C++, you must instantiate them for all template parameters by using the %template directive;
  • In some cases, you may need to catch exceptions thrown from the C++ part in the Java part. By default, SWIG doesn’t catch exceptions from C++ (a segfault occurs instead), but this is possible using the %exception directive;
  • SWIG allows extending the C++ code on the Java side using the %extend directive. In our project, we add a toString() method to virtual values so that we can see them in the Java debugger;
  • In order to emulate the RAII behavior from C++, the AutoCloseable interface is implemented;
  • The directors mechanism allows using cross-language polymorphism;
  • For types that are allocated only inside C++ (in its own memory pool), constructors and finalizers are removed to improve performance; the garbage collector ignores these types.

You can learn more about all of these mechanisms in the SWIG documentation.

Our analyzer is built using Gradle, which calls CMake, which in turn calls SWIG and builds the C++ part. For programmers, this happens almost imperceptibly, so we experience no particular inconvenience when developing.

The core of our C++ analyzer is built under Windows, Linux, and macOS, so the Java analyzer also works in these operating systems.

What Is a Diagnostic Rule?

We write the diagnostics themselves and the analysis code in Java. This stems from the close interaction with Spoon. Each diagnostic rule is a visitor with overridden methods in which the elements of interest to us are traversed.


For example, this is what a V6004 diagnostic frame looks like:

class V6004 extends PvsStudioRule
{
    ....
    @Override
    public void visitCtIf(CtIf ifElement)
    {
        // if the ifElement.thenStatement statement is equivalent to
        // the ifElement.elseStatement statement => add warning V6004
    }
}
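
To make the comment above concrete, here is a minimal stand-alone sketch of such a check written against plain Spoon (this is not PVS-Studio’s actual implementation; it assumes Spoon’s structural equals() on model elements):

import spoon.reflect.code.CtIf;
import spoon.reflect.code.CtStatement;
import spoon.reflect.visitor.CtScanner;

// Hypothetical scanner that flags an 'if' whose branches are structurally identical.
class IdenticalBranchesScanner extends CtScanner {
    @Override
    public void visitCtIf(CtIf ifElement) {
        CtStatement thenStmt = ifElement.getThenStatement();
        CtStatement elseStmt = ifElement.getElseStatement();
        if (thenStmt != null && thenStmt.equals(elseStmt)) {
            System.out.println("V6004-like warning at " + ifElement.getPosition());
        }
        super.visitCtIf(ifElement);   // keep traversing nested elements
    }
}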

Plugins

For simple integration of the static analyzer into a project, we’ve developed plugins for the Maven and Gradle build systems. A user just needs to add our plugin to the project.

For Gradle:

....
apply plugin: com.pvsstudio.PvsStudioGradlePlugin
pvsstudio {
    outputFile = 'path/to/output.json'
    ....
}

For Maven:

....
<plugin>
    <groupId>com.pvsstudio</groupId>
    <artifactId>pvsstudio-maven-plugin</artifactId>
    <version>0.1</version>
    <configuration>
        <analyzer>
            <outputFile>path/to/output.json</outputFile>
            ....
        </analyzer>
    </configuration>
</plugin>

After that, the plugin will receive the project structure and start the analysis.

In addition, we have developed a plugin prototype for IntelliJ IDEA.


In addition, this plugin works in Android Studio. The plugin for Eclipse is now under development.

Incremental Analysis

We have provided an incremental analysis mode that checks only modified files, which significantly reduces analysis time. Thanks to that, developers can run the analysis as often as necessary.

The incremental analysis involves several stages:

  • Caching of the Spoon metamodel;
  • Rebuilding of the modified part of the metamodel;
  • Analysis of the changed files.

Our Testing System

To test the Java analyzer on real projects, we wrote a special tool for working with a database of open-source projects. It was written in Python with Tkinter and is cross-platform.

It works in the following way:

  • A specific version of the tested project is loaded from a repository on GitHub;
  • The project is built;
  • Our plugin is added to pom.xml or build.gradle (using git apply);
  • The static analyzer is started using the plugin;
  • The resulting report is compared with the reference report (the etalon) for this project.

Such an approach ensures that good warnings do not disappear because of changes in the analyzer code.

Projects whose reports differ from the etalon are highlighted in red. The Approve button allows saving the current version of a report as the new etalon.

Examples of Errors

Below, I will demonstrate several errors from different open source projects that our Java analyzer has detected. In the future, we plan to write articles with a more detailed report on each project.

Hibernate Project

PVS-Studio warning: V6009 Function ‘equals’ receives odd arguments. Inspect arguments: this, 1. PurchaseRecord.java 57

public boolean equals(Object other) {
    if (other instanceof Id) {
        Id that = (Id) other;
        return purchaseSequence.equals(this.purchaseSequence) &&
               that.purchaseNumber == this.purchaseNumber;
    } else {
        return false;
    }
}

In this code, the method equals() compares the object purchaseSequence with itself. Most likely, this is a typo, and that.purchaseSequence, not purchaseSequence, should be written on the right.

PVS-Studio warning: V6009 Function ‘equals’ receives odd arguments. Inspect arguments: this, 1. ListHashcodeChangeTest.java 232

public void removeBook(String title) {
    for (Iterator<Book> it = books.iterator(); it.hasNext(); ) {
        Book book = it.next();
        if (title.equals(title)) {
            it.remove();
        }
    }
}

This triggering is similar to the previous one: book.title, not title, has to be on the right.

Hive Project

PVS-Studio warning: V6007 Expression ‘colOrScalar1.equals(“Column”)’ is always false. GenVectorCode.java 2768

PVS-Studio warning: V6007 Expression ‘colOrScalar1.equals(“Scalar”)’ is always false. GenVectorCode.java 2774

PVS-Studio warning: V6007 Expression ‘colOrScalar1.equals(“Column”)’ is always false. GenVectorCode.java 2785

String colOrScalar1 = tdesc[4];
....
if (colOrScalar1.equals("Col") && colOrScalar1.equals("Column")) {
    ....
} else if (colOrScalar1.equals("Col") && colOrScalar1.equals("Scalar")) {
    ....
} else if (colOrScalar1.equals("Scalar") && colOrScalar1.equals("Column")) {
    ....
}

Here, the operators were obviously confused, and ‘&&’ was used instead of ‘||’.

JavaParser Project

PVS-Studio warning: V6001 There are identical sub-expressions ‘tokenRange.getBegin().getRange().isPresent()’ to the left and to the right of the ‘&&’ operator. Node.java 213

public Node setTokenRange(TokenRange tokenRange)
{
    this.tokenRange = tokenRange;
    if (tokenRange == null ||
        !(tokenRange.getBegin().getRange().isPresent() &&
          tokenRange.getBegin().getRange().isPresent())) {
        range = null;
    } else {
        range = new Range(
            tokenRange.getBegin().getRange().get().begin,
            tokenRange.getEnd().getRange().get().end);
    }
    return this;
}

The analyzer has detected identical expressions on the left and the right of the && operator. Besides that, all methods in the chain are pure. Most likely, in the second case, tokenRange.getEnd() has to be used rather than tokenRange.getBegin().

PVS-Studio warning: V6016 Suspicious access to element of ‘typeDeclaration.getTypeParameters()’ object by a constant index inside a loop. ResolvedReferenceType.java 265

if (!isRawType()) {
    for (int i = 0; i < typeDeclaration.getTypeParams().size(); i++) {
        typeParametersMap.add(
            new Pair<>(typeDeclaration.getTypeParams().get(0),
                       typeParametersValues().get(i)));
    }
}

The analyzer has detected a suspicious access to an element of a collection by a constant index inside a loop. Perhaps there is an error in this code.

Jenkins Project

PVS-Studio warning: V6007 Expression ‘cnt < 5’ is always true. AbstractProject.java 557

public final R getSomeBuildWithWorkspace() {
    int cnt = 0;
    for (R b = getLastBuild();
         cnt < 5 && b != null;
         b = b.getPreviousBuild()) {
        FilePath ws = b.getWorkspace();
        if (ws != null)
            return b;
    }
    return null;
}

In this code, the variable cnt was introduced to limit the number of traversals to five, but the developer forgot to increment it, which resulted in a useless check.

Spark Project

PVS-Studio warning: V6007 Expression ‘sparkApplications != null’ is always true. SparkFilter.java 127

if (StringUtils.isNotBlank(applications)) {
    final String[] sparkApplications = applications.split(",");
    if (sparkApplications != null && sparkApplications.length > 0) {
        ...
    }
}

Checking the result returned by the split method for null is meaningless, because this method always returns an array and never returns null.

Spoon Project

PVS-Studio warning: V6001 There are identical sub-expressions ‘!m.getSimpleName().startsWith("set")’ to the left and to the right of the ‘&&’ operator. SpoonTestHelpers.java 108

if (!m.getSimpleName().startsWith("set") &&
    !m.getSimpleName().startsWith("set")) {
    continue;
}

In this code, there are identical expressions on the left and right of the && operator. In addition to that, all methods in the chain are pure. Most likely, there is a logic error in the code.

PVS-Studio warning: V6007 Expression ‘idxOfScopeBoundTypeParam >= 0’ is always true. MethodTypingContext.java 243

private boolean isSameMethodFormalTypeParameter(....) {
    ....
    int idxOfScopeBoundTypeParam = getIndexOfTypeParam(....);
    if (idxOfScopeBoundTypeParam >= 0) {   // <=
        int idxOfSuperBoundTypeParam = getIndexOfTypeParam(....);
        if (idxOfScopeBoundTypeParam >= 0) {   // <=
            return idxOfScopeBoundTypeParam == idxOfSuperBoundTypeParam;
        }
    }
    ....
}

Here, the author of the code made a typo and wrote idxOfScopeBoundTypeParam instead of idxOfSuperBoundTypeParam.

Spring Security Project

PVS-Studio warning: V6001 There are identical sub-expressions to the left and to the right of the ‘||’ operator. Check lines: 38, 39. AnyRequestMatcher.java 38

@Override
@SuppressWarnings("deprecation")
public boolean equals(Object obj) {
    return obj instanceof AnyRequestMatcher ||
        obj instanceof security.web.util.matcher.AnyRequestMatcher;
}

The triggering is similar to the previous one — the name of the same class is written in different ways.

PVS-Studio warning: V6006 The object was created but it is not being used. The ‘throw’ keyword could be missing. DigestAuthenticationFilter.java 434

if (!expectedNonceSignature.equals(nonceTokens[1])) {
    new BadCredentialsException(
        DigestAuthenticationFilter.this.messages
            .getMessage("DigestAuthenticationFilter.nonceCompromised",
                        new Object[] { nonceAsPlainText },
                        "Nonce token compromised {0}"));
}

In this code, the developer forgot to add throw before the exception. As a result, the BadCredentialsException object is created but never used, i.e., no exception is thrown.

PVS-Studio warning: V6030 The method located to the right of the ‘|’ operators will be called regardless of the value of the left operand. Perhaps, it is better to use ‘||’. RedirectUrlBuilder.java 38

public void setScheme(String scheme) {
    if (!("http".equals(scheme) | "https".equals(scheme))) {
        throw new IllegalArgumentException("...");
    }
    this.scheme = scheme;
}

In this code, the use of the | operator is unwarranted, because the right part will be evaluated even if the left part is already true. Since this has no practical benefit here, the | operator should be replaced with ||.

IntelliJ IDEA Project

PVS-Studio warning: V6008 Potential null dereference of ‘editor’. IntroduceVariableBase.java:609

final PsiElement nameSuggestionContext = editor == null ? null : file.findElementAt(...); // <=
final RefactoringSupportProvider supportProvider = LanguageRefactoringSupport.INSTANCE.forLanguage(...);
final boolean isInplaceAvailableOnDataContext = supportProvider != null && editor.getSettings().isVariableInplaceRenameEnabled() && // <=
...

In this code, the analyzer has detected that a dereference of the potentially null pointer editor may occur. An additional check is necessary.

PVS-Studio warning: V6007 Expression is always false. RefResolveServiceImpl.java:814

@Override
public boolean contains(@NotNull VirtualFile file) {
    ....
    return false & !myProjectFileIndex.isUnderSourceRootOfType(....);
}

It is difficult for me to say what the author had in mind, but this looks very suspicious. Even if there is no error here, this code should be rewritten so as not to confuse the analyzer and other programmers.

PVS-Studio warning: V6007 Expression ‘result[0]’ is always false. CopyClassesHandler.java:298

final boolean[] result = new boolean[] {false}; // <=
Runnable command = () -> {
    PsiDirectory target;
    if (targetDirectory instanceof PsiDirectory) {
        target = (PsiDirectory)targetDirectory;
    } else {
        target = WriteAction.compute(() ->
            ((MoveDestination)targetDirectory).getTargetDirectory(
                defaultTargetDirectory));
    }
    try {
        Collection<PsiFile> files =
            doCopyClasses(classes, map, copyClassName, target, project);
        if (files != null) {
            if (openInEditor) {
                for (PsiFile file : files) {
                    CopyHandler.updateSelectionInActiveProjectView(
                        file, project, selectInActivePanel);
                }
                EditorHelper.openFilesInEditor(
                    files.toArray(PsiFile.EMPTY_ARRAY));
            }
        }
    } catch (IncorrectOperationException ex) {
        Messages.showMessageDialog(project, ex.getMessage(),
            RefactoringBundle.message("error.title"),
            Messages.getErrorIcon());
    }
};
CommandProcessor processor = CommandProcessor.getInstance();
processor.executeCommand(project, command, commandName, null);
if (result[0]) { // <=
    ToolWindowManager.getInstance(project).invokeLater(() ->
        ToolWindowManager.getInstance(project)
            .activateEditorComponent());
}

Here, I suspect that someone forgot to change the value in result. Because of this, the analyzer reports that the check if (result[0]) is meaningless.

Conclusion

Java development is very versatile. It includes desktop, Android, web, and much more, so we have plenty of room for activity. First and foremost, of course, we will develop the areas that are in the highest demand.

Here are our plans for the near future:

  • The inference of annotations from bytecode;
  • Integration into Ant projects (does anybody still use it in 2018?);
  • Plugin for Eclipse (currently in the development process);
  • More diagnostics and annotations;
  • Improvement of data flow.

Original Link

Fixing Gradle Dependency Resolution

Maven Central and Bintray have announced that they will discontinue support for TLS v1.1 and below. Here’s what you need to know to correct your Gradle builds if you’re affected.

You will need to take action if you are using Java 6 or 7 and using Gradle versions 2.1 through 4.8.

How to Check if You’re Affected

You may already be getting one of the following errors from your build after an error message saying: “Could not resolve [coordinates]”:

Received fatal alert: protocol_version

or

Peer not authenticated

If not, you can check to see whether you will be affected by running the following code:

gradle --version          # Without Gradle Wrapper
./gradlew --version       # Using Gradle Wrapper on *nix
gradlew.bat --version     # Using Gradle Wrapper on Windows

It will print something like this:

------------------------------------------------------------
Gradle 3.5
------------------------------------------------------------

Build time:   2017-04-10 13:37:25 UTC
Revision:     b762622a185d59ce0cfc9cbc6ab5dd22469e18a6

Groovy:       2.4.10
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.7.0_80 (Oracle Corporation 24.80-b11)
OS:           Mac OS X 10.13.5 x86_64

You must take action if all of these are true:

  • JVM version is Java 7u130 or lower
  • and the Gradle version is between 2.1 and 4.8, inclusive
  • and your build declares a repositories {} block with mavenCentral() or jcenter()

How to Use TLS 1.2 for Dependency Resolution

You can take any one of the following actions to use TLS v1.2+:

  • Run Gradle with Java 1.7.0_131-b31 or above
  • or upgrade to Gradle 4.8.1 or above
  • or replace mavenCentral() with maven { url = "http://repo.maven.apache.org/maven2" } and jcenter() with maven { url = "http://jcenter.bintray.com" }

The first two solutions are recommended, while the third opens a possible attack vector.
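
If you use the Gradle wrapper, upgrading is a one-line command; for example:

./gradlew wrapper --gradle-version=4.8.1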

Other Resources

Posts about discontinued support for old versions of TLS on Maven Central and in the Bintray knowledge base explain the background for the necessity of these changes.

You may also find Gradle-specific details from gradle/gradle#5740 on GitHub.

Original Link

Gradle Goodness: Enable Tasks Based on Offline Command Line Argument

One of the command line options of Gradle is --offline. With this option, we run Gradle in offline mode to indicate we are not connected to network resources like the internet. This could be useful for example if we have defined dependencies in our build script that come from a remote repository, but we cannot access the remote repository, and we still want to run the build. Gradle will use the locally cached dependencies, without checking the remote repository. New dependencies, not already cached, cannot be downloaded of course, so in that scenario we still need a network connection.

We can check in our build script if the --offline command line argument is used. We can use this to disable tasks that depend on network resources so the build will not fail. To see if we invoked our build with the --offline option we can access the property gradle.startParameter.offline. The value is true if the command line argument --offline is used and false if the command line argument is not used.

In the following example build file we use the task type VfsCopy from the VFS Gradle Plugin to define a new task download. The task will download the file index.html from the site http://www.mrhaki.com. We enable the task if the --offline command line argument is not used. If the argument is used the task is disabled.

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'org.ysb33r.gradle:vfs-gradle-plugin:1.0'
        classpath 'commons-httpclient:commons-httpclient:3.1'
    }
}

task download(type: org.ysb33r.gradle.vfs.tasks.VfsCopy) {
    description = 'Downloads index.html from http://www.mrhaki.com'
    group = 'Remote'

    // Only enable task when we don't use
    // the --offline command line argument.
    enabled = !gradle.startParameter.offline

    from 'http://www.mrhaki.com/index.html'
    into project.file("${buildDir}/downloads")
}

Let’s run the download task with and without the --offline option:

$ gradle download --offline --console=plain
> Task :download SKIPPED

BUILD SUCCESSFUL in 0s

$ gradle download --console=plain
> Task :download

BUILD SUCCESSFUL in 1s
1 actionable task: 1 executed

Written with Gradle 4.8.

Original Link

Improve the User Experience in Your Custom Dev Tools

Whether you are writing an internal tool for your colleagues at work, or writing something open-source, you should be thinking about the user experience other developers have with your tool. Just like in a client facing app or API, a good user experience in dev tools is incredibly valuable and requires deliberate thought and consideration to get right.

Over the past few months, I’ve been doing a lot of work on dev tooling. My team at work currently produces and maintains a set of Gradle plugins that automate our software packaging and versioning processes. We are also involved in work done on deployment tooling, which is mostly Ansible complemented by Bash and Python scripting. Having gone through several iterations of these tools, and listening to feedback from my colleagues who use them, I am writing this article to share with you 3 simple things you can do to improve the user experience in your own custom dev tools.

1. Take a Design First Approach to User Input

For most non-trivial tools, you will most likely need input from your users. This input could be in the form of a configuration file, a closure in a buildscript, or even just arguments required when invoking a script. In any case, the way this input is specified is basically the API of your tool, and like in API design, I have found that a very good way to decide on what the user input should look like is to take a design first approach.

Before coding anything up for a new feature, ask yourself, “If I were using this tool to work on my own project, how would I prefer to use it? What would be the cleanest and simplest way to specify what I want this to do?” Take your time coming up with an answer, and try out different options. Once you’ve come up with a couple of alternatives, pick the one you think you would like best, maybe even ask for a second opinion from a colleague, and then figure out how you will write the code that accepts the input in your selected format. This way, you are much more likely to end up with a tool that is pleasant and intuitive to use.

As a simple example, imagine a script that can be triggered with some optional parameters. There’s a good chance that, if you don’t take a design first approach, you will end up with a script invocation that looks something like the following.

> my_script true 5

However, if you consider how you want to use this script beforehand, you might think it would be intuitive to specify the optional parameters with something like the following.

> my_script --with-option-a --parameter-b=5

2. Write Good Documentation

When you need help understanding how to use a Bash command, you bring up the manual pages, where you know you will find all the information you need. When you need help understanding what some Spring method does, you take a look at the Spring Javadocs, because you know you will find an explanation there. Similarly, your colleagues, or any other users of your tool, should have a reliable source of information when they need help properly using and configuring it.

I personally found that a good README for your tools will make life easier for both you and your users. It will help you because your colleagues will stop interrupting you to ask questions about how to use your tools, and it will help your colleagues because they will have quick access to the information they need.

But what is in a good README? It’s probably safe to say that most people do not want to read a giant wall of text when they’re adding a plugin to their build scripts. Build scripts are rarely what you want to spend most of your time on when working on a project. Neither do they want to sift through endless text to find out how they can use your deploy script to deploy their hotfix when a critical bug is in production. Therefore I would suggest as much simplicity as possible (KISS is not just for code, you know). Nothing beats a couple of code snippets showing example usages or configurations that cover the majority of use cases. You can supplement that with some brief sentences or paragraphs indicating how the examples can be customized to meet different needs. If the document is long enough to require scrolling through it, consider a clickable table of contents.

3. Print Out Helpful Error Messages and Logs

Your dev tools should handle errors as gracefully as possible, giving the user helpful information about what went wrong. Remember that dev tools are there to enable users to be more productive. If you are writing a Gradle plugin, for instance, your users are developers trying to set up their build scripts. If something goes wrong in their build, they are going to need to know what happened so they can fix it. In particular, try to identify errors that can happen because of misconfiguration, or in relation to some other user input, and make sure you are printing out a helpful error message explaining what needs to be corrected in order for the tool to work. You don’t want your colleagues to have to try to figure out what went wrong by going through a stack trace of your code, or worse, come disrupt whatever you’re currently working on to ask how to fix their broken builds. That’s not going to help anyone’s productivity.

Apart from error messages, your tools might also output some logs indicating what work is being done. It will be helpful to your users if you are logging only relevant information at the right level, so that you do not clutter logs. For Gradle plugins, for instance, logs at the lifecycle level should include any information you think would be useful in a standard successful build run. These tend to act as a progress indicator for your tasks. Logs that give deeper insight into the specifics of what the plugin is doing should be marked as debug.
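
As a tiny illustration (a sketch, not taken from any particular plugin), Gradle’s logger makes this split straightforward:

// build.gradle (sketch): lifecycle for progress, debug for details
task packageApp {
    doLast {
        // Shown during a normal build run; acts as a progress indicator.
        logger.lifecycle "Packaging ${project.name} ${project.version}"
        // Only shown when the build runs with --debug; keeps default output uncluttered.
        logger.debug "Using output directory ${buildDir}"
    }
}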

Conclusion

A good user experience is as important in your dev tools as it is for your typical client facing applications. In fact, as a developer of dev tools, it helps to consider your colleagues and any other user of your tool as your client. This way you can deliberately start considering the experience they have when using your tool, and what you can do to maximize their efficiency and happiness while using it.

A good dev tool gets the job done. A great dev tool gets the job done while causing no headaches, being intuitive to use, and providing enough information to keep the user as productive as possible.

Original Link

Gradle: Upload a List of JARs Into Nexus/Artifactory

More and more Java projects are starting to use Gradle as their build tool. As a coder, you’ll find the power back in your hands again after using it. Because a Gradle build script is itself a Groovy script, a custom task can be created easily and concisely. Here is an example of loading a list of libraries into a Nexus or Artifactory server. Before loading a whole list, let’s start with a custom task for a single library.

configurations {
    oneLib
}

artifacts {
    oneLib file("path-to/one-lib.jar")
}

apply plugin: 'maven'

project.tasks.create('uploadOneLib', Upload.class) {
    configuration = configurations.oneLib
    repositories {
        mavenDeployer {
            repository(url: "${protReleaseUrl}") {
                authentication(userName: "${protReleaseUser}", password: "${protReleasePswd}")
            }
            pom.version = "0.0.1"
            pom.artifactId = "one-lib"
            pom.groupId = "com.onelib"
        }
    }
}

The above Gradle DSL defines an “uploadOneLib” task to load one-lib.jar into the release repository as com.onelib:one-lib-0.0.1.jar (the URL and credentials are provided by gradle.properties under ${user.home}/.gradle or the current folder). It extends the built-in “Upload” task to use Maven Deploy. If one-lib-0.0.1.jar has 5 dependent JARs, we could load all of them by simply duplicating this task for each one with slight modifications. But since Gradle has loop constructs, it would be nicer to just list the libraries to be uploaded and the target URLs/credentials, with some allowance for different URLs/credentials for different libraries and different classifiers for some libraries. The metadata structure will look like this:

{
  "libs": [
    {
      "cfgName": "uploadInternal1",
      "groupId": "com.prot.zbank.client",
      "artifactId": "client-model",
      "version": "0.0.8",
      "jarLocation": "client/client-model.jar"
    },
    {
      "cfgName": "uploadInternal2",
      "groupId": "com.prot.zbank.client",
      "artifactId": "client-model",
      "classifier": "test",
      "version": "0.0.8",
      "jarLocation": "client/unit-test.jar"
    },
    {
      "cfgName": "uploadInternal3",
      "groupId": "com.prot.zbank.client",
      "artifactId": "client-service",
      "version": "0.0.9",
      "jarLocation": "client/rest-api.jar"
    },
    {
      "cfgName": "uploadOracleDriver",
      "groupId": "com.oracle",
      "artifactId": "ojdbc8",
      "version": "0.2b",
      "jarLocation": "3rd/ojdbc8.jar",
      "repoUrl": "http://a.b.com/repository/third-party/",
      "repoUser": "david",
      "repoPswd": "Passw0rd888"
    }
  ],
  "repoUrl": "http://a.b.com/repository/zbank-project-release/",
  "repoUser": "george",
  "repoPswd": "Passw0rd999"
}

The model says to:

  1. Upload from client/client-model.jar to com.prot.zbank.client:client-model-0.0.8.jar

  2. Upload from client/unit-test.jar to com.prot.zbank.client:client-model-0.0.8-test.jar

  3. Upload from client/rest-api.jar to com.prot.zbank.client:client-service-0.0.9.jar

  4. Upload from 3rd/ojdbc8.jar to com.oracle:ojdbc8-0.2b.jar

The first 3 JARs are uploaded to http://a.b.com/repository/zbank-project-release/, and the last one will be uploaded to http://a.b.com/repository/third-party/.

Gradle shines for this kind of task. In about 50 lines of code, a custom task — “batchUpload” — is created for our needs.

apply plugin: 'maven'

def f = file("upload-list.json")
def m = f.exists() ? new groovy.json.JsonSlurper().parse(f) : null

task batchUpload() {
}

if (m == null || m.libs == null || m.libs.isEmpty()) {
    batchUpload.doLast {
        println "WARNING: no file to upload"
    }
} else {
    m.libs.each { lib ->
        configurations.create( "${lib.cfgName}" )
        if (lib.classifier == null) {
            artifacts.add("${lib.cfgName}", file("${lib.jarLocation}"))
        } else {
            artifacts.add("${lib.cfgName}", file("${lib.jarLocation}")) {
                setClassifier( "${lib.classifier}" )
            }
        }
        def repoUrl = lib.repoUrl != null ? lib.repoUrl : m.repoUrl;
        def repoUser = lib.repoUser != null ? lib.repoUser : m.repoUser;
        def repoPswd = lib.repoPswd != null ? lib.repoPswd : m.repoPswd;
        def classifier = lib.classifier == null ? "" : ("-" + lib.classifier)
        println "define task '${lib.cfgName}' to upload ${lib.jarLocation} to ${repoUrl} as ${lib.groupId}:${lib.artifactId}-${lib.version}${classifier}.jar"
        project.tasks.create("${lib.cfgName}", Upload.class) {
            configuration = configurations[lib.cfgName]
            repositories {
                mavenDeployer {
                    repository(url: "${repoUrl}") {
                        authentication(userName: "${repoUser}", password: "${repoPswd}")
                    }
                    pom.version = "${lib.version}"
                    pom.artifactId = "${lib.artifactId}"
                    pom.groupId = "${lib.groupId}"
                }
            }
        }
        batchUpload.dependsOn "${lib.cfgName}"
    }
}

It loops through each lib from the JSON model, creates a custom task to perform the upload, then tells the “batchUpload” task to depend on this task, so running “gradle batchUpload” will try to upload all libraries. Running “gradle ${lib.cfgName}” will just upload that ${lib} only. I can feel the beauty of this simplicity after using Gradle for one year. 

Original Link

IntelliJ IDEA 2018.2 Early Access Program Is Open!

Today we are excited to announce the start of the IntelliJ IDEA 2018.2 Early Access Program! Here are just a few reasons to download the first EAP build: the MacBook Touch Bar support, improvements in the Gradle support, new icons, Spring Boot goodies, a bunch of new inspections, and much more!

For those of you who were expecting the MacBook Touch Bar support in IntelliJ IDEA, we have some welcome news for you – your days of waiting are over! The upcoming IntelliJ IDEA 2018.2 introduces the Touch Bar support!

Now you can run, build, and debug your project, along with committing changes or updating the project right from the Touch Bar. The Touch Bar shows the controls depending on the context or which modifier keys you press. We support the most popular contexts, and even better – the contexts can be customized!


The IntelliJ IDEA buttons are displayed in the app-specific area in the middle of the Touch Bar interface. While its left part does not change across contexts, the VCS buttons can be replaced. For example, when the Maven or Gradle toolbar gets the focus, the refresh (re-import) actions replace the VCS buttons.


To select a configuration, simply press on the run/build configuration popover.


If no configurations have been created, the Touch Bar shows the Add Configuration button.


The debugger is an essential part of the IDE, so we have given it its own special layout, which the Touch Bar shows when the Debug tool window becomes active.


When the debugger is paused or has stopped at a breakpoint, the Pause button is replaced with a Resume button, and the Touch Bar displays the Step and the Evaluate Expression buttons.


Need more debugger actions? Press the Alt key.


For the dialogs, the confirmation buttons are displayed in the center of the Touch Bar.


How to Customize the Touch Bar

You can currently configure both the main context and the debugger context of the Touch Bar. If you have a laptop with a Touch Bar, there is now a new Touch Bar page under Preferences | Appearance & Behaviour | Menus and Toolbars.


Try it right now and see for yourself. We’d love to hear about your experience of using the Touch bar with IntelliJ IDEA; please share your feedback with us!

Java

The code completion in IntelliJ IDEA continues to evolve, and now the IDE shows both all the possible auto-completions and Javadoc, at the same time (without the need to directly invoke Javadoc each time). Please be aware that you need to actually enable this cool new feature. Go to Preferences | Editor | General | Code Completion and turn on the Show the documentation info pop-up in… options.


The code completion now suggests the Collectors.joining() collector for the String stream.


Additionally, the code completion now suggests members of the known common super-type.


The IDE will also now display known data flow information right inside the editor; you just need to invoke the Expression Type action (Ctrl+Shift+P by default) a second time.


We’ve updated some of our existing inspections too due to Java 10 API changes. For starters, the loop to stream API migration inspection now uses the new toUnmodifiableList/Set/Map collectors automatically, if the original result was wrapped with Collections.unmodifiableList/Set/Map.

image4

The reverse conversion – the Stream to loop inspection also supports the Java 10 collectors: toUnmodifiableList, toUnmodifiableSet, toUnmodifiableMap.

image26

The Stream API support has been improved in the upcoming IntelliJ IDEA 2018.2: the IDE now detects cases where a sorted stream is collected into an unsorted collection, which indicates that either the sorting is unnecessary or the choice of collector or collection is wrong. You’ll also get a warning about a redundant distinct() call before collect(toSet()), as the result will be distinct anyway when collecting to a Set.

image36

image20

The IDE now suggests folding a boolean expression into a Stream chain if you have at least three stream elements. It works for && chains, || chains, and also some + chains (for example, getSomething("foo") + "," + getSomething("bar") + "," + getSomething("baz") can be converted into Stream.of("foo", "bar", "baz").map(this::getSomething).collect(joining(",")))).

Please note that by default this inspection doesn’t highlight code or issue a warning. You can invoke it with Alt+Enter and apply the fix. As this is an inspection, you can also run it over a scope to find all the places where it’s applicable.

image5

In IntelliJ IDEA 2018.1 we’ve merged all the inspections that detect a redundant string operation into a single Redundant String operation inspection. With the upcoming IntelliJ IDEA 2018.2 we’ve continued our work on detecting redundant string operations, and now the IDE also detects cases where substring() is called with the length of the source string as its second argument.

image38

We’ve also updated the Unnecessary call to ‘String.valueOf’ inspection and renamed it to Unnecessary conversion to String. This inspection now detects unnecessary calls to static toString() methods such as Integer.toString(x) or Boolean.toString(b).

image33

The Join Lines action (Ctrl+Shift+J on Linux/Windows/macOS) has been improved; it can now merge multiple method calls into a chained call. It works for any method call whose return type is the same as the qualifier type, and also on a declaration or assignment line with a subsequent call.

image22

The Join Lines action now produces a cleaner result with the nested if, and when joining lines with an unnecessary 0.

image2

image19

IntelliJ IDEA 2018.2 will support Java 11! Stay tuned: you are not going to want to miss the dedicated blog post about Java 11 features in your favorite IDE!

Gradle

You can now debug a Gradle script in IntelliJ IDEA. Previously, you could debug a build.gradle file only as a Groovy script, but those dark days are over. With IntelliJ IDEA 2018.2 you can now debug a build.gradle file and set breakpoints not only at the top level of the Gradle build script but also in the Gradle DSL blocks.

image25

We have significantly improved the debugging of Gradle tasks. The debugger can connect to the Gradle daemon, or to any Java process that was started by Gradle for tasks that implement the org.gradle.process.JavaForkOptions interface.

Another improvement in this area is the auto-discovery of an included buildSrc Gradle project. Now, the IDE links Gradle’s buildSrc sources and their usages in a build. So you can now navigate from the build scripts to the buildSrc source.

image30
Maven

We’ve improved Maven support in IntelliJ IDEA: now, when the IDE highlights dynamically created properties, it also provides a handy quick-fix to suppress this warning.

Furthermore, the IDE also now works correctly with the maven-compiler-plugin version 3.7.0, and supports the <release> option.

Spring and Spring Boot

We have significantly improved the performance of Spring projects.

As you may already know, IntelliJ IDEA provides dedicated Spring diagrams to make it easier to quickly analyze even complex Spring and Spring Boot projects.

You can view the dependencies between beans using the Spring Beans diagram, or view the relationships between configuration files using the Spring Model Dependencies diagram. These diagrams are extremely useful, but they are static diagrams based on XML or Java configuration files. But what if you want to view a dependency graph for runtime beans?

In the upcoming IntelliJ IDEA 2018.2, you can select the new Diagram Mode option and visualize the dependencies between runtime beans of a Spring Boot application.

Click the new Diagram Mode icon in the right gutter of the Beans tab in the Run Dashboard (after you’ve started your application); then you can view the Spring Runtime Beans diagram for the whole application.

image7

As you may recall, with IntelliJ IDEA 2018.1 we’ve introduced a new runtime feature that gives you the ability to access HTTP request mappings from the editor via the new editor-based REST Client. The upcoming major update v2018.2 has enhanced this feature even further, so you can manage your HTTP request mappings from the Run Dashboard.

After you run your application, you can select the request you need from the Mappings tab and either run your HTTP request straight away or open it in the editor-based REST client.

image8

For GET methods, you have the additional option to open an HTTP request in the browser.

image29

Improvements are also being made for Version Control Systems. Firstly, to make it much easier for you to find files with merge conflicts, the upcoming IntelliJ IDEA 2018.2 groups such files under a new Merge Conflicts node for each changelist. You can invoke and resolve the merge conflicts by simply clicking on the Resolve action.

image3

Favorite branches were added to the Branch filter in the Log tab of the Version Control tool window, so you can now quickly filter commits by your favorite branches.

image23

One more improvement to the VCS Log tab is that you can now open as many Log tabs as needed, set different filters for each Log tab, and monitor the changes in separate tabs. There is no need to switch back and forth between different filters anymore.

image34

Additionally, you can now preview Diff in the VCS Log.

image1

It is now possible to skip the Push dialog when using Commit and Push action or show it only when pushing to protected branches.

image9

The upcoming IntelliJ IDEA 2018.2 brings support for multiple GitHub accounts. Now you can easily work on your company project and your pet project without the need to switch GitHub accounts: just configure all the accounts you use in Preferences | Version Control | GitHub, and specify the default GitHub account per project.

image37

We’ve also improved support for JSON files in the upcoming IntelliJ IDEA v2018.2.
First of all, the IDE now supports JSON Schema v4, v6, and v7.

You can now easily apply the required JSON schema for a chosen file using the brand new JSON schema drop-down menu in the status bar. To edit the existing Schema Mappings or create your own schema – simply click Edit Schema Mappings…

image35

Another useful improvement in this area is the new action to copy JSON editor breadcrumbs. You can copy either a qualified name or a JSON Pointer. To invoke this action right-click in the editor breadcrumbs.

image10

We’ve also added a new inspection that checks the compliance of a JSON file with its applied JSON schema.

image13

We’ve improved Java formatting and editing in a number of ways. There is a new option that adds a blank line before the class end and which affects classes and interfaces (but not anonymous classes). The IDE also adds an indent that puts the caret in the correct place (with the Enter key) in a binary expression. There are more small, yet very helpful improvements. For more information please refer to: IDEA-57898, IDEA-98552, IDEA-108112, IDEA-115696, IDEA-153628.

One more useful improvement to the upcoming IntelliJ IDEA 2018.2 – the ability to specify the default directory for opening projects. In the Preferences | Appearance & Behavior | System Settings in the Project Opening section there is a new Default directory field where you can set a directory.

We’ve bundled Kotlin 1.2.40 with the first IntelliJ IDEA 2018.2 EAP build. You can learn more here.

Also, we think you’ve probably already noticed our new icons; we hope you like them!

To see the full list of fixed issues – dive into the full release notes available with this link.

That’s it for now! Download the first EAP build and give it a thorough try! As usual, we look forward to your feedback: discussion forum, issue tracker, Twitter or here in the comments!

Happy developing!

Original Link

Gradle Goodness: Command Line Options for Custom Tasks

Gradle 4.6 added an incubating feature for declaring command line options on custom tasks. This means we can run a task with Gradle and pass information to the task via command line options. Without this feature, we would have to use project properties passed via -P or --project-prop. The good thing about the new feature is that the Gradle help task displays extra information about the command line options supported by a custom task.
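For comparison, here is a small, illustrative snippet (not from the original post; the task and property names are made up) showing the project-property workaround, which you would run with gradle openWithProperty -Pfile=README:

// Illustrative only: passing data with a project property instead of @Option.
task openWithProperty {
    doLast {
        // findProperty returns null when -Pfile is not supplied
        def path = project.findProperty('file') ?: 'README'
        println "Would open: ${project.file(path)}"
    }
}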

To add a command line option we simply use the @Option annotation on the setter method of a task property. We must make sure the argument for the setter method is either a boolean, Boolean, String, enum, List<String>, or List<enum>. The @Option annotation requires an option argument with the name of the option as it must be entered by the user. Optionally, we can add a description property with a description about the option. It is good to add the description, because the help task of Gradle displays this information and helps the user of our custom task.

Let’s start with a custom task that opens a file in the default application associated with its file type. The task has a property, file, of type File. Remember, we cannot create a command line option for every property type. To add a command line option, we must overload the setFile method to accept a String value. This setter method is used to expose the file property as an option.

// File: buildSrc/src/main/groovy/mrhaki/gradle/OpenFile.groovy
package mrhaki.gradle

import org.gradle.api.DefaultTask
import org.gradle.api.GradleException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import org.gradle.api.tasks.options.Option

import groovy.transform.CompileStatic

import java.awt.Desktop

/**
 * Open a file or URI with the associated application.
 */
@CompileStatic
class OpenFile extends DefaultTask {

    /**
     * File to open.
     */
    @Input
    File file

    /**
     * Set description and group for task.
     */
    OpenFile() {
        description = 'Opens file with the associated application.'
        group = 'Help'
    }

    /**
     * Overload the setter method to support a String parameter. Now
     * we can add the @Option annotation to expose our file property
     * as command line option.
     *
     * @param path The object to resolve as a {@link File} for {@link #file} property.
     */
    @Option(option = 'file', description = 'Set the filename of the file to be opened.')
    void setFile(final String path) {
        this.file = project.file(path)
    }

    /**
     * Check if {@link Desktop} is supported. If not throw exception with message.
     * Otherwise open file with application associated to file format on the
     * runtime platform.
     *
     * @throws GradleException If {@link Desktop} is not supported.
     */
    @TaskAction
    void openFile() {
        if (Desktop.isDesktopSupported()) {
            Desktop.desktop.browse(new URI("file://${file.absolutePath}"))
        } else {
            throw new GradleException('Native desktop not supported on this platform. Cannot open file.')
        }
    }
}

We use the task in the following build file. We create a task openReadme and set the file property value in the build script. We also create the task open, but don’t set the file property. We will use the command line option file for this task:

task open(type: mrhaki.gradle.OpenFile)

task openReadme(type: mrhaki.gradle.OpenFile) {
    file = project.file('README')
}

To run the open task and set the file command line option, we use a double-dash as prefix:

$ gradle open --file=README

We can get more information about the supported options for our task by invoking the help task. We specify the task name with the --task option:

$ gradle help --task=open

> Task :help
Detailed task information for open

Path
     :open

Type
     OpenFile (mrhaki.gradle.OpenFile)

Options
     --file     Set the filename of the file to be opened.

Description
     Opens file with the associated application.

Group
     Help

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

To define a set of valid values for an option, we must add a method to our class that returns a list of valid values and is annotated with the @OptionValues annotation. We must set the task property name as the argument for the annotation. Then, when we invoke Gradle’s help task, we see a list of valid values. Validation of the property value is not done by the annotation; it is only for informational purposes, so we must add the validation to the task code ourselves.

We rewrite our OpenFile task and add the requirement that only files in the project directory can be opened and that the filename must be in upper case. The method availableFiles returns all files in the project directory whose names are in upper case. The @OptionValues annotation is added to this method. In the task action method openFile, we check whether the file is valid to be opened:

package mrhaki.gradle

import org.gradle.api.DefaultTask
import org.gradle.api.GradleException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import org.gradle.api.tasks.options.Option
import org.gradle.api.tasks.options.OptionValues

import groovy.transform.CompileStatic

import java.awt.Desktop

/**
 * Open a file or URI with the associated application.
 */
@CompileStatic
class OpenFile extends DefaultTask {

    /**
     * File to open.
     */
    @Input
    File file

    /**
     * Set description and group for task.
     */
    OpenFile() {
        description = 'Opens file with the associated application.'
        group = 'Help'
    }

    /**
     * Check if {@link Desktop} is supported. If not throw exception with message.
     * Otherwise open file with application associated to file format on the
     * runtime platform.
     *
     * @throws GradleException If {@link Desktop} is not supported or
     *         if filename not all uppercase in project dir.
     */
    @TaskAction
    void openFile() {
        if (!fileAllUpperCaseInProjectDir) {
            throw new GradleException('Only all uppercase filenames in project directory are supported.')
        }
        if (Desktop.isDesktopSupported()) {
            Desktop.desktop.browse(new URI("file://${file.absolutePath}"))
        } else {
            throw new GradleException('Native desktop not supported on this platform. Cannot open file.')
        }
    }

    private boolean isFileAllUpperCaseInProjectDir() {
        getAllUpperCase()(file.name) && project.projectDir == file.parentFile
    }

    /**
     * Overload the setter method to support a String parameter. Now
     * we can add the @Option annotation to expose our file property
     * as command line option.
     *
     * @param path The object to resolve as a {@link File} for {@link #file} property.
     */
    @Option(option = 'file', description = 'Set the filename of the file to be opened.')
    void setFile(final String path) {
        this.file = project.file(path)
    }

    /**
     * Show all files with filename in all uppercase in the project directory.
     *
     * @return All uppercase filenames.
     */
    @OptionValues('file')
    List<String> availableFiles() {
        project.projectDir.listFiles()*.name.findAll(allUpperCase)
    }

    private Closure getAllUpperCase() {
        { String word -> word == word.toUpperCase() }
    }
}

Let’s run the help task and see that available values are shown this time:

$ gradle help --task=open

> Task :help
Detailed task information for open

Path
     :open

Type
     OpenFile (mrhaki.gradle.OpenFile)

Options
     --file     Set the filename of the file to be opened.
                Available values are:
                     README

Description
     Opens file with the associated application.

Group
     Help

BUILD SUCCESSFUL in 2s
1 actionable task: 1 executed

Written with Gradle 4.7.

Original Link

Apache Flink Basic Transformation Example

Apache Flink is a stream processing framework with added capabilities such as batch processing, graph algorithms, machine learning, reports, and trend insights. Apache Flink can help you process vast amounts of data in a very efficient and scalable manner.

In this article, we’ll be reading data from a file, transforming it to uppercase, and writing it into a different file.

Gradle Dependencies

dependencies { compile "org.apache.flink:flink-java:1.4.2" compile "org.apache.flink:flink-streaming-java_2.11:1.4.2" compile "org.apache.flink:flink-clients_2.11" }

Core Concept of Flink API

Image title

When working with the Flink API:

  • DataSource represents a connection to the original data source.
  • Transformation represents what needs to be performed on the events within the data streams. A variety of functions for transforming data are provided, including filtering, mapping, joining, grouping, and aggregating.
  • Data sink triggers the execution of a stream to produce the desired result of the program, such as saving the result to the file system, printing it to the standard output, writing to the database, or writing to some other application.

The data source and data sink components can be set up easily using built-in connectors that Flink provides to different kinds of sources and sinks.

Flink transformations are lazy, meaning that they are not executed until a sink operation is invoked.

Code:

package com.uppi.poc.flink;

import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.core.fs.FileSystem.WriteMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UpperCaseTransformationApp {

    public static void main(String[] args) throws Exception {
        DataStream<String> dataStream = null;
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        final ParameterTool params = ParameterTool.fromArgs(args);
        env.getConfig().setGlobalJobParameters(params);
        if (params.has("input") && params.has("output")) {
            // data source
            dataStream = env.readTextFile(params.get("input"));
        } else {
            System.err.println("No input specified. Please run 'UpperCaseTransformationApp --input <file-to-path> --output <file-to-path>'");
            return;
        }
        if (dataStream == null) {
            System.err.println("DataStream created as null, check file path");
            System.exit(1);
            return;
        }
        // transformation
        SingleOutputStreamOperator<String> soso = dataStream.map(String::toUpperCase);
        // data sink
        soso.writeAsText(params.get("output"), WriteMode.OVERWRITE);
        env.execute("read and write");
    }
}

DataStream API Transformation

As you can see, dataStream is initialized as null but later, we will create it.

DataStream<String> dataStream=null;

Initializing Flink Environment

The next step is to initialize the stream execution environment by calling this helper method:

StreamExecutionEnvironment.getExecutionEnvironment()

Flink figures out which environment the application was submitted to (whether it is a local environment or a cluster environment).

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Data Stream Creation 

To get the data stream from the data source, just call the built-in Flink API method readTextFile() from StreamExecutionEnvironment. This method reads file content from a given file and returns it as a dataStream object.

Example:

dataStream = env.readTextFile(params.get("input"));

ParameterTools

The ParameterTool class represents the user’s command line arguments. Example:

$ flink run flink-basic-example-1.0.jar --input c:\tools\input.txt --output c:\tools\output.txt

We need to send all command line arguments to execution environment by calling:

env.getConfig().setGlobalJobParameters(params);

Writing Output to Data Sink

print(): The data stream print method writes each element of the data stream to a Flink log file.

writeAsText(): This method has two arguments: the first argument is the output file/path and the second argument is writer mode.

Example:

soso.writeAsText(params.get("output"),WriteMode.OVERWRITE);

Trigger Flow Execution

Calling the execute method triggers the execution; only then do all the actions specified earlier actually happen. If you don’t call the execute method, your program will complete without doing anything. When calling the execute method, you can specify the name of the job. Example:

env.execute("read and write");

Running Flink Application

Step 1: Clone the project from GitHub and run the Gradle command gradlew clean build. Once the build succeeds, it generates a flink-basic-example-1.0.jar file in the current project folder’s /build/libs directory.

Step 2: Run the Flink server on Windows with start-local.bat.

Image title

Step 3: Run your Flink application from the command line by going to the Flink installation folder and typing the following command:

flink run <path-to-jar-file> --input <path-to-file> --output <path-to-file>

Example:

Image title

Image title

input.txt:

training in Big Data Hadoop, Apache Spark, Apache Flink, Apache Kafka, Hbase, Apache Hadoop Admin

output.txt:

TRAINING IN BIG DATA HADOOP, APACHE SPARK, APACHE FLINK, APACHE KAFKA, HBASE, APACHE HADOOP ADMIN

Step 5: Run the Flink application using the Flink web UI. Type localhost:8081 in your browser (by default, Flink runs on port 8081).

Image title

Click Submit New Job > Add New Task and upload the generated JAR file. You can generate this JAR with:

gradlew clean build (jar location - /build/libs)

Image title

Just enter the program arguments in the input box as:

--input <path-to-file> --output <path-to-file>

And hit Submit. After running the job, you will see the below screen if you hit Completed Jobs.

Image title

Here’s the GitHub link.

Original Link

This Week in Spring: Spring Boot 2.0


Original Link

Reproducible Builds in Java

When it comes to software, we look at source code to learn what an application will do when executed. However, the source code we read is not what will actually be executed by our computers. In the Java world, it is the Java bytecode that is produced by our compilers that eventually gets executed, normally after having been packaged into an archive. So when we are given code that has already been compiled and packaged, how can we verify that it is indeed the product of the source code we think produced it?

The answer to this is reproducible builds! A reproducible build is a deterministic function that, given our source code as input, will produce an executable artifact that is the same, byte for byte, every time it is run on the same input.

What Is the Value of a Reproducible Build?

I personally discovered the need for a reproducible build when I was tasked with implementing an automated change management system at work. Our production system, which is Java-based, for the most part, is frequently audited. The way this works is that every time the auditors come in, we need to provide them with a checksum of the software artifacts running on production at the time. They then compare these checksums with the checksums from the previous audit to determine which artifacts were modified and follow up with us on the changes detected. We, therefore, want to make sure that the checksums of software artifacts only change when a change in the source code has been made. If someone rebuilds and redeploys an artifact, that should not be flagged as a change in the live system, so as not to waste anybody’s time.

Apart from this, a reproducible build enables you to do more with your build tools. For example, if you have a build that produces multiple artifacts, without being able to detect which of the artifacts has changed after every build, you either automatically deploy all artifacts or none at all. A reproducible build would allow your tools to recognize what has changed and deploy accordingly.

Is the Standard Java Build Reproducible?

We can get the answer to that question by running a simple test with Gradle, one of the two most commonly used Java build tools.

Let’s open up a terminal and create a simple Java project with Gradle.

 > mkdir reproducible-build-test 

 > cd reproducible-build-test 

 > gradle init --type java-application 

This will generate a simple Java command line application that prints out ‘Hello world’. Next, we will build this project and take a checksum of the resulting JAR file.

 > gradle build 

 > md5sum build/libs/reproducible-build-test.jar 

Finally, we clean our project and rebuild it, then take a checksum of the rebuilt JAR.

 > gradle clean 

 > gradle build  

 > md5sum build/libs/reproducible-build-test.jar 

If you have followed these steps, you will note that we got 2 different checksums, even though we built the exact same source code twice. You can try the same experiment with a simple Maven project, but the result will be the same. We can, therefore, conclude that no, standard Java builds are not reproducible.

The Makings of a JAR File

To build our JAR file, Gradle first compiled our .java files into .class files containing Java bytecode. Then, these files were packaged together with some metadata to form a JAR archive. This is quite a simple two-step process, so with one more test we can find out which part of the build is non-deterministic.

Let’s put aside Gradle and use the Java compiler javac directly to compile a class, which we will checksum, recompile, and checksum again.

 > javac src/main/java/App.java  

 > md5sum src/main/java/App.class  

 > rm src/main/java/App.class  

 > javac src/main/java/App.java  

 > md5sum src/main/java/App.class 

This time the checksums are the same. We now know that javac is deterministic, so we can conclude that the non-determinism in our Java build is introduced while the JAR file is being packaged.

To quote the official Java documentation, a JAR file is “essentially a ZIP file that contains an optional META-INF directory,” and this is the root of our problem. It turns out that the specification of ZIP files requires every entry in the ZIP file to include a local file modification timestamp. This means that every time we recompile and repackage our Java project, we will get a different JAR file because there is a difference in file modified timestamps.

Apart from this, additional non-determinism can be introduced if files are put into the archive in different orders. This can happen in the case of parallel builds, or possibly even if you run the same build on different operating systems. 
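To see this for yourself, here is a small, hypothetical Gradle task (not part of the original article) that lists every entry of the generated JAR together with its stored modification timestamp; run it after gradle build, then clean, rebuild, and run it again to watch the timestamps change:

// Hypothetical helper task: prints the modification timestamp stored for
// every entry of the generated JAR.
task inspectJarTimestamps {
    doLast {
        def jar = file('build/libs/reproducible-build-test.jar')
        new java.util.zip.ZipFile(jar).withCloseable { zip ->
            zip.entries().each { entry ->
                println "${new Date(entry.time)}  ${entry.name}"
            }
        }
    }
}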

Are Reproducible Builds in Java Possible?

Yes. To make a Java build reproducible, we need to tweak it so that the files that make up the JAR archive are always packaged in the same order, and that all timestamps are set to a constant date and time.

Luckily for us, Gradle actually introduced support for reproducible builds starting from version 3.4. If you build your Java project using Gradle, you can specify that you want your archive generating tasks to use a reproducible file order and to discard timestamps by adding the following to your build script.

tasks.withType(AbstractArchiveTask) {
    preserveFileTimestamps = false
    reproducibleFileOrder = true
}

You can read more about this in the Gradle user guide.

For the Maven users, I am not aware of any way to get the archiver to generate reproducible JARs. However, there is a reproducible build plugin available that will uncompress JARs and repackage them for you, sorting the contents and replacing varying timestamps with a constant.

Note on using Gradle reproducible builds with Spring Boot versions predating version 2: The Spring Boot Gradle plugin repackages your JAR file after Gradle packages it to add in the things it needs to make your project executable. This repackaging step does not support preserving file order and setting constant timestamps. This issue has been solved for Spring Boot 2.

Conclusion

We have discussed what a reproducible build is, and seen that it provides value by enabling verification of software artifacts and supporting automated build tools. Although standard Java builds are non-deterministic because of the specification of ZIP files, there are workarounds that enable us to achieve reproducible builds in Java. If you want to read more about reproducible builds, I suggest you take a look at reproducible-builds.org.

Original Link

Using Gradle Build Caches With Kotlin

The build cache works by storing compiled classes, test outputs, and other build artifacts in a cache, taking into account all task inputs, including input file contents, relevant classpaths, and task configuration.

Build Cache topological diagram

This frequently results in faster builds. The following chart shows aggregated build time with and without the build cache for part of Gradle’s CI:

Build minutes saved with Gradle build cache

In this post, we’ll explain how you can use Gradle’s build cache to avoid unnecessary Kotlin compilation to speed up your builds.

Quick Demo With Spek

You can try out build caching with Gradle right now. Just follow these steps:

Clone Spek

git clone https://github.com/spekframework/spek.git
cd spek

The Spek 2.x branch (which is the default) already has all that we’ll describe later.

Build and Populate Cache

The following command builds Spek and populates the local build cache.

❯ ./gradlew assemble --build-cache

BUILD SUCCESSFUL in 10s
21 actionable tasks: 21 executed

Using the --build-cache flag is one way to tell Gradle to store outputs in a separate task output cache.

Remove/Change Build Outputs

This simulates being on another machine or perhaps making a change and stashing it. The quickest way to demonstrate this is to use the clean task.

Rebuild and Resolve From Build Cache

This time when we re-build, all Kotlin compiled sources are pulled from the build cache.

❯ ./gradlew assemble --build-cache

BUILD SUCCESSFUL in 2s
21 actionable tasks: 11 executed, 10 from cache

Voilà! You just used Gradle’s build cache to reuse Kotlin compiled classes instead of recompiling again! The build was about 5x faster!

You can see from this build scan that Kotlin compile tasks were pulled from the build cache; :jar and :processResources tasks were not because it’s faster to generate JARs and copy files locally than pull from a cache. Note that caching :test tasks is also supported.

The Gradle build cache is particularly effective when a CI instance populates a shared build cache, which developers can then pull from; a sketch of such a setup is shown below.
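As a rough illustration (not taken from the original post), such a shared cache could be wired up in settings.gradle along these lines; the URL and the CI environment check are placeholders you would adapt to your own setup:

// settings.gradle -- illustrative sketch of a shared remote build cache.
// The URL below is a placeholder, not a real endpoint.
buildCache {
    remote(HttpBuildCache) {
        url = 'https://build-cache.example.com/cache/'
        // Typically only CI pushes to the shared cache; developers only pull.
        push = System.getenv('CI') != null
    }
}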

Enabling the Build Cache for Your Projects

I hope you’re excited to try this out on your project — you can follow these steps to enable the build cache.

First, you need to ensure you’re using Gradle 4.3 or above, so the Kotlin Gradle Plugin can opt-in to using new APIs in Gradle. You can upgrade Gradle easily using the Gradle wrapper.

Next, we need to ensure we are compiling with Kotlin version 1.2.20 or above. You might have something like this declared in your buildscript {} block in build.gradle:

dependencies { classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.2.21"
}

Next, we need to tell Gradle to use the build cache. There are 3 ways to do this:

  • Enable for the current build using --build-cache on the command-line.
  • Enable for the project by adding org.gradle.caching=true to $PROJECT_ROOT/gradle.properties.
  • Enable for all builds for the current user by adding org.gradle.caching=true to $GRADLE_HOME/gradle.properties.

NOTE: Android developers still need to do this even if android.enableBuildCache=true is set, because Gradle’s build cache is separate from the Android build cache.

We can optionally take advantage of the build cache from IDEs by delegating run and test actions to Gradle.

Enabling Build Caching in IntelliJ

If you use IntelliJ to execute Gradle actions, you will need to “Delegate IDE build/run actions to Gradle” in your IDE settings to take advantage of build caching when building and running tests from IntelliJ.

Delegate IDE build/run to Gradle

NOTE: Android Studio does this by default.

Caching kapt Tasks

Caching for kapt is currently disabled by default, even with --build-cache, because Gradle does not yet have a way to map inputs and outputs for annotation processors. You can explicitly enable use of the build cache for Kotlin annotation processing tasks by setting useBuildCache to true in kapt configuration.

kapt {
    useBuildCache = true
}

Further Reading

You can learn more about leveraging the Gradle build cache through these resources:

Conclusion

Compiling Kotlin code using kotlin-gradle-plugin version 1.2.20 and above can take advantage of Gradle’s --build-cache feature to speed up your development cycle. Work continues to expand the set of tasks that support build caching.

Onward!

Original Link

Gradle and Its New Kotlin DSL [Video]

A couple of months ago, the Gradle team decided that you can now also write your build.gradle files with Kotlin instead of Groovy. Although this whole effort is still in its infancy, it can turn out to be quite exciting. Why? 

Because with Kotlin build.gradle.kts files, you get compile errors if you specify incorrect options, you get autocompletion, and basically everything else a statically typed language offers you. That does not mean that your Groovy scripts will be deprecated in the future; they will still be supported. Rather, it means you can now choose whatever suits you best and makes you more productive.

To get you started, I compiled a small, practical screencast on how to turn a simple Java Gradle project into a Java Gradle project that uses the Kotlin DSL. After watching it, you should have a good grasp of how the Kotlin DSL works and what the current advantages and disadvantages of using it are. Enjoy!

Original Link

Publishing to Your Own Bintray Maven Repository Using Gradle

When working on an open source Java project, you always come to the point where you want to share your work with the developer community (at least that should be the goal). In the Java world, this is usually done by publishing your artifacts to a publicly accessible Maven repository. This article gives a step-by-step guide on how to publish your artifacts to your own Maven Repository on Bintray.

Bintray vs. Maven Central

You might be asking why you should publish your artifacts to a custom repository and not to Maven Central, because Maven Central is THE Maven repository that is used by default in most Maven and Gradle builds and thus is much more accessible. The reason for this is that you can play around with your publishing routine in your own repository first and THEN publish it to Maven Central from there (or JCenter, for that matter, which is another well-known Maven repository). Publishing from your own Bintray repository to Maven Central is supported by Bintray, but will be covered in a follow-up article.

Another reason for uploading to Bintray and not to Maven Central is that you still have control over your files even after uploading and publishing your files whereas in Maven Central you lose all control after publishing (however, you should be careful with editing already-published files!).

Create a Bintray Account

To publish artifacts on Bintray, you naturally need an account there. I’m not going to describe how to do that since if you’re reading this article you should possess the skills to sign up on a website by yourself.

Create a Repository

Next, you need to create a repository. A repository on Bintray is actually just a smart file host. When creating the repository, make sure that you select the type “Maven” so Bintray knows that it’s supposed to handle the artifacts we’re going to upload as Maven artifacts.

Obtain Your API key

When signed in on Bintray, go to the “edit profile” page and click on “API Key” in the menu. You will be shown your API key which we need later in the Gradle scripts to automatically upload your artifacts.

Set Up Your build.gradle

In your build.gradle set up some basics:
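The original snippet is not reproduced here; as a rough sketch (the plugin version and repository choices are assumptions, so check the plugins' documentation for current coordinates), the setup might look something like this:

// Sketch only -- plugin version and repositories are assumptions.
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.8.4'
    }
}

apply plugin: 'java'
apply plugin: 'maven-publish'
apply plugin: 'com.jfrog.bintray'

// One repositories closure lives in the buildscript block above; this one
// resolves the project's own dependencies. Neither is related to publishing.
repositories {
    jcenter()
}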

The important parts are the bintray plugin and the maven-publish plugin.

The two repositories closures simply list the Maven repositories to be searched for our project’s dependencies and have nothing to do with publishing our artifacts.

Build Sources and Javadoc Artifacts

When publishing an open source project, you will want to publish a JAR containing the sources and another JAR containing the Javadoc together with your normal JAR. This helps developers using your project since IDEs support downloading those JARs and displaying the sources directly in the editor. Also, providing sources and Javadoc is a requirement for publishing on Maven Central, so we can as well do it now.

Add the following lines to your build.gradle:
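The article's original snippet is not reproduced here; a conventional version of these tasks (a sketch, not necessarily identical to the author's exact code) looks roughly like this:

// Sketch only -- conventional sources/javadoc JAR tasks.
task sourcesJar(type: Jar) {
    classifier = 'sources'
    from sourceSets.main.allJava
}

task javadocJar(type: Jar) {
    classifier = 'javadoc'
    from javadoc
}

// Don't fail the build on strict Javadoc checks (see the note below).
javadoc.failOnError = false

artifacts {
    archives sourcesJar
    archives javadocJar
}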

A note on javadoc.failOnError = false: by default, the Javadoc task will fail on things like empty paragraphs (</p>), which can be very annoying. All IDEs and tools support them, but the Javadoc generator still fails. Feel free to keep this check and fix all your Javadoc “errors” if you feel masochistic today, though :).

Define What to Publish

Next, we want to define what artifacts we actually want to publish and provide some metadata on them.
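The original snippet is not shown here; a rough sketch of such a configuration (the metadata and coordinates are placeholders, though the publication is called BintrayPublication to match the text below) might be:

// Sketch only -- metadata and coordinates are placeholders.
def pomConfig = {
    licenses {
        license {
            name 'The Apache Software License, Version 2.0'
            url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
        }
    }
    developers {
        developer {
            id 'yourBintrayUser'
            name 'Your Name'
        }
    }
}

publishing {
    publications {
        BintrayPublication(MavenPublication) {
            from components.java
            artifact sourcesJar
            artifact javadocJar
            groupId 'com.example'
            artifactId 'my-library'
            version '1.0.0'
            pom.withXml {
                def root = asNode()
                root.children().last() + pomConfig
            }
        }
    }
}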

In the pomConfig variable, we simply provide some metadata that is put into the pom.xml when publishing. The interesting part is the publishing closure which is provided by the maven-publish plugin we applied before. Here, we define a publication called BintrayPublication (choose your own name if you wish). This publication should contain the default JAR file (components.java) as well as the sources and the javadoc JARs. Also, we provide the Maven coordinates and add the information from pomConfig above.

Provide Bintray-Specific Information

Finally, the part where the action is. Add the following to your build.gradle to enable the publishing to Bintray:
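The original snippet is not shown here; assuming the standard gradle-bintray-plugin DSL, it would look roughly like the sketch below (the repository, package, and URL values are placeholders):

// Sketch only -- repository, package, and URL values are placeholders.
bintray {
    user = System.getProperty('bintray.user')
    key = System.getProperty('bintray.key')
    publications = ['BintrayPublication']
    pkg {
        repo = 'my-maven-repo'
        name = 'my-library'
        licenses = ['Apache-2.0']
        vcsUrl = 'https://github.com/yourUser/my-library.git'
        version {
            name = project.version
        }
    }
}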

The user and key are read from system properties so that you don’t have to add them in your script for everyone to read. You can later pass those properties via command line.

In the next line, we reference the BintrayPublication we defined earlier, thus giving the bintray plugin (almost) all the information it needs to publish our artifacts.

In the pkg closure, we define some additional information for the Bintray “package”. A package in Bintray is actually nothing more than a “folder” within your repository which you can use to structure your artifacts. For example, if you have a multi-module build and want to publish a couple of them into the same repository, you could create a package for each of them.

Upload!

You can run the build and upload the artifacts on Bintray by running
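The command itself is not shown above; assuming the standard gradle-bintray-plugin task name (bintrayUpload) and the system property names used in the sketch above, it would look roughly like this:

./gradlew build bintrayUpload -Dbintray.user=<your-user> -Dbintray.key=<your-api-key>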

Publish!

The files have now been uploaded to Bintray, but by default, they have not been published to the Maven repository yet. You can do this manually for each new version on the Bintray site. Going to the site, you should see a notice like this:

Notice

Click on publish and your files should be published for real and be publicly accessible.

Alternatively, you can set up the bintray plugin to publish the files automatically after uploading, by setting publish = true. For a complete list of the plugin options have a look at the plugin DSL.

Access Your Artifacts From a Gradle Build

Once the artifacts are published for real you can add them as dependencies in a Gradle build. You just need to add your Bintray Maven repository to the repositories. In the case of the example above, the following would have to be added:
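The original snippet is not shown here; with Bintray’s default download URLs it would look roughly like this (the URL is a placeholder; use the one shown by the “Set Me Up!” button mentioned below):

repositories {
    maven {
        // Placeholder URL -- replace with your own repository's URL.
        url 'https://dl.bintray.com/<your-user>/<your-repo>'
    }
}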

You can view the URL of your own repository on the Bintray site by clicking the button “Set Me Up!”.

What Next?

Now you can tell everyone how to access your personal Maven repository to use your library. However, some people are hesitant to include custom Maven repositories into their builds. Also, there’s probably a whole lot of companies out there which have a proxy that simply does not allow any Maven repository to be accessed.

So, as a next step, you might want to publish your artifacts to the well-known JCenter or Maven Central repositories. And to have it automated, you may want to integrate the publishing step into a CI tool (for example, to publish snapshots with every CI build). These issues will be addressed in upcoming blog posts.

Original Link

Spring 5, Embedded Tomcat 8, and Gradle

In this article, we are going to learn how to use Gradle to structure a Spring 5 project with Tomcat 8 embedded. We will start from an empty directory and will analyze each step needed to create an application that is distributed as an über/fat jar. This GitHub repository contains a branch called complete with the final code that we will have after following the steps described here.

Why Spring

Spring is the most popular framework available for the Java platform. Developers using Spring can count on a huge, thriving community that is always ready to help. For example, the framework contains more than 11k forks on GitHub and more than 120k questions asked on StackOverflow are related to it. Besides that, Spring provides extensive and up-to-date documentation that covers the inner workings of the framework.

As such, when starting a new Java project, Spring is an option that must be considered.

Spring vs. Spring Boot

In the past, Spring was known for being hard to set up and for depending on huge configuration files. This was not a big problem as most of the applications out there were monoliths. This kind of application usually supports many different areas and solves a wide variety of problems inside companies. Therefore, it was quite common to know companies that had only one or two applications to support their daily operations. In scenarios like that, having these huge configuration files and a hard process to set up a new project was not a problem.

However, this paradigm is getting outdated. Nowadays, many companies around the world are relying more on the microservices architecture and its benefits. As this architecture relies on multiple applications, each one specialized in a particular subject, using a framework that is hard to set up was something that developers started to avoid. This is why the team responsible for Spring decided to create a new project called Spring Boot.

As described in the official site of the Spring Boot framework, this framework makes it easy to create stand-alone, production-grade Spring based applications that “just run”. They decided to take an opinionated view of the Spring platform and third-party libraries so we can get started with minimal work.

Why Embedded Tomcat 8

First of all, let’s understand what embedded means. For a long time, Java developers shipped their applications as war (Web ARchive) and ear (Enterprise ARchive) files. These files, after being bundled, were deployed on application servers (like Tomcat, WildFly, WebSphere, etc.) that were already up and running on production servers. For the last couple of years, developers around the world started changing this paradigm. Instead of shipping applications that had to be deployed on running servers, they started shipping applications that contain the server inside the bundle. That is, they started creating jar (Java ARchive) files that are executable and that start the server programmatically.

What triggered this change is that the new approach has many advantages. For example:

  1. To run a new instance of the application, it is just a matter of executing a single command.
  2. All dependencies of the application are declared explicitly in the application code.
  3. The responsibility for running the application isn’t spread across different teams.
  4. The application is guaranteed to be run in the correct server version, mitigating issues.

Also, as this approach fits perfectly in the microservices architecture that is eating the software development world, it makes sense to embed application servers. That’s why we will learn how to embed Tomcat 8, the most popular Java server, on Spring applications.

Why Gradle

When it comes to dependency management and build tools on Java projects, there are two mainstream solutions to choose from: Gradle and Maven. Both solutions are supported by huge communities, are constantly being developed, and are stable and extensible. Besides that, both Maven and Gradle fetch dependencies in similar ways and from similar sources (usually from Maven repositories). In the end, choosing one solution or another is normally just a matter of taste or familiarity. There are certain edge scenarios in which one solution performs better than the other. However, in most cases, both solutions will meet all our needs.

In this article, we are going to use Gradle for one singular reason: brevity. Maven configuration files are usually too verbose (they are expressed as XML files). On the other hand, Gradle configuration files are expressed in Groovy, a JVM dynamic programming language known for having a concise and tidy syntax.

Creating the Project

Now that we understand why we chose to use Gradle, Spring 5, and an embedded Tomcat 8 server, let’s see how to put all these pieces together. The first thing that we will do is to clone an empty Gradle project. After that, we will explore adding the Tomcat 8 dependency and how to bootstrap it programmatically. Lastly, we will see how to configure and secure a Spring 5 project that works as a RESTful API and that handles JSP (JavaServer Pages) files.

Cloning the Gradle Project

There are multiple ways we can create a new Gradle project. For example, if we have Gradle installed on our machines, we could easily issue gradle init to get the basic files created for ourselves. However, to avoid having to install Gradle everywhere, we will clone a GitHub repository that already contains these files. The following commands will clone the repository for us and create the main package:

# clone basic files
git clone https://github.com/auth0-blog/spring5-app.git

# change working directory to it
cd spring5-app

# create the main package
mkdir -p src/main/java/com/auth0/samples/

After executing the last command, we will have the com.auth0.samples package and all the Gradle files that we will need.

Embedding Tomcat 8

To embed and bootstrap an instance of Tomcat 8, the first thing we need to do is to add it as a dependency to our project. We do that by adding a single line to the dependencies section of the build.gradle file, as shown below:

// ...
dependencies {
    compile('org.apache.tomcat.embed:tomcat-embed-jasper:8.0.47')
}

After adding the Tomcat 8 dependency, we have to create a class called Main in the com.auth0.samples package to bootstrap the server:

package com.auth0.samples;

import org.apache.catalina.startup.Tomcat;

import java.io.File;
import java.io.IOException;

public class Main {

    private static final int PORT = 8080;

    public static void main(String[] args) throws Exception {
        String appBase = ".";
        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir(createTempDir());
        tomcat.setPort(PORT);
        tomcat.getHost().setAppBase(appBase);
        tomcat.addWebapp("", appBase);
        tomcat.start();
        tomcat.getServer().await();
    }

    // based on AbstractEmbeddedServletContainerFactory
    private static String createTempDir() {
        try {
            File tempDir = File.createTempFile("tomcat.", "." + PORT);
            tempDir.delete();
            tempDir.mkdir();
            tempDir.deleteOnExit();
            return tempDir.getAbsolutePath();
        } catch (IOException ex) {
            throw new RuntimeException(
                "Unable to create tempDir. java.io.tmpdir is set to " + System.getProperty("java.io.tmpdir"),
                ex
            );
        }
    }
}

As we can see, running an instance of Tomcat 8 programmatically is quite easy. We just create a new instance of the Tomcat class, set a few properties on it, and call the start() method. Two things worth mentioning are:

  1. The server port is hardcoded in the code above (8080).
  2. Even though we won’t use it, the latest version of Tomcat requires us to define a base directory. Therefore, we simply create a temporary directory (through the createTempDir() method) that is marked to be deleted when the JVM ends its execution.

Bootstrapping Spring 5

Having the Tomcat 8 dependency configured and the code to initialize the server created, we can now focus on configuring Spring 5 in our project. The first step is to add the spring-webmvc dependency. To do that, let’s open the build.gradle file and add the following line to the dependencies section:

// ...
dependencies {
    // ... tomcat dependency
    compile('org.springframework:spring-webmvc:5.0.1.RELEASE')
}

This is the only Spring dependency that we will need for the time being. We don’t need to add spring-core explicitly because spring-webmvc declares it as a transitive dependency. Gradle downloads this kind of dependency and makes it available in the scopes needed automatically.

The next step is to create the class that we will use to configure Spring 5 programmatically. We will call this class SpringAppConfig and create it in the com.auth0.samples package with the following code:

package com.auth0.samples;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

import javax.servlet.ServletContext;
import javax.servlet.ServletRegistration;

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.auth0.samples"})
public class SpringAppConfig implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext container) {
        // Create the 'root' Spring application context
        AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
        rootContext.register(SpringAppConfig.class);

        // Manage the lifecycle of the root application context
        container.addListener(new ContextLoaderListener(rootContext));

        // Create the dispatcher servlet's Spring application context
        AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();

        // Register and map the dispatcher servlet
        ServletRegistration.Dynamic dispatcher = container
                .addServlet("dispatcher", new DispatcherServlet(dispatcherContext));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

To better understand what this class does, let’s take a look at its key concepts. First, let’s analyze the three annotations that we added to the class:

  • @Configuration: This annotation indicates that the class in question might create Spring beans programmatically. This annotation is required by the next one.
  • @EnableWebMvc: This annotation, used alongside with @Configuration, makes Spring import the configuration needed to work as an MVC framework.
  • @ComponentScan: This annotation makes Spring scan the packages configured (i.e., com.auth0.samples) to assemble Spring beans (like MVC controllers) for us.

The next important concept that we need to understand is the WebApplicationInitializer interface. Implementing this interface makes Spring automatically detect our configuration class and also makes any Servlet 3.0+ environment (like Tomcat 8) run it through the SpringServletContainerInitializer class. That is, we are capable of bootstrapping a Spring 5 context only by implementing this class.

The last important thing that we need to analyze is the implementation of the onStartup method. When Spring executes our WebApplicationInitializer extension, this is the method that it calls. In this method, we do two things. First, we start a Spring context that accepts annotated classes and register our main Spring configuration class on it. Like that, every other component that we define through annotations will be properly managed. Second, we register a DispatcherServlet instance to handle and dispatch incoming requests to the controllers that we create.

We now have a project that bootstraps a Spring 5 context automatically when started. To test it, let’s create a new package called com.auth0.samples.controller and add to it a class named HelloWorldController with the following code:

package com.auth0.samples.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hello")
public class HelloWorldController {

    @GetMapping
    public String sayHello() {
        return "Hello from Spring 5 and embedded Tomcat 8!";
    }
}

The only way to run this project now is through an IDE. As we don’t want to be dependent on IDEs, it is a good time to learn how to package the application in a single, executable jar file.

Creating an Executable Distribution

To make Gradle package our application as an executable jar file (also called fat/über jar), we will take advantage of a popular Gradle plugin called Shadow. This plugin is easy to use, well supported by the community, and has a great, thorough documentation. To configure it, let’s replace the contents of the build.gradle file with the following code:

group 'com.auth0.samples'
version '1.0-SNAPSHOT'

apply plugin: 'java'

// 1 - apply application and shadow plugins
apply plugin: 'application'
apply plugin: 'com.github.johnrengelman.shadow'

sourceCompatibility = 1.8
targetCompatibility = 1.8
mainClassName = 'com.auth0.samples.Main'

// 2 - define the dependency to the shadow plugin
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.1'
    }
}

// 3 - merge service descriptors
shadowJar {
    mergeServiceFiles()
}

repositories {
    jcenter()
}

dependencies {
    compile group: 'org.apache.tomcat.embed', name: 'tomcat-embed-jasper', version: '8.0.47'
    compile group: 'org.springframework', name: 'spring-webmvc', version: '5.0.1.RELEASE'
}

There are three things that we need to understand in the script above:

  1. To use Shadow, we need to apply two plugins. The first one is the application plugin, which adds useful tasks to package compiled Java classes (note that this plugin doesn’t add dependencies to the package created). The second one is the Shadow plugin itself, which is responsible for defining the main class to be executed and also adds all runtime dependencies to the final jar file.
  2. We need to declare the dependency to the Shadow plugin in the buildscript block of our Gradle configuration file.
  3. We need to configure the Shadow plugin to merge service descriptor files. This is needed because the SpringServletContainerInitializer class mentioned before is a service declared in a service descriptor inside Spring 5. The servlet container (Tomcat 8 in our case) knows that it needs to execute this class due to this service descriptor.

The Shadow plugin, when correctly configured, adds a few tasks to our build configuration. Among them, there are two that we will use frequently:

# compile, package, and run the application
./gradlew runShadow

# compile and package the application
./gradlew shadowJar

# run the packaged application
java -jar build/libs/spring5-app-1.0-SNAPSHOT-all.jar

The runShadow task does three things: it compiles our source code, packages our application in an executable fat/über jar file, and then executes this jar. The second task, shadowJar, is pretty similar, but it does not execute the application. It simply prepares our application to be distributed; that is, it creates the executable jar file. The last command included in the code snippet above shows how to execute the fat/über jar without Gradle. Yes, after packaging the application, we don’t need Gradle anymore.

Let’s run the application now and issue a GET HTTP request to the endpoint created in the HelloWorldController class:

# run the application
./gradlew runShadow

# issue a GET request
curl localhost:8080/hello

The response to this request will be the message defined in the sayHello method of our controller: “Hello from Spring 5 and embedded Tomcat 8!”.

Supporting JSON Content on Spring 5

Without question, one of the most used message formats on applications today is JSON. RESTful APIs usually use this kind of message format to communicate with front-end clients written for a wide variety of devices (e.g. Android and iOS phones, web browsers, wearable devices, etc.). In Spring 5, adding support to JSON is very easy. It’s just a matter of adding Jackson as a dependency and we are ready to start writing controllers that exchange JSON messages.

To see this in action, let’s open the build.gradle file and add the following dependency:

// ...
dependencies {
    // ... Tomcat 8 and Spring 5 dependencies
    compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.9.2'
}

After that, let’s create a new package called model inside the com.auth0.samples package and add a class called Product to it:

package com.auth0.samples.model;

import java.math.BigDecimal;

public class Product {
    private String title;
    private BigDecimal price;

    public Product() {
    }

    public Product(String title, BigDecimal price) {
        this.title = title;
        this.price = price;
    }

    public String getTitle() {
        return title;
    }

    public BigDecimal getPrice() {
        return price;
    }
}

We will use this class to do two things: to send and accept JSON messages that contain product details. Let’s create a new controller called ProductController in the com.auth0.samples.controller package to exchange products as JSON messages:

package com.auth0.samples.controller;

import com.auth0.samples.model.Product;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

@RestController
@RequestMapping("/api/products")
public class ProductController {
    private final List<Product> products = new ArrayList<>();

    public ProductController() {
        products.add(new Product("Coca-cola", BigDecimal.valueOf(2.36)));
        products.add(new Product("Bread", BigDecimal.valueOf(1.7)));
    }

    @GetMapping
    public List<Product> getProducts() {
        return products;
    }

    @PostMapping
    public void addProduct(@RequestBody Product product) {
        products.add(product);
    }

    @DeleteMapping("/{index}")
    public void deleteProduct(@PathVariable int index) {
        products.remove(index);
    }
}

This class is quite simple; it’s just a Spring MVC @RestController that exposes three methods/endpoints:

  • getProducts: This endpoint, when hit by an HTTP GET request, sends all products in a JSON array.
  • addProduct: This endpoint, when hit by an HTTP POST request, accepts new products as JSON messages.
  • deleteProduct: This endpoint, when hit by an HTTP DELETE request, removes a product from the array of products based on the index sent by the user.

After creating this controller, we are ready to send and receive JSON messages. To test this new feature, let’s start our application (./gradlew runShadow) and issue the following commands:

# get the array of products
curl localhost:8080/api/products

# remove the product in the second position of the array
# (arrays are 0 indexed)
curl -X DELETE localhost:8080/api/products/1

# add a new product to the array
curl -X POST -H "Content-Type: application/json" -d '{
  "title": "Milk",
  "price": 0.95
}' localhost:8080/api/products

Securing Spring 5 Applications With Auth0

Another feature that serious applications cannot overlook is security. In modern applications, personal and sensitive data is being exchanged between clients and servers like never before. Luckily, with the help of Auth0, adding a production-ready security layer to a Spring 5 project is easy. We just need to use and configure an open-source library, provided by Auth0, which tightly integrates with Spring Security (the security module of Spring). Let’s see how to do this now.

The first step is to open our build.gradle file and do four things: add a new Maven repository, add the Auth0 library dependency, add the spring-security-config library, and add the spring-security-web library:

// ...

repositories {
    jcenter()
    maven { url 'http://repo.spring.io/milestone/' }
}

dependencies {
    // ... tomcat, spring, jackson
    compile('com.auth0:auth0-spring-security-api:1.0.0-rc.3') {
        exclude module: 'spring-security-config'
        exclude module: 'spring-security-core'
        exclude module: 'spring-security-web'
    }
    compile('org.springframework.security:spring-security-config:5.0.0.RC1')
    compile('org.springframework.security:spring-security-web:5.0.0.RC1')
}

Adding a new Maven repository was needed because we are going to use Spring Security 5, a version of this module that hasn’t reached General Availability yet. Besides that, we explicitly removed spring-security-* transitive dependencies from the Auth0 library because they reference the fourth version of Spring Security.

After changing our build file, we have to create a class to configure Spring Security and the Auth0 library. Let’s call this class WebSecurityConfig and add it in the com.auth0.samples package:

package com.auth0.samples;

import com.auth0.spring.security.api.JwtWebSecurityConfigurer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.http.SessionCreationPolicy;

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    private static final String TOKEN_AUDIENCE = "spring5";
    private static final String TOKEN_ISSUER = "https://bkrebs.auth0.com/";
    private static final String API_ENDPOINT = "/api/**";
    private static final String PUBLIC_URLS = "/**";

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        JwtWebSecurityConfigurer
                .forRS256(TOKEN_AUDIENCE, TOKEN_ISSUER)
                .configure(http)
                .authorizeRequests()
                .mvcMatchers(API_ENDPOINT).fullyAuthenticated()
                .mvcMatchers(PUBLIC_URLS).permitAll()
                .anyRequest().authenticated().and()
                .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
    }
}

This class contains four constants:

  • TOKEN_AUDIENCE is the JWT audience that we expect to see in JWT claims.
  • TOKEN_ISSUER is the issuer that we expect to see on these JWTs.
  • API_ENDPOINT is a path pattern that we use to restrict access to all URLs under /api.
  • PUBLIC_URLS is a path pattern that we use to identify every other URL.

Besides these constants, the WebSecurityConfig class contains only one method. This method is used to fine-tune the Spring Security module to use Auth0 and to configure how different URLs must be treated. For example, .mvcMatchers(API_ENDPOINT).fullyAuthenticated() configures Spring Security to accept only authenticated requests (requests with JWTs) to URLs that start with /api.

The last thing we need to do is to create a class that extends the AbstractSecurityWebApplicationInitializer class provided by Spring Security. This is needed to apply the springSecurityFilterChain filter for every URL in our application. Therefore, let’s call our class SecurityWebApplicationInitializer and add it in the com.auth0.samples package:

package com.auth0.samples;

import org.springframework.security.web.context.AbstractSecurityWebApplicationInitializer;

public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {
}

Now we can restart our application (e.g. ./gradlew runShadow) and issue requests as follows:

# issuing requests to unsecured endpoints
curl localhost:8080/hello

# issuing requests to secured endpoints
CLIENT_ID="d85mVhuL6EPYitTES37pA8rbi716IYCA"
CLIENT_SECRET="AeeFp-g5YGwxFOWwLVMdxialnxOnoyuwGXoE5kPiHs8kGJeC2FJ0BCj6xTLlNKkY"

JWT=$(curl -X POST -H 'content-type: application/json' -d '{
  "client_id": "'$CLIENT_ID'",
  "client_secret": "'$CLIENT_SECRET'",
  "audience": "spring5",
  "grant_type": "client_credentials"
}' https://bkrebs.auth0.com/oauth/token | jq .access_token)

curl -H "Authorization: Bearer "$JWT http://localhost:8080/api/products

As we can see in the code snippet above, issuing requests to unsecured endpoints has not changed. Issuing requests to secured endpoints, however, now requires an Authorization header with a JWT. In this case, we need to fetch a valid JWT from Auth0 (note that we use a command-line JSON processor called jq to extract the JWT into a bash variable). After that, we append this JWT to the Authorization header of every request we issue to secured endpoints.

Another important thing that we need to note is that the commands above are using two bash variables: CLIENT_ID and CLIENT_SECRET. These variables were extracted from an API configured on a free Auth0 account. To learn more about APIs and Auth0, take a look at the official documentation.

Conclusion

Throughout this article, we learned about some interesting topics like embedded application servers and how to configure a Spring 5 project to use one. We also learned how to create a Java executable file using a Gradle plugin called Shadow and how to add support to JSON messages on Spring 5. Lastly, we saw that configuring a security layer on a Spring 5 project is straightforward.

Having managed to address all these topics in a short article like this is proof that Spring 5 is becoming more like Spring Boot. Although Spring Boot is a few miles ahead when it comes to ease of use, we can see that it’s quite easy to bootstrap Spring 5 applications that support essential features like JSON messages and security.

Original Link

The Power of the Gradle Kotlin DSL

The following is based on Gradle 4.3.1.

A few weeks ago, I started migrating most of my Groovy-based build.gradle scripts to Kotlin-backed build.gradle.kts scripts using the Kotlin DSL.

Why would I do that?

Kotlin is my language of choice, and I love the idea of using a single language to do all my work. I never learned programming with Groovy and only know the bloody basics, which always makes me think: “This can’t be the best way to do things…”.

Kotlin, on the other hand, is a language I use on a daily basis and, therefore, I know how to use it appropriately. Additionally, Kotlin is a statically typed language, whereas Groovy isn’t. IDEs have a hard time offering code completion and compile-time error detection when a Groovy build script is being edited. With the Kotlin DSL, this isn’t the case. IntelliJ in particular knows how to help us with Kotlin development, even in build.gradle.kts files. All these reasons made me take a deeper look at the new style Gradle offers.

Minor Impediments

It can sometimes be a bit tedious to rewrite your build.gradle files into build.gradle.kts files, especially in the IDE with all its caches malfunctioning during that process. I often had to reopen my project or even reimport it before IntelliJ understood what was going on. It also often helps to use the “Refresh all Gradle projects” button in the Gradle view.

Let’s Take a Look

The following snippet shows the first part of a working example. It was taken from one of my projects, which is a Kotlin web application based on the Vert.x toolkit. Learn more about the technology in this post I wrote earlier.

The script first defines a few global variables, mostly containing version numbers, which are used throughout the build file. Next, we can observe the plugins block that simply defines a few plugins used for the build. Most importantly, the Kotlin Gradle plugin for JVM applications is included, which we can do with the DSL-specific function kotlin(module: String), that takes its module argument and appends it to "org.jetbrains.kotlin.", which then is put into the id(plugin: String) method, the default api for applying plugins.

Last but not least, we can see the listing of dependencies, which again provides a kotlin convenience method we can use to reduce redundant declarations. A similar approach can be seen with the definition of the io.vertx dependencies. In order to write the "io.vertx:vertx" String only once, which is part of every single Vert.x dependency, it’s used as the receiver of let. The first example of real idiomatic code within the build script is:

//imports

//taken from the `plugins` defined later in the file
val kotlinVersion = plugins.getPlugin(KotlinPluginWrapper::class.java).kotlinPluginVersion

val kotlinCoroutinesVersion = "0.19.3"
val vertxVersion = "3.5.0"
val nexusRepo = "http://x.x.x.x:8080/nexus/content/repositories/releases"

plugins {
    kotlin("jvm").version("1.2.0")
    application
    java
    `maven-publish`
}

dependencies {
    compile(kotlin("stdlib", kotlinVersion))
    compile(kotlin("reflect", kotlinVersion))
    compile("org.jetbrains.kotlinx:kotlinx-coroutines-core:$kotlinCoroutinesVersion")
    "io.vertx:vertx".let { v ->
        compile("$v-lang-kotlin:$vertxVersion")
        compile("$v-lang-kotlin-coroutines:$vertxVersion")
        compile("$v-web:$vertxVersion")
        compile("$v-mongo-client:$vertxVersion")
        compile("$v-health-check:$vertxVersion")
        compile("$v-web-templ-thymeleaf:$vertxVersion")
    }
    compile("org.slf4j:slf4j-api:1.7.14")
    compile("ch.qos.logback:logback-classic:1.1.3")
    compile("com.fasterxml.jackson.module:jackson-module-kotlin:2.9.0.pr3")
    testCompile(kotlin("test", kotlinVersion))
    testCompile(kotlin("test-junit", kotlinVersion))
    testCompile("io.vertx:vertx-unit:$vertxVersion")
    testCompile("org.mockito:mockito-core:2.6.2")
    testCompile("junit:junit:4.11")
}

// Part 2

The second part of the example project starts by defining repositories, which are used to find the dependencies and plugins declared earlier. Again, we see an example of simplifying the code with the help of the language: the custom Maven repositories are defined using the functional method forEach, which shortens the boilerplate.

After that, the plugins are being configured, which, for instance, is necessary for enabling coroutine support or defining the application properties. Finally, we can observe a sequence of task configurations that control the behavior of single build steps, e.g. tests.

// ...Part 1

repositories {
    mavenCentral()
    jcenter()
    listOf("https://www.seasar.org/maven/maven2/", "https://plugins.gradle.org/m2/", nexusRepo).forEach {
        maven { url = uri(it) }
    }
}

kotlin {
    experimental.coroutines = Coroutines.ENABLE
}

application {
    group = "de.swirtz"
    version = "1.0.0"
    applicationName = "gradle-kotlindsl"
    mainClassName = "de.swirtz.ApplicationKt"
}

publishing {
    repositories {
        maven { url = uri(nexusRepo) }
    }
    if (!project.hasProperty("jenkins")) {
        println("Property 'jenkins' not set. Publishing only to MavenLocal")
    } else {
        (publications) {
            "maven"(MavenPublication::class) {
                from(components["java"])
            }
        }
    }
}

tasks {
    withType<KotlinCompile> {
        kotlinOptions.jvmTarget = "1.8"
    }
    withType<Test> {
        testLogging.showStandardStreams = true
    }
    withType<Jar> {
        manifest {
            attributes["Main-Class"] = application.mainClassName
        }
        from(configurations.runtime.map { if (it.isDirectory) it else zipTree(it) })
    }
    withType<GradleBuild> {
        finalizedBy("publishToMavenLocal")
    }
}

We’ve seen a rather simple build script written with the Gradle Kotlin DSL. I made use of a few idiomatic Kotlin functions in order to show the power of such .kts files. Especially for Kotlin developers, it can make a lot of sense to completely switch to the approach shown. IntelliJ does support the creation of new build.gradle.kts files by default when you open the “New” option in “Project” view.

There will be situations that make you want to ask somebody for help. I recommend reaching out directly in the corresponding Kotlin Slack channel: Gradle.

I hope I could inspire you to give it a try! Good Luck

Also, the whole script, as a Gist, can be found here.

Original Link

IntelliJ IDEA 2017.3 EAP: Better Settings Synchronization

We recently published a new IntelliJ IDEA 2017.3 EAP build, which includes some very interesting features. Let’s have a look!

Better Synchronization of Your Settings Across Devices

If you work with JetBrains IDEs on different computers, then you have probably faced an annoying issue: for each computer you use, you need to specify the IDE settings, such as keyboard shortcuts, syntax highlighting, appearance, plugins, and other options.

The problem was partially solved by the built-in Settings Repository plugin, but it is not very convenient: for the plugin to synchronize the settings, you have to create a Git repository (on GitHub or with another service) and specify it in the IDE.

To make the process of synchronizing the settings even safer and more user-friendly, we are developing a new mechanism. This mechanism relies partly on the Settings Repository, but uses a repository on the JetBrains side to store the settings; access to this repository is possible through a JetBrains Account (JBA).

With this new approach, you won’t need to spend time creating and configuring a Git repository, and your settings won’t be available to other users.

In addition to this more convenient method of storage, the new mechanism allows you to synchronize not only settings, but also all your installed plugins.

The new plugin will in time be made available for all our paid products (IntelliJ IDEA Ultimate, PhpStorm, PyCharm, CLion, RubyMine, Rider, etc.).

What is the JetBrains Account? The JetBrains Account allows you to manage licenses, and to access forums and the JetBrains plugin repository. If you do not have a JBA, you can easily create one on the JetBrains Account website. For registration, we recommend using the same email address that you purchase licenses with.

You can learn more about the JetBrains Account here.

Starting with this EAP, the new IDE Settings Sync plugin comes already built-in.

We expect the plugin to be completed this fall. It will be released with the upcoming IntelliJ IDEA 2017.3.

But if you are eager to test the new functionality right away and don’t want to wait until the plugin becomes publicly available, you can request access to the IDE Settings Sync plugin or be personally invited.

Here’s how to activate the IDE Settings Sync plugin:

  1. Download the new EAP build from the EAP page on our website, or via the Toolbox App.
  2. Receive an invitation letter. You can receive an invitation from your colleagues and friends, or you can request one by sending an email to idea-cloudconfig@jetbrains.com. In this case, the email must be sent from the address that was used to register your JetBrains Account.
  3. Log into the IDE (or Toolbox App) using your JetBrains Account.
  4. Enable the sync. Look at the bottom of the IDE window, locate the Status Bar, and click the Gear icon.
  5. Send the invitation to a good friend through a form on the JetBrains Account website.

The settings you can sync with the IDE Settings Sync plugin include:

  1. Look And Feel (Preferences | Appearance & Behavior | Appearance | Theme)
  2. Keymap (Preferences | Keymap)
  3. Color Scheme (Preferences | Editor | Color Scheme)
  4. General Settings (Preferences | Appearance & Behavior | System Settings)
  5. UI Settings (Preferences | Appearance & Behavior | Appearance)
  6. Menus and Toolbars (Preferences | Appearance & Behavior | Menus and Toolbars)
  7. ProjectView Settings (Project Tool Window (syncs only IDE settings, all other settings are automatically saved in a project))
  8. Editor Settings (Preferences | Editor | General)
  9. Code Completion (Preferences | Editor | General | Code Completion)
  10. Parameter Name Hints (Preferences | Editor | General)
  11. Live Templates (Preferences | Editor | Live Templates)
  12. Code Style Schemes (Preferences | Editor | Code Style)
  13. Plugins (Preferences | Plugins)

We hope you find our new IDE Settings Sync plugin useful. Please keep in mind that this new plugin is under heavy development. If something isn’t working as expected, don’t hesitate to share your feedback with us through the comments. And don’t forget to submit bug reports to the issue tracker if you encounter any.

If you think any functionality is missing, please let us know. Your suggestions are very welcome!

In addition to the plugin, IntelliJ IDEA 2017.3 will have some interesting new features.

Support for Gradle Test Runner

Starting with this new EAP, you can run tests with coverage with Gradle Test Runner, or even when you Delegate IDE build/run actions to Gradle.

Now, you can choose from the main editor how you want to run your test with coverage: with platform test runner or with Gradle Test runner.

Even if you want to delegate a test to Gradle, you can still run the test with coverage from the editor.

Support for Spring Boot 2.0 Actuator Endpoints

Spring Boot 2.0 brings important changes to the actuator, and this change is already supported by the new IntelliJ IDEA 2017.3 EAP.

Happy developing!

Original Link

“Refined” Gradle

Maven and Gradle are the most widespread build automation tools. I use both of them in my projects.

Gradle is based on Groovy and is more flexible than Maven. Thus, every developer can customize Gradle scripts to meet specific requirements.

I decided to define a set of conventions to use Gradle. My goal was to make Gradle scripts for each project more unified, concise, manageable, and readable. In this article, I would like to share the results.

The GitHub repo with the template helper scripts and sample scripts is here.

I have three types of dependencies in the projects:

  • Artifacts from remote repositories

  • Modules that are shared among several projects

  • Modules that are used in one project only

Accordingly, I use a separate Gradle script with helper closures and methods for each type of dependency.

These scripts contain code or serve as a facade for other scripts. You can find samples of these scripts in my GitHub repo here. In fact, the helper scripts are Gradle Script plugins.

Eventually, project Gradle scripts look like the following.

Root project settings.gradle:

ext.commonDir = '../Common'
apply from: '../vlfsoftCommon.gradle'
// AND, if there are many internal sub-projects in the project with dependencies of each other
apply from: 'projectCommon.gradle'

includeCommonSdAnnotations()
includeCommonUtil()

//include ':vlfsoft.refined.gradle.app'
//include ':vlfsoft.refined.gradle.module'
// OR, if there are many internal sub-projects in the project with dependencies of each other
includeProjectRefinedGradleApp()
includeProjectRefinedGradleModule()

Root project build.gradle:

buildscript {
    ext.commonDir = '../Common'
    apply from: '../common.gradle'
    repositories buildscriptCommonRepo
    dependencies buildscriptKotlinPluginDep
    apply from: '../vlfsoftCommon.gradle'
    // AND, if there are many internal sub-projects in the project with dependencies of each other
    apply from: 'projectCommon.gradle'
}

allprojects allprojectsCommon
allprojects allprojectsKotlin

task wrapper(type: Wrapper) {
    gradleVersion = customGradleVersion
}

Sub-project build.gradle:

apply plugin: 'application'

dependencies kotlinStdlibDep
dependencies commonSdAnnotationsDep
dependencies commonUtilDep

/*
dependencies {
    compile project(':vlfsoft.refined.gradle.module')
}
*/
// OR, if there are many internal sub-projects in the project with dependencies of each other
dependencies projectRefinedGradleModuleDep

apply from: '../../jarProperties.gradle'
// https://stackoverflow.com/questions/26469365/building-a-self-executable-jar-with-gradle-and-kotlin
mainClassName = 'vlfsoft.refined.gradle.ApplicationKt'
jarAppProp()

To avoid “version hell,” I use a single shared Gradle script with versions of the artifacts.

versions.gradle:

//--BEG: gradle
ext.customGradleVersion = '4.3.1'
//--END: gradle

...

//--BEG: html parsers
// https://mvnrepository.com/artifact/org.jsoup/jsoup
ext.jsoupVersion = '1.10.3'
//--END: html parsers

...

Original Link

Eclipse Oxygen.1a: Java 9, JUnit 5, and Gradle [Video]

Oxygen.1a (4.7.1a) was released two weeks after Oxygen.1 (4.7.1) on October 11, 2017. Oxygen.1 includes bug fixes and minor improvements. Since Oxygen.1a, the Eclipse IDE runs out of the box with Java 9 and supports development for Java 9 as well as testing with JUnit 5. Many thanks to all of you who have contributed in any way.

Original Link

State of Gradle Java 9 Support

What Gradle Supports as of Version 4.2.1

As of Gradle 4.2.1, building and running Java applications using major distributions of JDK 9 such as Oracle JDK9, OpenJDK9, and Azul JDK9 is fully supported. Further, cross-compilation (built by JDK9 but runs on JDK8) is supported.

Some builds will break when upgrading to Java 9, regardless of build tool used. The Java team has made good and necessary changes to the JDK to facilitate better software architecture and security, but this has meant removing access to some APIs. Even if your project is ready, some tools and Gradle plugins have not yet been updated to work with Java 9.

There are no convenience methods for consuming and assembling Multi-Release JARs, but you can take a look at this MRJAR-gradle example if you want to use them.

Java Modules AKA Jigsaw Support

If you’re not yet familiar with the Java 9 Platform Module System, also known as Project Jigsaw, you should read Project Jigsaw: Module System Quick-Start Guide. The motivation and terminology are well explained in The State of the Module System.

A module is defined as “a named, self-describing collection of code and data” whereby packages are treated as code boundaries and are explicitly exported and required. Non-exported packages are not visible to module consumers, and furthermore, two modules cannot export the same packages, nor can they have the same internal packages. This means that packages cannot be “split” or duplicated between multiple modules, or compilation will fail.

Here is a guide that shows how to use Java modules with Gradle today. It walks you through the steps needed to tell Gradle to use the modulepath and not classpath when compiling Java sources and patch modules for testing purposes.

A bottom-up approach (converting libraries with no dependencies first) is recommended if you wish to incrementally convert to Java 9 modules. After all, modules are consumable as regular JARs. Be mindful of automatic modules when “legacy” JARs are added to the modulepath.

Achieving Encapsulation With the Java Library Plugin

One of the two major goals of the Java 9 module system is to provide better software architecture through strong encapsulation. Gradle 3.4 introduced the Java Library Plugin that enforces strong encapsulation for libraries by separating api dependencies (those meant to be exposed to consumers) from implementation dependencies whose internals are not leaked to consumers.

This, of course, does not eliminate the use of Java classpaths, another stated goal of Java modules. You can learn about the motivation and usage of the Java Library Plugin in this post. It’s worth noting that the Java Library Plugin is useful for projects using Java 7 and above — you do not need to migrate to Java 9 to have some stronger encapsulation.

Here’s what this means in practice, given this example library:

apply plugin: 'java-library'

name = 'mylibrary'
group = 'com.mycompany'

dependencies {
    api project(':model')
    implementation 'com.google.guava:guava:18.0'
}

Let’s presume we have an application that uses mylibrary.

public class MyApplication {
    public static void main(String... args) throws Exception {
        // This does not compile using 'java-library' plugin
        Set<String> strings = com.google.common.collect.ImmutableSet.of("Hello", "Goodbye");

        // This compiles and runs
        Foo foo = new com.mycompany.model.internal.Foo();

        // This also compiles and runs
        Class clazz = MyApplication.class.getClassLoader().loadClass("com.mycompany.model.internal.Foo");
        Foo reflectiveFoo = (Foo) clazz.getConstructor().newInstance();
    }
}

You can see that you get some of the benefits just by adopting Gradle’s Java Library Plugin. If you are migrating to Java modules, you can use the rough mapping below (a sketch of the resulting module descriptor follows the list):

  • implementation dependency => requires module declaration
  • api dependency => requires transitive module declaration
  • compileOnly dependency => requires static module declaration
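
As a rough illustration of this mapping, here is what a module descriptor for the mylibrary example above might look like. All module names below are hypothetical (including the ones assumed for Guava, the model project, and an imaginary compile-only annotations library); they only serve to show which requires form corresponds to which dependency declaration:

module com.mycompany.mylibrary {
    // api project(':model') -> consumers of mylibrary also see the model types
    requires transitive com.mycompany.model;

    // implementation 'com.google.guava:guava:18.0' -> internal detail, not re-exported
    requires com.google.guava;

    // a compileOnly dependency -> needed at compile time, optional at run time
    requires static com.mycompany.annotations;

    // the packages that form mylibrary's public API
    exports com.mycompany.mylibrary;
}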

Next Steps

Stay tuned for updates on first-class Java modules support in Gradle.

You can use the Building Java 9 Modules guide to learn how to use Java modules with Gradle today.

Original Link

Jersey Client Dependencies for JAX-RS 2.1

Jersey is the reference implementation of JAX-RS 2.1. The following Jersey dependencies are required in order to run a JAX-RS 2.1 client with JSON-P and JSON-B mapping outside of an enterprise container.

Jersey client version 2.26 implements the JAX-RS 2.1 API. The following dependencies add the client runtime to a project:

<dependency>
    <groupId>org.glassfish.jersey.core</groupId>
    <artifactId>jersey-client</artifactId>
    <version>2.26</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.inject</groupId>
    <artifactId>jersey-hk2</artifactId>
    <version>2.26</version>
</dependency>

If JSON objects should be mapped using JSON-P, the following dependency is required as well:

<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-processing</artifactId>
    <version>2.26</version>
</dependency>

This already adds an implementation for JSON-P 1.1, namely Glassfish javax.json.

If JSON objects should be mapped using JSON-B, the following dependency is added instead of or additionally to the previous one:

<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-binding</artifactId>
    <version>2.26</version>
</dependency>

This transitively adds the Yasson dependency, the reference implementation of JSON-B.

These dependencies enable the project to use the JAX-RS 2.1 client together with JSON-P or JSON-B binding:

Client client = ClientBuilder.newClient();
WebTarget target = client
        .target("http://localhost:8080/jersey-test/resources/tests");

Response response = target.request(MediaType.APPLICATION_JSON_TYPE).get();
JsonArray customers = response.readEntity(JsonArray.class);

response = target.path("123").request(MediaType.APPLICATION_JSON_TYPE).get();
Customer customer = response.readEntity(Customer.class);

...

public class Customer {

    @JsonbTransient
    private long id;

    private String name;

    // getters & setters
}

And for our Gradle users, here is the equivalent of the Maven declarations:

compile 'org.glassfish.jersey.core:jersey-client:2.26'
compile 'org.glassfish.jersey.inject:jersey-hk2:2.26'
compile 'org.glassfish.jersey.media:jersey-media-json-processing:2.26'
compile 'org.glassfish.jersey.media:jersey-media-json-binding:2.26'

Original Link

Modular Java 9 Apps With Gradle and Chainsaw

For the last few months, I have observed the development and adoption of the Java 9 module system, also known as Project Jigsaw. The final result is impressive, however, I also see a lot of confusion among regular developers in how to actually use modules. The tool support does not help, either. The final days of Jigsaw development were really hot, and some important decisions were made at the last minute. No doubt the authors of many popular tools had little time to make the necessary changes and, in my opinion, on September 21, we woke up a bit unprepared for Jigsaw.

My first attempt to add a module descriptor to an existing, small application was unsuccessful. I spent four hours figuring out how to solve package splits among third-party dependencies and how to add the necessary CLI switches to my Gradle build to make everything work. I found the experimental-jigsaw Gradle plugin, but I realized that it has its own limitations, too (e.g. I could not add a mocking library to my tests, and I was restricted to JUnit 4!). However, the experience was worth it: I realized that I could use it to make a better Jigsaw plugin for Gradle and bring modules closer to developers. So, here we go with the Gradle Chainsaw Plugin!

Sample Project

Let’s begin with a sample project structure:

  • /src/main/java/com/example/foo: the source code of our application. You can create a couple of classes here; their content is not relevant.
  • /src/test/java/com/example/foo: the directory for unit tests.
  • build.gradle: our Gradle build script.
  • settings.gradle: the Gradle settings script that accompanies it.

The initial build script allows us to work on a regular Java 9 application without modules, using JUnit 5 for testing:

buildscript {
    repositories {
        mavenLocal()
        mavenCentral()
    }
    dependencies {
        classpath 'org.junit.platform:junit-platform-gradle-plugin:1.0.0'
    }
}

plugins {
    id 'java'
    id 'idea'
}

apply plugin: 'org.junit.platform.gradle.plugin'

group 'com.example.foo'
version '0.1.0-SNAPSHOT'

ext.log4jVersion = '2.6.2'
ext.junitVersion = '5.0.0'
ext.mockitoVersion = '2.10.0'

sourceCompatibility = 1.9
targetCompatibility = 1.9

repositories {
    mavenLocal()
    jcenter()
}

junitPlatform {
    filters {
        engines {
            include 'junit-jupiter'
        }
    }
    logManager 'org.apache.logging.log4j.jul.LogManager'
}

dependencies {
    testCompile('org.junit.jupiter:junit-jupiter-api:' + junitVersion)
    testCompile('org.mockito:mockito-core:' + mockitoVersion)
    testRuntime('org.junit.jupiter:junit-jupiter-engine:' + junitVersion)
    testRuntime('org.apache.logging.log4j:log4j-core:' + log4jVersion)
    testRuntime('org.apache.logging.log4j:log4j-jul:' + log4jVersion)
}

The last file is settings.gradle:

rootProject.name = 'modular'

Creating a Module Descriptor

The first step to introducing modules is creating a module descriptor in our /src/main/java/module-info.java file:

module com.example.foo {
    exports com.example.foo;
}

Module descriptors use a Java-like meta-language. From the language point of view, a module is a collection of related packages and controls the visibility rules for them. Because of that, the module name should always be derived from the root package; it’s just like saying, “I’m taking control over this package namespace.” This is also the official recommendation for naming modules given by chief Java architect Mark Reinhold and other Java experts. In our example, all the classes are in the com.example.foo package, so this will also be the name of our module.

Note that this recommendation is not enforced by the Java compiler. The first reason is that modules are added to a language with a 20+ year history, and they must work with the existing code base. The second reason is that, originally, the idea for naming modules was different, and it was changed in the final months of development. We’ll get back to that later.

When choosing the module name and creating packages, we must pay attention to a couple of rules:

  • In Java 9, a package cannot exist in more than one module. If we try to use two modules with the same package inside them, we get a compilation error. This is the so-called package split.

  • Modules allow you to control the visibility of individual packages. If you make the package com.example.foo available to other modules, its subpackages are not exported unless you explicitly export them, too (see the sketch after this list).

  • We should not change the name of our module once we release it, or we risk module hell.
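
To make the second rule concrete, here is a small, hedged sketch of a descriptor (the internal subpackage is hypothetical) where only the root package is exported and everything else stays hidden:

module com.example.foo {
    // only this package is visible to other modules
    exports com.example.foo;

    // com.example.foo.internal is NOT exported, so other modules cannot compile
    // against it; exposing it would require an explicit extra line:
    // exports com.example.foo.internal;
}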

To explain module hell, imagine the following situation:

  1. You release foo-1.0 that uses com.example.abc as a module name.

  2. To use the classes of another module, we must declare it in the module descriptor with the requires clause. Someone creates a project, bar, that depends on foo-1.0 and requires com.example.abc in the module descriptor.

  3. We release foo-1.1 and change the module name to com.example.def.

  4. Someone else creates a project, joe, and requires com.example.def in the module descriptor.

  5. Yet another person creates a project, moo, that depends both on bar and joe.

The build system should deal with the version conflict and perhaps use foo-1.1. However, the module descriptors refer to foo as both com.example.abc and com.example.def. From the perspective of Java, these are two distinct modules with exactly the same packages inside. What’s this? It’s a package split, and we know it’s illegal. Project moo doesn’t compile, and its authors can do nothing about it… So, never change the name of a released module.
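
A hedged sketch of the descriptors involved (each declaration would live in its own module-info.java, and the bar, joe, and moo module names are hypothetical) shows why moo is stuck:

// bar was written against foo-1.0
module com.example.bar {
    requires com.example.abc;
}

// joe was written against foo-1.1
module com.example.joe {
    requires com.example.def;
}

// moo needs both, but no single version of foo provides both names, and putting
// foo-1.0 and foo-1.1 on the module path together is an illegal package split
module com.example.moo {
    requires com.example.bar;
    requires com.example.joe;
}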

Let’s see what we can do within module descriptors. There are a couple of available constructs, summarized by the following table:

requires <module name>; Our module can use the exported content of another module.
requires transitive <module name>; Other modules that build on our module can use the contents of the required module, too.
requires static <module name>; Optional dependency — the module is required for compilation, but unless we use some class that uses its content, it doesn’t have to be present at runtime.
exports <package name>; Package export — the contents of the given package are visible to other modules during compilation and runtime (reflection).
exports <package name> to <module list>; We can also export the package to specific modules. The list of module names uses a comma as a separator.
opens <package name>; Weaker version of export — the package content is not available at a compile time, but runtime access (reflection) is possible.
opens <package name> to <module list>; Opening access to certain modules.
uses <service interface>; Hook for ServiceLoader: Our module exposes an extension point, where other modules can provide implementations.
provides <service interface> with <list of implementations>; Hook for ServiceLoader: Our module provides implementations to extension points exposed by other modules.

The most commonly used statements will be, of course, requires and exports. If we want to use some class from another module, we must require it (and make sure that the class is in the exported package). If we want to publish some API for another module, we must export the package with it.
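
For the less common statements from the table above, a short, hedged sketch of a descriptor may help; every module, package, and class name here is hypothetical and only illustrates the syntax:

module com.example.plugins {
    requires com.example.api;

    // visible to everyone at compile time and run time
    exports com.example.plugins.spi;

    // visible only to the listed module
    exports com.example.plugins.testing to com.example.plugins.tests;

    // reflection-only access, e.g. for frameworks that read configuration classes
    opens com.example.plugins.config;

    // ServiceLoader hooks: we consume Codec implementations and provide one ourselves
    uses com.example.plugins.spi.Codec;
    provides com.example.plugins.spi.Codec with com.example.plugins.internal.JsonCodec;
}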

Keeping private things private is one of the main reasons for getting interested in modules. Ask yourself: how many times have you released a new version of a library and someone complained that you broke something because they used some internal API? How many times did you import ImmutableList from jersey.repackaged.com.google.guava instead of the “official” Guava implementation? How many times did you fix such imports in others’ code? How many times did something break because of it? Modules solve this issue much like the private keyword does for class internals: non-exported packages are simply hidden from the outside world.

Now it should be obvious what our module does — and what to do if we want to extend it.

How to Build It

OK, it’s finally time for Gradle. Currently, it does not offer any official support for compiling Jigsaw modules, so we need an extra plugin: Gradle Chainsaw. All we need to do is to load it and select the module name:

plugins {
    id 'java'
    id 'idea'
    id 'com.zyxist.chainsaw' version '0.1.3'
}

// ...

javaModule.name = 'com.example.foo'

The plugin needs the module name to configure the compiler and the JVM with the necessary CLI switches. This name could, of course, be extracted from the module descriptor. The only issue is that Gradle must run on earlier Java versions, where we don’t have access to the official JDK APIs for modules.

The plugin does one extra thing that is not done by JVM — it verifies that the module name matches the root package name, effectively enforcing the official recommendation.

Let’s Add Tests

Unit tests are an interesting use case for the Java module system. The /src/test/java directory doesn’t contain any module descriptor. In the test phase, both the compiler and JVM see tests as a part of our module (of course, they are not packaged into the final JAR). Thanks to that, the tests can use the same package names and have unrestricted access to all the classes and interfaces.

There is, however, a small problem with using additional testing libraries. They are modules, too, so we need to require them, but we don’t want to put them into our module descriptor for obvious reasons.

Chainsaw helps us here in two ways. Firstly, it allows you to specify additional test modules in your build.gradle file. They are added dynamically to the compiler and the JVM so that we can use their APIs in our tests. Secondly, it automatically detects JUnit 4 and JUnit 5 and configures the necessary modules for us.

Let’s try to add a Mockito module to our tests. The necessary dependency is already present in the build file, so we just need to configure the module:

// note: this module name was auto-generated by Java
// starting from version 2.10.5, Mockito will use the org.mockito module name.
javaModule.extraTestModules = ['mockito.core']

dependencies {
    // ...
    testCompile('org.mockito:mockito-core:' + mockitoVersion)
    // ...
}

Let’s write a simple unit test to make sure everything works:

package com.example.foo;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;

public class FooTest {
    @Test
    public void shouldDoSomething() {
        Object obj = mock(Object.class);
        assertTrue(true);
    }
}

The project should compile and we should see the test report after running it.

Problematic Dependencies

One could say: Okay, all problems solved! Unfortunately, not this time. Remember the section about package splits and adding modules to a language with 20+ years of history? It means that there are plenty of JARs out there that break Jigsaw’s rules, and if we try to use them in our application, we can run into trouble.

When we try to use a non-modular JAR archive in our modular application, Java creates a so-called automatic module from it:

  • It exports all the packages to everyone and has access to all other modules (+ the good old classpath, which is not normally used in Jigsaw).

  • The module name is chosen automatically:

    • From the Automatic-Module-Name entry in the JAR manifest.

    • Or (when missing) it is generated from the archive name — this is the remnant of the old idea for naming modules I mentioned earlier.

The first issue that can break our code is a missing Automatic-Module-Name entry in a JAR manifest. It was added to Jigsaw at the last minute, and many developers simply didn’t hear about it. Basically, it allows you to choose a stable module name for the future before you migrate to Jigsaw. If it is missing, Java generates the module name from the name of the JAR archive, which should be considered pretty random; it has nothing to do with the actual content of the module. Even worse, if the JAR archive uses some name that contains a restricted keyword in Java, e.g. foo-native, we won’t be able to require such a module in our descriptor because the compiler would complain that native cannot be used as an identifier. And if we make a dependency on such a name, and later the authors choose another name, we’ll run into module hell.

So, here’s the simple rule we should follow:

Never publish a module in public repositories (Maven Central, JCenter, corporate Artifactory, etc.) that depends on JARs with neither module descriptors, nor Automatic-Module-Name entries in the manifest.

Unfortunately, Chainsaw won’t help us with module naming issues because it would be strange to extract JAR archives and modify them on the fly. However, there is one more issue with legacy third-party dependencies. They can produce package splits! The most unfortunate example of such a dependency is jsr305.jar. The artifact was produced by the team behind the FindBugs static analysis tool and is a random collection of annotations inspired by JSR-305. The JSR itself was rejected and abandoned, but the archive “illegally” inherited the claimed package name javax.annotation. Unfortunately, there is another JSR (250) that was released and uses the same package, too. Java 9 refuses to compile and run our application if both JARs appear on the module path. JSR-305 is a dependency of several popular libraries, such as Google Guava, so there is a good chance that your application will have it.

To deal with this issue, we must patch JSR-250 module with the annotations from jsr305.jar:

javaModule.patchModules 'com.google.code.findbugs:jsr305': 'javax.annotation:jsr250-api'

dependencies {
    patch 'com.google.code.findbugs:jsr305:1.3.9'
    compile 'javax.annotation:jsr250-api:1.0'
    compile 'com.google.guava:guava:23.1-jre'
}

How it works:

  1. We instruct Chainsaw to patch the jsr250-api dependency with jsr305.

  2. We add the jsr305 dependency to a special configuration called patch. Gradle still needs the actual JAR, but it must be removed from all other configurations so that, e.g., the compiler couldn’t see it.

  3. The plugin generates the necessary CLI switches to the compiler and JVM.

  4. The annotations from jsr305 are visible as a part of the jsr250.api module.

The last thing we need is to require the module in our descriptor:

module com.example.foo {
    requires jsr250.api; // note: auto-generated, unstable name!
    requires guava;      // note: auto-generated, unstable name!
}

Gradle remembers the patch during compilation, test execution, and when running the application with the run task. However, if we are going to deploy our JAR somewhere and start it manually (e.g. with some scripts), we must remember that the patch is not preserved in the final archive and we must add the necessary --patch-module CLI switches on our own. Chainsaw helps us during the build because the --patch-module switch requires the full path to the JAR archive, and it’s hard to imagine hardcoding them into the build script.

Patching is intended as a temporary solution until the authors of third-party libraries solve the issues with their code. It is expected that the need for patching is going to decrease over time. However, for now, it’s the only way to use many popular tools.

Summary

It will take some time until the Java ecosystem gets used to modules. I expect that the tool support will improve over time and eventually, Gradle will get first-class support for Jigsaw. For now, you can use the Chainsaw plugin to play with modules and see how they fit into your applications.

The Polish version of the article is also published on my blog!

Original Link