Kotlin: How to Implement a REST API With Spring Boot, Spring Data, and H2 DB

In this article, we are going to talk about Kotlin. I have developed a very simple REST API in Kotlin using Spring Boot, Spring Data, and the H2 in-memory DB.

Kotlin and Spring Boot work well together.

Original Link

How to Verify API Responses in Katalon Studio

Verifying an API response is always a challenging task in API testing. Some testers find it hard to understand the JSON/XML response format, while others struggle to get the value of a specific key to verify. It is even harder when the response is large and has a complex data structure.

Starting from version 5.8.3, Katalon Studio has introduced a new feature that solves those issues in a single step. In this tutorial, you will learn how to use this feature to verify API responses.

Original Link

Using the Spring Boot Rest Service

I have used many frameworks in the past, even Spring. Yesterday, I was helping out one of the new joiners, and while showcasing the power of Spring Boot, I wrote this blog.

What I am using:

Original Link

Simple Apache NiFi Operations Dashboard (Part 2): Spring Boot

If you missed Part 1, you can check it out here.

Simple Apache NiFi Operations Dashboard – Part 2

To access data to display in our dashboard, we will use some Spring Boot 2.0.6 Java 8 microservices to call Apache Hive 3.1.0 tables in HDP 3.0 on Hadoop 3.1.
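As a rough illustration of that setup, here is a minimal sketch of the kind of Spring Boot microservice described above: a REST endpoint that reads rows from a Hive table over JDBC. This is not code from the article; the endpoint path, table name, and column are assumptions for illustration, and the JdbcTemplate is assumed to be wired to a Hive JDBC data source.

    import java.util.List;

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class DashboardController {

        private final JdbcTemplate jdbcTemplate;

        // Assumes Spring Boot has configured a DataSource that points at Hive via its JDBC driver.
        public DashboardController(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // Returns rows from a hypothetical Hive table for the dashboard to display.
        @GetMapping("/api/nifi/status")
        public List<String> status() {
            return jdbcTemplate.queryForList(
                    "SELECT status_message FROM nifi_status LIMIT 100", String.class);
        }
    }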

Original Link

Spring Boot and Swagger: Documenting RESTful Services

This guide will help you use Swagger with Spring Boot to document your RESTful services. We will learn how to expose automated Swagger documentation from your application. We will also add documentation to the REST API with Swagger annotations.
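As a taste of what the setup looks like, here is a minimal sketch of a Springfox configuration class that exposes automated Swagger documentation for a Spring Boot application. The base package is a placeholder; the article's own project may configure this differently.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    import springfox.documentation.builders.PathSelectors;
    import springfox.documentation.builders.RequestHandlerSelectors;
    import springfox.documentation.spi.DocumentationType;
    import springfox.documentation.spring.web.plugins.Docket;
    import springfox.documentation.swagger2.annotations.EnableSwagger2;

    @Configuration
    @EnableSwagger2
    public class SwaggerConfig {

        // Scans the given package and generates Swagger 2 documentation for its controllers.
        @Bean
        public Docket api() {
            return new Docket(DocumentationType.SWAGGER_2)
                    .select()
                    .apis(RequestHandlerSelectors.basePackage("com.example.rest"))
                    .paths(PathSelectors.any())
                    .build();
        }
    }

With this in place, Springfox typically serves the generated JSON at /v2/api-docs and, if the springfox-swagger-ui dependency is added, the Swagger UI at /swagger-ui.html.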

You Will Learn

  • What is the need for documenting your RESTful services?
  • How do you document RESTful web services?
  • Why Swagger?
  • How can you use Swagger UI?
  • How do you automate the generation of Swagger Documentation from RESTful Web Services?
  • How do you add custom information to Swagger Documentation generated from RESTful Web Services?
  • What is Swagger UI?


Project Code Structure

The following screenshot shows the structure of the project we will create.

Original Link

Create REST API Requests Manually With Katalon Studio

Katalon Studio offers great UI support for creating REST API requests, but if you are an advanced Katalon user, you can do it manually and benefit from the large library of Katalon support methods for API requests. This tutorial will show how to create REST API requests manually and handle responses to make your code robust and effective.

Requirements

You should be familiar with Katalon Studio and know the basics of Java/Groovy.

Original Link

Test-Driven Development With a Spring Boot REST API

I deal with integration tests for RESTful applications a lot; however, I had not particularly tried Test-Driven Development (TDD) methodologies. Therefore, I decided to give it a try, and I can now say that I quite like it. I shall assume you already have some basic ideas of TDD, so I shall forgo an introduction to it.

In this article, let us look at how we can adopt TDD methodology in implementing a Spring Boot RESTful application.
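To make the idea concrete, here is a minimal sketch of a test-first starting point using Spring Boot's MockMvc with JUnit 4. The /employees endpoint and the expected JSON field are invented for illustration; the point is that the test is written before the controller exists and drives its implementation.

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.junit4.SpringRunner;
    import org.springframework.test.web.servlet.MockMvc;

    @RunWith(SpringRunner.class)
    @SpringBootTest
    @AutoConfigureMockMvc
    public class EmployeeApiTest {

        @Autowired
        private MockMvc mockMvc;

        // Written first: it fails until a controller serving GET /employees/{id} exists.
        @Test
        public void getEmployeeReturnsOkWithName() throws Exception {
            mockMvc.perform(get("/employees/1"))
                   .andExpect(status().isOk())
                   .andExpect(jsonPath("$.name").value("Alice"));
        }
    }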

Original Link

A Front End Developer’s Guide for Creating Serverless Applications

At some point, while each of us was growing up, we wished that the adults in our lives would just disappear. They made our lives miserable with their arbitrary rules and restrictions, but they got to do all kinds of fun things. After all, how hard could it be to drive a car, and why did we need them to watch an R-rated movie or to cross the street?

Well, front-end developers have a similar fantasy. Their wish is that one day, all back-end developers will move out of their way and let them take control. Front-end developers are responsible for the things people see and use. All the back-end developers need to do is create REST APIs and HTTP endpoints that work and return well-formed JSON, and the front-end developers do the rest.

Original Link

Auto-Generate a REST API From a Database With Spring

If you have an existing database and you want to write a front-end to work with it, you often find yourself spending hours setting up the glue between the database and the front-end. It would be a much more efficient use of your time if you could simply press a button and generate the entire REST API directly.

Speedment is a tool that uses code generation to produce a tailored domain model based on an existing database structure. Queries can either be sent directly to the database or served from in-memory for better performance. In this article, we will use the official plugin called "spring-generator" for Speedment Free to generate a complete Spring application to serve a simple REST API. We will add support for paging, remote filtering and sorting, without writing a single line of code.

Original Link

How to Invoke an External REST API from a Cloud Function

In a previous blog post, I showed how to create your first cloud function (plus a video). It’s very likely that your cloud function will need to invoke an external REST API. The following tutorial will show you how to create such a function (it’s very easy).

  1. Sign into an IBM Cloud account
  2. Click Catalog
  3. Remove the label:lite filter and type
  4. Click on Functions box
  5. Click the Start Creating button
  6. Click Create Action
  7. For Action Name, enter “ajoke” and click the Create button. A new cloud function will be created with a Hello World message
  8. Replace the function code with the following code, which invokes a 3rd-party REST API that returns a random joke:
    
    var request = require("request");

    function main(params) {
      var options = { url: "https://api.icndb.com/jokes/random", json: true };
      return new Promise(function (resolve, reject) {
        request(options, function (err, resp) {
          if (err) {
            console.log(err);
            return reject({ err: err });
          }
          return resolve({ joke: resp.body.value.joke });
        });
      });
    }
    
    • The code is simple. It uses the request Node.js package to connect to an external REST API
    • The external REST API returns a random joke
    • A JavaScript Promise is used for invoking the REST API
    • At the end, the cloud function returns a response in JSON format
  9. Now click the Save button to save the code. Once the code is saved, the button will change to Invoke. Click the button to invoke the function. In the right-hand panel you should see output with a random joke:
    { "joke": "Project managers never ask Chuck Norris for estimations... ever."
    }

This is how it looks inside the IBM Cloud Functions editor:


Of course, you can also build and test a cloud function using the CLI. I’ll cover that in another blog post.

For now, let’s expose this cloud function as a REST API so we can invoke it outside the console. In fact, you will be able to invoke it directly from the browser once we make it a Web Action.

  1. On the left-hand side, click Endpoints
  2. Check Enable as Web Action and click Save
  3. Copy the URL and enter into a browser’s address bar

Here is how it looks in Firefox:


That was easy, right?

In this blog post, you learned how to create a cloud function that invokes an external (3rd party) API. It’s very likely that even the simplest application will need to get data from an external API, so this is a good example/template to have.

Original Link

Universal Server Side for Mobile Platforms: Myth or Reality?

Have you ever thought about how to make the mobile server side universal? Our mobile team found a way to do it effectively.

Among the custom projects we work on, mobile application development is one of the most popular options. Quite often, the mobile front end is accompanied by server-side functionality to share data between different platforms. Needless to say, our mobile team experts look for ways to optimize their work on this process. In this post, we would like to share our experience of applying a more general approach to creating a universal server side for different platforms.

Here, we should mention cloud solutions like Parse, but in practice, only a few clients are fine with such solutions, and there are many reasons for that. Setting aside each project’s specifics, which can vary widely, some tasks come up over and over again in one form or another:

  1. Build the REST API for a mobile platform.
  2. Create user management for both REST and web.
  3. Prepare a platform for the Web UI.

Usually, a REST API is the most requested architecture when a customer defines requirements for the server side. A dedicated server side is often combined with a Web UI offering functionality similar to the mobile platforms.

In search of the best result, we have optimized our approach and learned how to develop a universal server side. Below are the tips gained from completed projects.

Tip one: Partially get rid of the second task. We do not create user management for the web separately.

Tip two: Drop the third task completely. There is no need to prepare a platform for the Web UI. That’s not a mistake: we discarded preparation of the platform for the Web UI and stopped supporting non-REST authentication.

Tip three: Use a REST API to develop the web part instead.

What is the point? Server-side developers work only on building the REST API. This API then serves both mobile clients and Web UI developers. It keeps the work focused and hits every target: a win-win-win approach. Everybody is satisfied.

The Benefits

Resource Saving

We significantly decrease the load on the server-side developers. According to a rough estimate, we can save up to 20% of the time by building only the API, and even more if we generate APIs or use ready-made open-source solutions as the base for API creation.

Versatility

We get an adaptable solution for universal use. It can be used not only for a mobile application but also for easy-to-integrate social network applications, for example.

Flexibility

The solution does not depend on the Web UI. A customer can even outsource the creation of the Web UI while restricting access to the server architecture. As another option, the client’s own team can develop the Web UI without interfering with the server-side logic we are responsible for.

Extra Security

The Web UI and the server side can be hosted on different servers, providing additional safety. Hypothetical hackers will have no idea where your server logic is hosted.

Adaptability

We can use different technologies to create the Web UI and the server logic. The project’s server side can be developed in Java/Grails, while the Web UI may be built with PHP/Node.js/PR. Developers no longer need to think about technology compatibility.

Focus

In addition, a server-side developer can concentrate on logic, processes, and optimization, having no limitations from the UI.

Dependability

With this approach, the delivered universal server side is completely dependable. The developer can run any kind of tests, covering 100% of the solution, without dealing with UI testing at all.

The make-it-universal approach is advantageous from a technical point of view, as it is more flexible to manage, especially when the project is divided into stages and outsourced to several companies. The result is secure, independent, and easier to verify and scale. Now you know a piece of our secret magic.

Original Link

Why Is Swagger JSON Better Than Swagger Java Client?

  • It’s the old way of creating web-based REST API documents through the Swagger Java library.

  • It’s easy for Java developers to code. 

  • All API descriptions of the endpoints are added as Java annotation parameters (see the sketch after this list).

  • The Swagger API dependency has to be added to the Maven configuration file, pom.xml.

  • It creates performance overhead because of the extra processing time for creating the Swagger GUI files (CSS, HTML, JS, etc.). Parsing the annotation logic on the controller classes adds overhead as well, and it makes the build a little heavier to deploy for microservices, where build size should be smaller.

  • The code looks dirty because extra code has to be added to the Spring MVC controller classes through annotations. If the description of the API contract is long, it makes the code unreadable and hard to maintain.

  • Any change in an API contract requires a Java build change and redeployment, even for simple text changes, like the API description text.

  • The biggest challenge is sharing the contract with the client/QA/BA teams before the actual development and making frequent amendments. Service consumers may change their requirements often, and then it is very difficult to make these changes in code, regenerate the Swagger GUI HTML pages, and share the updated Swagger dashboard by redeploying to the actual dev/QA environment.

  • You can copy and paste the swagger_api_doc.json file content into https://editor.swagger.io/. It helps you modify the content and creates an HTML page. The Swagger GUI provides a web-based interface similar to Postman.
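For reference, here is a minimal sketch of the annotation-driven style described in the list above, using the io.swagger.annotations package on a Spring MVC controller. The controller, path, and descriptions are placeholders rather than code from the article.

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @Api(value = "Employee API")
    @RestController
    public class EmployeeController {

        // Even a wording change in this description means a Java rebuild and redeployment.
        @ApiOperation(value = "Get an employee by id",
                      notes = "Returns a single employee, or 404 if none exists")
        @GetMapping("/employees/{id}")
        public String getEmployee(@PathVariable final Long id) {
            return "employee-" + id;
        }
    }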

    Original Link

    Creating an Oracle Rest Data Services Docker Image

    Oracle has added Oracle Rest Data Services (ORDS) to the Docker build files family on GitHub, which means that you can now easily dockerize ORDS. If you don’t know yet what ORDS is, it’s a free technology from Oracle that allows you to REST-enable your Oracle databases. More specifically, with ORDS you can just fire off regular REST calls to modify or retrieve data from one or many Oracle databases without having to know how to write SQL (not that knowing SQL is a bad thing!). In modern application and microservices architectures REST has become more and more popular for exchanging information. ORDS enables you to easily exchange data from and to Oracle databases via REST without having to write lines and lines of code yourself. For more information on what ORDS is and what it can do, check out Jeff Smith’s blog post about ORDS.

    What You Need

    • The ORDS install zip file, which you can download from the Oracle Technology Network
    • A Java Server JRE 8 Docker base image
    • An Oracle Database running somewhere that ORDS should expose via REST

    Environment

    My environment as of writing this blog is as follows:

    • Oracle Linux 7.4 (4.1.12-112.14.15.el7uek.x86_64)
    • Docker 17.06.2-ol (docker-engine.x86_64 17.06.2.ol-1.0.1.el7)
    • Java Server JRE 1.8.0_172
    • Oracle Rest Data Services 18.1.1

    Building the Oracle Rest Data Services Docker Image

    Obtaining the Required Files

    As with all the GitHub build files from Oracle, you first have to download them. There are various ways to do that. For example, you can clone the Git repository directly, or you can download a zip file from GitHub containing all the required files; this option is best for people who don’t know Git. The URL of the zip file is https://github.com/oracle/docker-images/archive/master.zip, which you can download via wget or with your browser and then unzip:

    
    $ wget https://github.com/oracle/docker-images/archive/master.zip
    --2018-05-04 14:20:26-- https://github.com/oracle/docker-images/archive/master.zip
    Resolving github.com (github.com)... 192.30.255.113, 192.30.255.112
    Connecting to github.com (github.com)|192.30.255.113|:443... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://codeload.github.com/oracle/docker-images/zip/master [following]
    --2018-05-04 14:20:26-- https://codeload.github.com/oracle/docker-images/zip/master
    Connecting to codeload.github.com (codeload.github.com)|192.30.255.120|:443... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: unspecified [application/zip]
    Saving to: 'master.zip' [ <=> ] 7,601,317 5.37MB/s in 1.3s 2018-05-04 14:20:28 (5.37 MB/s) - 'master.zip' saved [7601317] $ unzip master.zip
    Archive: master.zip
    184cded65f147766c41b2179ee31ba5551185507
    creating: docker-images-master/
    extracting: docker-images-master/.gitattributes
    inflating: docker-images-master/.gitignore
    extracting: docker-images-master/.gitmodules
    inflating: docker-images-master/CODEOWNERS
    inflating: docker-images-master/CONTRIBUTING.md
    ...
    ...
    ...
    creating: docker-images-master/OracleRestDataServices/
    extracting: docker-images-master/OracleRestDataServices/.gitignore
    inflating: docker-images-master/OracleRestDataServices/COPYRIGHT
    inflating: docker-images-master/OracleRestDataServices/LICENSE
    inflating: docker-images-master/OracleRestDataServices/README.md
    creating: docker-images-master/OracleRestDataServices/dockerfiles/
    ...
    ...
    ...
    inflating: docker-images-master/README.md
    $
    
    

    Next you will have to download the ORDS Installer zip file. As said above, you can get it from the Oracle Technology Network:

    
    $ ls -al ords.18*.zip
    -rw-r--r--. 1 oracle oracle 61118609 May 4 14:38 ords.18.1.1.95.1251.zip
    
    

    Building the Java Server JRE Base Image

    The ORDS Docker image is built on the oracle/serverjre:8 base image. That image is not on Docker Hub, so Docker cannot just pull it automatically. Instead, you first have to build that image before you can proceed to building the ORDS image. Building the Java Server JRE image is straightforward. First, you need to download the latest server-jre-8*linux-x64.tar.gz file from the Oracle Technology Network:

    
    $ ls -al server-jre*.tar.gz
    -rw-r--r--. 1 oracle oracle 54817401 May 4 14:30 server-jre-8u172-linux-x64.tar.gz
    
    

    Once you have the file, copy it into the OracleJava/java-8 folder and run the Java Server JRE Docker build.sh script:

    
    $ cd docker-images-master/OracleJava/java-8
    $ cp ~/server-jre-8u172-linux-x64.tar.gz .
    $ ./build.sh
    Sending build context to Docker daemon 54.82MB
    Step 1/5 : FROM oraclelinux:7-slim
    ---> 9870bebfb1d5
    Step 2/5 : MAINTAINER Bruno Borges <bruno.borges@oracle.com>
    ---> Running in b1847c1a647e
    ---> 3bc9baedf526
    Removing intermediate container b1847c1a647e
    Step 3/5 : ENV JAVA_PKG server-jre-8u*-linux-x64.tar.gz JAVA_HOME /usr/java/default
    ---> Running in 50998175529b
    ---> 017598682688
    Removing intermediate container 50998175529b
    Step 4/5 : ADD $JAVA_PKG /usr/java/
    ---> 6704a281de8b
    Removing intermediate container b6b6a08d3c38
    Step 5/5 : RUN export JAVA_DIR=$(ls -1 -d /usr/java/*) && ln -s $JAVA_DIR /usr/java/latest && ln -s $JAVA_DIR /usr/java/default && alternatives --install /usr/bin/java java $JAVA_DIR/bin/java 20000 && alternatives --install /usr/bin/javac javac $JAVA_DIR/bin/javac 20000 && alternatives --install /usr/bin/jar jar $JAVA_DIR/bin/jar 20000
    ---> Running in 281fe2343d2c
    ---> f65b2559f3a5
    Removing intermediate container 281fe2343d2c
    Successfully built f65b2559f3a5
    Successfully tagged oracle/serverjre:8
    $
    
    

    That’s it! Now you have a brand new oracle/serverjre:8 Docker image:

    
    $ docker images
    REPOSITORY TAG IMAGE ID CREATED SIZE
    oracle/serverjre 8 f65b2559f3a5 45 seconds ago 269MB
    oracle/database 12.2.0.1-se2 323887b92e8f 3 weeks ago 6.38GB
    oracle/database 12.2.0.1-ee 08d230aa1d55 3 weeks ago 6.39GB
    oracle/database 12.1.0.2-se2 b4999e09453e 3 weeks ago 5.08GB
    oracle/database 12.1.0.2-ee aee62bc26119 3 weeks ago 5.18GB
    oracle/database 11.2.0.2-xe 50712d409891 3 weeks ago 809MB
    oraclelinux 7-slim 9870bebfb1d5 5 months ago 118MB
    $
    
    

    Building the ORDS Docker Image

    Once you have the oracle/serverjre:8 Docker image on your machine, you can go ahead and build the actual ORDS Docker image. This is also a rather easy task: just put the installer zip file into the OracleRestDataServices/dockerfiles/ folder and run the buildDockerImage.sh script:

     $ cd ../../OracleRestDataServices/dockerfiles/
    $ mv ~/ords.18.1.1.95.1251.zip .
    $ ./buildDockerImage.sh
    Checking if required packages are present and valid...
    ords.18.1.1.95.1251.zip: OK
    ==========================
    DOCKER info:
    Containers: 2
    Running: 0
    Paused: 0
    Stopped: 2
    Images: 10
    Server Version: 17.06.2-ol
    Storage Driver: btrfs
    Build Version: Btrfs v4.9.1
    Library Version: 102
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
    runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
    init version: 949e6fa
    Security Options:
    seccomp
    Profile: default
    selinux
    Kernel Version: 4.1.12-112.14.15.el7uek.x86_64
    Operating System: Oracle Linux Server 7.4
    OSType: linux
    Architecture: x86_64
    CPUs: 2
    Total Memory: 7.795GiB
    Name: localhost.localdomain
    ID: GZ5A:XQWB:F5TE:56JE:GR3J:VCXJ:I7BO:EGMY:K52K:JAS3:A7ZC:BOHQ
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Experimental: true
    Insecure Registries:
    127.0.0.0/8
    Live Restore Enabled: false ==========================
    Proxy settings were found and will be used during build.
    Building image 'oracle/restdataservices:18.1.1' ...
    Sending build context to Docker daemon 61.14MB
    Step 1/10 : FROM oracle/serverjre:8
    ---> f65b2559f3a5
    Step 2/10 : LABEL maintainer "gerald.venzl@oracle.com"
    ---> Running in d90ea0da50ae
    ---> 8959f49c8b7b
    Removing intermediate container d90ea0da50ae
    Step 3/10 : ENV ORDS_HOME /opt/oracle/ords INSTALL_FILE ords*.zip CONFIG_PROPS "ords_params.properties.tmpl" STANDALONE_PROPS "standalone.properties.tmpl" RUN_FILE "runOrds.sh"
    ---> Running in 84ae509c3b08
    ---> 595f8228d224
    Removing intermediate container 84ae509c3b08
    Step 4/10 : COPY $INSTALL_FILE $CONFIG_PROPS $STANDALONE_PROPS $RUN_FILE $ORDS_HOME/
    ---> 992d15c48302
    Removing intermediate container 3088dd27464e
    Step 5/10 : RUN mkdir -p $ORDS_HOME/doc_root && chmod ug+x $ORDS_HOME/*.sh && groupadd dba && useradd -d /home/oracle -g dba -m -s /bin/bash oracle && cd $ORDS_HOME && jar -xf $INSTALL_FILE && rm $INSTALL_FILE && mkdir -p $ORDS_HOME/config/ords && java -jar $ORDS_HOME/ords.war configdir $ORDS_HOME/config && chown -R oracle:dba $ORDS_HOME
    ---> Running in a84bc25b8eb3
    May 04, 2018 6:42:22 PM
    INFO: Set config.dir to /opt/oracle/ords/config in: /opt/oracle/ords/ords.war
    ---> 6ccfc92744ed
    Removing intermediate container a84bc25b8eb3
    Step 6/10 : USER oracle
    ---> Running in c41c77b49add
    ---> 2bd11f2f8008
    Removing intermediate container c41c77b49add
    Step 7/10 : WORKDIR /home/oracle
    ---> bc31d79cfb4a
    Removing intermediate container efef9dccf774
    Step 8/10 : VOLUME $ORDS_HOME/config/ords
    ---> Running in dfbc7ee6f967
    ---> 0ee4e7ed71b1
    Removing intermediate container dfbc7ee6f967
    Step 9/10 : EXPOSE 8888
    ---> Running in deaebbf2950b
    ---> 25d777caccca
    Removing intermediate container deaebbf2950b
    Step 10/10 : CMD $ORDS_HOME/$RUN_FILE
    ---> Running in 0c2270a7fac4
    ---> 4ee1ac73e1f9
    Removing intermediate container 0c2270a7fac4
    Successfully built 4ee1ac73e1f9
    Successfully tagged oracle/restdataservices:18.1.1 Oracle Rest Data Services version 18.1.1 is ready to be extended: --> oracle/restdataservices:18.1.1 Build completed in 19 seconds. $
    
    

    And now you have a brand new ORDS Docker image, in my case containing ORDS 18.1.1:

    
    $ docker images
    REPOSITORY TAG IMAGE ID CREATED SIZE
    oracle/restdataservices 18.1.1 4ee1ac73e1f9 52 seconds ago 395MB
    oracle/serverjre 8 f65b2559f3a5 10 minutes ago 269MB
    oracle/database 12.2.0.1-se2 323887b92e8f 3 weeks ago 6.38GB
    oracle/database 12.2.0.1-ee 08d230aa1d55 3 weeks ago 6.39GB
    oracle/database 12.1.0.2-se2 b4999e09453e 3 weeks ago 5.08GB
    oracle/database 12.1.0.2-ee aee62bc26119 3 weeks ago 5.18GB
    oracle/database 11.2.0.2-xe 50712d409891 3 weeks ago 809MB
    oraclelinux 7-slim 9870bebfb1d5 5 months ago 118MB
    
    

    There is one last thing to add here: by default, the buildDockerImage.sh script runs an md5sum checksum on the ORDS zip file just to make sure that the file is intact. You see this as the very first output of the build script. You can skip that checksum by passing the -i option. In general, there is no need to skip the checksum step; however, ORDS is released quarterly, and it may be that the GitHub repo hasn’t been updated with the latest checksum files yet. In such a case, you can still build your latest and greatest ORDS Docker image by bypassing the checksum via -i.

    Now that you have an ORDS Docker image, it’s time to run an actual container from it. As ORDS is a REST server in front of an Oracle Database, you will need an Oracle Database that ORDS can REST-enable for you. I already have my Oracle Database Docker images on the same machine, so I will go ahead and REST-enable a database within a Docker container. However, I should point out that having an Oracle Database inside a Docker container is not a requirement for running ORDS inside Docker! In fact, you can quite happily manage many Oracle databases with ORDS, regardless of where they are running: Docker, locally, on a server, in the cloud, etc.

    Setting Up the Docker Network

    If your Oracle Database is not running inside a Docker container, you can skip this step!

    Because the database and ORDS are both running within Docker, I first have to set up a Docker network that the two containers can use to communicate with each other. Creating the network is easily done with a simple docker network create command:

    
    $ docker network create ords-database-network
    b96c9fd9062f3aa0fb37db3dcd6319c6e3daefb99f52b82819e06f84b3ad38a0
    $ docker network ls
    NETWORK ID NAME DRIVER SCOPE
    0cad4fa350c8 bridge bridge local
    0e6f604bfce9 host host local
    26709c337f2f none null local
    b96c9fd9062f ords-database-network bridge local
    $
    
    

    Running an Oracle Database Docker Container

    If your Oracle Database is not running inside a Docker container, you can skip this step! See Creating an Oracle Database Docker image for how to run Oracle Database in Docker.

    Once you have the network defined you can now start a new Oracle Database container. The --network option in the docker run command will allow you to attach your database container to the Docker network:

    
    $ docker run -d --name ords-db --network=ords-database-network -v /home/oracle/oradata:/opt/oracle/oradata oracle/database:12.2.0.1-ee
    074e2e85ccdaf00d4ae5d93f56a2532154070a8f95083a343e78a753d748505b
    $
    
    

    Running an Oracle Rest Data Services Docker Container

    To run an ORDS Docker container you will have to know the following details:

    • ORACLE_HOST: Host on which the Oracle Database is running (default: localhost)
    • ORACLE_PORT: Port on which the Oracle Database is running (default: 1521)
    • ORACLE_SERVICE: Oracle Database service name that ORDS should connect to (default: ORCLPDB1)
    • ORACLE_PWD: SYS password of the Oracle Database
    • ORDS_PWD: ORDS user password you want to use
    • A volume in which to store the ORDS configuration files

    Once you have all of these you can go ahead and run your ORDS Docker container via the docker run command.

    Note: Because my database is also running inside Docker, I will have to specify the --network parameter in order to allow the two containers to communicate. The hostname for my database host inside the Docker network is the same as my Database Docker container name. I will therefore use -e ORACLE_HOST=ords-db.

    If you do not have the Oracle Database running in Docker, you can skip the --network parameter!

    
    $ docker run --name ords \
    > -p 8888:8888 \
    > --network=ords-database-network \
    > -e ORACLE_HOST=ords-db \
    > -e ORACLE_PORT=1521 \
    > -e ORACLE_SERVICE=ORCLPDB1 \
    > -e ORACLE_PWD=LetsDocker \
    > -e ORDS_PWD=LetsORDS \
    > -v /home/oracle/ords:/opt/oracle/ords/config/ords:rw \
    > oracle/restdataservices:18.1.1
    May 07, 2018 3:47:43 AM
    INFO: Updated configurations: defaults, apex_pu
    May 07, 2018 3:47:43 AM oracle.dbtools.installer.InstallerBase log
    INFO: Installing Oracle REST Data Services version 18.1.1.95.1251
    May 07, 2018 3:47:43 AM oracle.dbtools.installer.Runner log
    INFO: ... Log file written to /opt/oracle/ords/logs/ords_install_core_2018-05-07_034743_00762.log
    May 07, 2018 3:47:44 AM oracle.dbtools.installer.Runner log
    INFO: ... Verified database prerequisites
    May 07, 2018 3:47:45 AM oracle.dbtools.installer.Runner log
    INFO: ... Created Oracle REST Data Services schema
    May 07, 2018 3:47:45 AM oracle.dbtools.installer.Runner log
    INFO: ... Created Oracle REST Data Services proxy user
    May 07, 2018 3:47:45 AM oracle.dbtools.installer.Runner log
    INFO: ... Granted privileges to Oracle REST Data Services
    May 07, 2018 3:47:48 AM oracle.dbtools.installer.Runner log
    INFO: ... Created Oracle REST Data Services database objects
    May 07, 2018 3:47:56 AM oracle.dbtools.installer.Runner log
    INFO: ... Log file written to /opt/oracle/ords/logs/ords_install_datamodel_2018-05-07_034756_00488.log
    May 07, 2018 3:47:57 AM oracle.dbtools.installer.Runner log
    INFO: ... Log file written to /opt/oracle/ords/logs/ords_install_apex_2018-05-07_034757_00832.log
    May 07, 2018 3:47:59 AM oracle.dbtools.installer.InstallerBase log
    INFO: Completed installation for Oracle REST Data Services version 18.1.1.95.1251. Elapsed time: 00:00:15.397 2018-05-07 03:48:00.688:INFO::main: Logging initialized @1507ms to org.eclipse.jetty.util.log.StdErrLog
    May 07, 2018 3:48:00 AM
    INFO: HTTP and HTTP/2 cleartext listening on port: 8888
    May 07, 2018 3:48:00 AM
    INFO: The document root is serving static resources located in: /opt/oracle/ords/doc_root
    2018-05-07 03:48:01.505:INFO:oejs.Server:main: jetty-9.4.z-SNAPSHOT, build timestamp: 2017-11-21T21:27:37Z, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
    2018-05-07 03:48:01.524:INFO:oejs.session:main: DefaultSessionIdManager workerName=node0
    2018-05-07 03:48:01.525:INFO:oejs.session:main: No SessionScavenger set, using defaults
    2018-05-07 03:48:01.526:INFO:oejs.session:main: Scavenging every 600000ms
    May 07, 2018 3:48:02 AM
    INFO: Creating Pool:|apex|pu|
    May 07, 2018 3:48:02 AM
    INFO: Configuration properties for: |apex|pu|
    cache.caching=false
    cache.directory=/tmp/apex/cache
    cache.duration=days
    cache.expiration=7
    cache.maxEntries=500
    cache.monitorInterval=60
    cache.procedureNameList=
    cache.type=lru
    db.hostname=ords-db
    db.password=******
    db.port=1521
    db.servicename=ORCLPDB1
    db.username=ORDS_PUBLIC_USER
    debug.debugger=false
    debug.printDebugToScreen=false
    error.keepErrorMessages=true
    error.maxEntries=50
    jdbc.DriverType=thin
    jdbc.InactivityTimeout=1800
    jdbc.InitialLimit=3
    jdbc.MaxConnectionReuseCount=1000
    jdbc.MaxLimit=10
    jdbc.MaxStatementsLimit=10
    jdbc.MinLimit=1
    jdbc.statementTimeout=900
    log.logging=false
    log.maxEntries=50
    misc.compress=
    misc.defaultPage=apex
    security.disableDefaultExclusionList=false
    security.maxEntries=2000 May 07, 2018 3:48:02 AM
    WARNING: *** jdbc.MaxLimit in configuration |apex|pu| is using a value of 10, this setting may not be sized adequately for a production environment ***
    May 07, 2018 3:48:02 AM
    WARNING: *** jdbc.InitialLimit in configuration |apex|pu| is using a value of 3, this setting may not be sized adequately for a production environment ***
    May 07, 2018 3:48:02 AM
    INFO: Oracle REST Data Services initialized
    Oracle REST Data Services version : 18.1.1.95.1251
    Oracle REST Data Services server info: jetty/9.4.z-SNAPSHOT 2018-05-07 03:48:02.710:INFO:oejsh.ContextHandler:main: Started o.e.j.s.ServletContextHandler@48eff760{/ords,null,AVAILABLE}
    2018-05-07 03:48:02.711:INFO:oejsh.ContextHandler:main: Started o.e.j.s.h.ContextHandler@402f32ff{/,null,AVAILABLE}
    2018-05-07 03:48:02.711:INFO:oejsh.ContextHandler:main: Started o.e.j.s.h.ContextHandler@573f2bb1{/i,null,AVAILABLE}
    2018-05-07 03:48:02.721:INFO:oejs.AbstractNCSARequestLog:main: Opened /tmp/ords_log/ords_2018_05_07.log
    2018-05-07 03:48:02.755:INFO:oejs.AbstractConnector:main: Started ServerConnector@2aece37d{HTTP/1.1,[http/1.1, h2c]}{0.0.0.0:8888}
    2018-05-07 03:48:02.755:INFO:oejs.Server:main: Started @3576ms
    
    

    Now that ORDS is up and running, you can start REST-enabling your database. Note that all configuration files are within a volume, in my case -v /home/oracle/ords:/opt/oracle/ords/config/ords:rw. If you would like to change any of the ORDS configuration, you can just do so in the volume and then restart the container, if needed.

    Original Link

    Quick Start With Finagle

    Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle’s code is protocol agnostic, simplifying the implementation of new protocols.

    Today, I am going to implement a Finagle example using Scala, where I send a request with some message and get back a future response.

    First, let’s define a service. Here, we define a service to receive an HTTP request:

    def apply(request: Request) = {
      request.method match {
        case Method.Post =>
          request.uri match {
            case "/" =>
              log.error("in post")
              val str = request.getContentString()
              // any business logic
              val response = Response(Version.Http11, Status.Ok)
              response.contentString = "Hello..!! " + str
              Future.value(response)
            case _ =>
              log.error("REQUEST NOT FOUND")
              Future.value(Response(Version.Http11, Status.NotFound))
          }
        case Method.Get =>
          request.uri match {
            case "/" =>
              val str = request.getContentString()
              // any business logic
              val response = Response(Version.Http11, Status.Ok)
              response.contentString = "Thank You " + str
              Future.value(response)
            case _ =>
              log.error("REQUEST NOT FOUND")
              Future.value(Response(Version.Http11, Status.NotFound))
          }
      }
    }
    

    Here, we get the request. After that, we simply match on the type of request, either GET or POST. Request.uri tells us the endpoint of the request.

    Then, we initialize and start our server:

    import java.net.InetSocketAddress
    import com.twitter.finagle.Http
    import com.twitter.finagle.builder.{Server, ServerBuilder}

    class ComputeServerBuilder {
      val response = new ComputeResponse
      val address = new InetSocketAddress(10000)

      def start: Server =
        ServerBuilder()
          .stack(Http.server)
          .bindTo(address)
          .name("HttpServer")
          .build(response)
    }
    

    Last, let’s define a client to consume this server.

    computeServerBuilder = new ComputeServerBuilder
    server = computeServerBuilder.start
    client = ClientBuilder()
      .stack(Http.client)
      .hosts(computeServerBuilder.address)
      .hostConnectionLimit(1)
      .build()
    

    Now, we can send any number of requests using this client to our server.

    Here, I wrote some test cases that hit the server and check whether we get a successful response, as shown below:

    val postRequest = Request(Version.Http11, Method.Post, "/")
    postRequest.setContentString("Knoldus")
    val postFutureResponse = client(postRequest).asScala
    postFutureResponse.map(response => println(response.getContentString()))
    postFutureResponse.map(response => assert(response.status === Status.Ok && response.contentString.contains("Knoldus")))
    

    To run it yourself, you can also clone my sample from my Git repo.

    When I started implementing Finagle, the challenge I faced was handling futures across two different APIs, i.e., Scala’s and Twitter’s. To deal with this, I used implicit conversions of futures between these two APIs. That code is also available in my repo.

    To learn about the core of Finagle, you can also read this great blog: Finagle: Controlling the Future Of RPC systems which helped me a lot while I was learning.


    Original Link

    Rethinking REST Practices: An Introduction to GraphQL With AWS AppSync

    A Better Way to Think About Data

    The REST Is History

    Requesting and receiving lists is the basic premise of data transfer. This is simplistic, but it gets to the root of why we’ve developed the technologies and best practices to pass data using web services.

    RESTful APIs have grown to serve the needs of numerous individuals, startups, and enterprise companies across the world. They are useful and productive, and the concepts surrounding them are relatively standardized. If you don’t know how to create one, you can quickly find information on building a great API that can grow to fit your needs. That’s when things get complicated…

    If you start digging into REST, you’ll realize there’s quite a bit more to throwing lists. There are common threads that many people encounter when developing an API, and you begin to run into many of the same questions so many others have asked before, such as:

    • How strictly should you adhere to the principles of REST? How do you decide which ones count?
    • How should you handle versioning? Should you bother?
    • How do you want to structure your objects? What is the shape of the data that works best for the clients of your API?
    • Are you sending the appropriate data to your users? Are you sending them the information they don’t need?
    • Concerning related or hierarchical data, are they able to efficiently query for what they need from nested structures?
    • Are users able to easily figure out what API endpoints are available and how they should be used?

    There are many ways to approach these. It boils down to communicating the structures that a given endpoint will return or accept. The cascade of questions that results from the choices made here will ripple through from the back-end to the client. The secondary issue is that these questions and choices are not at all uncommon. There are answers to these issues that follow best practices. But there is still plenty of ambiguity involved when attempting to build a flexible API that works well. These are the commonly tolerated situations.

    If you hadn’t already guessed, there is a solution that frees us from the dogma of REST and allows us to solve all these issues in a declarative, powerful, and fun way. That solution is GraphQL. In this blog, I’ll provide an introduction to the GraphQL specification with code examples.

    Specification and Structure

    GraphQL is a specification, first and foremost. It enables your data interactions to be declarative. The implementation of this spec entails creating a schema that describes the types of data (more concretely, the shape) that is exposed to the client of your API. It is not a replacement for a database, it is not an object-relational mapping system — it is a set of tools to replace (and as we’ll see later, augment) a traditional REST API. This can be used in combination with all the business layers and software tiers you may already use to interact with your data.

    The interactions take the form of queries that are sent to what is traditionally a single GraphQL endpoint. You may often see this endpoint resemble something similar to www.mysite.com/graphql.

    A client (your application) can send requests to the server that contain Queries (fetching data) and Mutations (manipulating data). A Query or Mutation is received by the GraphQL server and broken down into its constituent parts, and the data is resolved and sent back. It can be broken down because the GraphQL server uses a schema to know the different types of data it can resolve and which resolvers to use for those types.

    Whoa! There’s a lot there. Let’s break this down, starting with the basics.

    Building From Types

    The basis of all the great abilities unlocked by GraphQL comes down to Types. Types enable the structured nature of your API calls. They allow the server to intelligently return data that adheres to that structure, and they allow the client to introspect that data to discover the structure it is allowed to consume. This introspection provides a development experience and tooling that goes way beyond the generated documentation (think Swagger) allowed by a traditional REST API.

    Introspection, tooling, and lack of ceremony mean less silliness, by default.

    Many of the issues present in a REST API are either not possible or, at least, not easily reproduced. Many of the Commonly Tolerated Situations simply disappear. Let’s go deeper into these ideas by looking at some basic examples of types.

    Here is an example of a basic type for a Game that includes a title, description, and rating.

    type Game {
      title: String!
      description: String
      rating: Int!
    }
    

    This type declaration declares the types of the object that can be returned when referencing a game in a Query or Mutation. The title is declared as String!: the value must be a String, and the exclamation point means the value is non-nullable (aka required). When the Game type is retrieved or created, this contract is enforced, and an error is generated if the value doesn’t exist.

    There are five scalar types in GraphQL, by default. These include String, Int, Float, Boolean, and ID. All types declared in the schema boil down to these (and in custom server implementations, you can implement your own). This means that all user-defined types, like our Game type above, must have resolvers provided so that the data can be gathered from a service, database, or other data source.

    Resolvers

    Resolvers are used to retrieve or manipulate the data defined by Types.

    For example, our Game type listed above might have a resolver that queries a database for all games from a table. It might call a service that in turn calls a DAO layer that queries the table. It might even have a resolver that calls a REST API that returns a list of games. All a resolver should do is worry about the type of data it is defined to resolve. In this way, they can adapt any data source or service into one unified API.

    Resolvers provide an adaptable API surface by resolving types from a service, DAO, or even another REST API.
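    As a concrete (if hypothetical) illustration, here is a minimal resolver sketch using the graphql-java library, one possible custom server implementation rather than the AppSync setup this series builds toward. The Game and GameService types below are stand-ins for whatever service, DAO, or downstream REST API actually supplies the data.

    import java.util.List;

    import graphql.schema.DataFetcher;
    import graphql.schema.idl.RuntimeWiring;

    public class GameWiring {

        // Stand-in for the article's Game type.
        public static class Game {
            public String title;
            public String description;
            public int rating;
        }

        // Stand-in for any data source: a database table, a DAO, or another REST API.
        public interface GameService {
            List<Game> findAll();
        }

        private final GameService gameService;

        public GameWiring(GameService gameService) {
            this.gameService = gameService;
        }

        // The resolver only worries about producing the data for its field.
        private final DataFetcher<List<Game>> allGamesFetcher =
                environment -> gameService.findAll();

        public RuntimeWiring buildWiring() {
            return RuntimeWiring.newRuntimeWiring()
                    .type("Query", builder -> builder.dataFetcher("allGames", allGamesFetcher))
                    .build();
        }
    }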

    This is key. The power of this type-based approach lets you think of your data more as buckets or lists, rather than relations between structures. The relations between types can be established as a loosely coupled possibility, not a strongly coupled outcome. We’ll come back to this distinction later.

    Queries and Mutations

    Queries and mutations are the two types of interactions possible with a GraphQL implementation. Queries request data from the server. Mutations interact with data by causing side-effects and returning results. They depend on resolvers to do the operations on the data. They provide the API interface the client uses to interact with the data. Let’s look at some basics.

    Queries

    Let’s look at a basic example of a query to get a list of games using the Game type we defined earlier.

    allGames {
      title
      description
      rating
    }
    

    This is the structure of a basic request that can be parsed by a GraphQL server. This tells the server to run the allGames query that is defined in the schema and return the title, description, and rating properties for each game. This uses the resolver defined by the schema for the allGames query to actually fetch data from a data source.

    Mutations

    Here is an example of a basic mutation that creates a game.

    createGame(title: String!, description: String, rating: Int!) {
      title
      description
      rating
    }
    

    This example shows how to define a mutation that accepts title, description, and rating as variables. The resolver for this createGame mutation would be responsible for taking the variables and inserting them into a table, calling a service, or otherwise handling the operation that the mutation entails. This operation would complete and return the data that was inserted.

    We will take a deeper look at queries and mutations in a little bit. Next, let’s discuss how all of these above concepts come together on the client and server.

    Client and Server

    Client

    The client is a little out of scope for this article. Suffice it to say, it can range from a simple HTML page sending POST or GET requests to the use of a GraphQL client library such as Apollo. We will cover that in a future post…

    Server

    A GraphQL server is the backbone of your API. This is what receives requests and processes them to return data or perform mutations. This can be a custom implementation (Node, .NET, Java, etc..) or a hosted service instance (like AppSync or Graphcool).

    With a traditional REST API, you might have a series of endpoints that define the data that can be requested by the client. When you send a request to a specific endpoint, the data you receive as a response or actions that are performed are defined ahead of time by the server.

    This is in sharp contrast to a GraphQL endpoint, as any specific request you send is processed based on what is contained in that request. The server handles this by describing the possibility of the response, not the response itself. The client is responsible for requesting what it needs or the operations it wants to perform based on the queries and mutations that are allowed.

    A query doesn’t describe what is returned. Rather, it describes what can be returned. The contract is not a set of data, the contract is a shape of data.

    The implementation you choose will be responsible for parsing these operations and reacting to them. Let’s take a look at what this means in practice by starting our own instance with AWS AppSync.

    More to come in the next installment… stay tuned!

    In Closing

    I think that for most cases, GraphQL is an objectively better alternative to RESTful APIs. Its flexibility shines most in its type system and the technology this enables. This lends itself to great tooling, a wonderful development experience, and a rapid iteration cycle that can’t be matched by traditional API development. We’ll soon check out AWS AppSync to see how this compares with a traditional REST API. Follow me on Twitter at @mwarger if you would like to know when the next article drops.

    For more on API development with GraphQL, check HowToGraphQL. If you have any questions or comments, please leave them below.

    Editor’s Note: If you like this post, you won’t want to miss Mat Warger’s upcoming presentation at the Nebraska.Code() Conference on Friday, June 8th: Bootstrap Your App With AWS Amplify!

    Original Link

    Streaming ETL Lookups With Apache NiFi and Apache HBase

    When we are ingesting tabular/record-oriented data, we often want to enrich the data by replacing IDs with descriptions or vice versa. There are many transformations that may need to happen before the data is in a happy state. When you are denormalizing your data in Hadoop and building very wide tables, you often want descriptions or other data to enhance its usability. Only one call to get everything you need is nice, especially when you have 100 trillion records.

    We are utilizing a lot of things built already. Make sure you read Abdelkrim's first three lookup articles. I added some fields to his generated data for testing.

    I want to do my lookups against HBase, which is a great NoSQL store for lookup tables and generating datasets.

    First, I created an HBase table to use for lookups.

    Create HBase table for lookups:

    create 'lookup_', 'family'
    

    Table with data:

    Most people would have a pre-populated table for lookups. I don’t, and since we are using a generator to build the lookup IDs, I am building the lookup descriptions with a REST call at the same time. We could also have a flow (if you don’t find the lookup, add it). We could also have another flow ingesting the lookup values and add/update those when needed.

    Here’s a REST API to generate product descriptions.

    I found this cool API that returns a sentence of meat words. I use this as our description because MEAT!

    Call the Bacon API!

    Let’s turn our plain text into a clean JSON document:

    Then, I store it in HBase as my lookup table. You probably already have a lookup table. This is a demo and I am filling it with my generator. This is not a best practice or a good design pattern. This is a lazy way to populate a table.

    Example Apache NiFi flow (using Apache NiFi 1.5):

    Generate some test data: 

    Generate a JSON document (note the empty prod_desc):

    { "ts" : "${now():format('yyyymmddHHMMSS')}", "updated_dt" : "${now()}", "id_store" : ${random():mod(5):toNumber():plus(1)}, "event_type" : "generated", "uuid" : "${UUID()}", "hostname" : "${hostname()}", "ip" : "${ip()}", "counter" : "${nextInt()}", "id_transaction" : "${random():toString()}", "id_product" : ${random():mod(500000):toNumber()}, "value_product" : ${now():toNumber()}, "prod_desc": ""
    }
    

    Look up your record:

    This is the magic. We take in our records; in this case, we are reading JSON records and writing JSON records. We could choose CSV, Avro, or other formats. We connect to the HBase Record Lookup Service. We replace the current prod_desc field in the record with what is returned by the lookup. We use the id_product field as the lookup key. Nothing else is needed to change records in the stream.

    HBase record lookup service:

    HBase client service used by HBase record lookup service:

    We can use UpdateRecord to clean up, transform, or modify any field in the records in the stream.

    Original file:

    { "ts" : "201856271804499", "updated_dt" : "Fri Apr 27 18:56:15 UTC 2018", "id_store" : 1, "event_type" : "generated", "uuid" : "0d16967d-102d-4864-b55a-3f1cb224a0a6", "hostname" : "princeton1", "ip" : "172.26.217.170", "counter" : "7463", "id_transaction" : "5307056748245491959", "id_product" : 430672, "value_product" : 1524855375500, "prod_desc": ""
    }
    

    Final file:

    [ { "ts" : "201856271804499", "prod_desc" : "Pork chop leberkas brisket chuck, filet mignon turducken hamburger.", "updated_dt" : "Fri Apr 27 18:56:15 UTC 2018", "id_store" : 1, "event_type" : "generated", "uuid" : "0d16967d-102d-4864-b55a-3f1cb224a0a6", "hostname" : "princeton1", "ip" : "172.26.217.170", "counter" : "7463", "id_transaction" : "5307056748245491959", "id_product" : 430672, "value_product" : 1524855375500
    } ]
    

    Original Link

    A Compendium of Testing Apps

    I bundled up a bunch of web pages into a testing app. I have now restructured the code for that application and added in a REST API Test application as well.

    I’ve also moved the code to a new repo to make it easier to download. You can find the “Evil Tester’s Compendium of Testing Apps” at this link and download it from the releases page.

    What’s New?

    This new release has the “REST Listicator,” which is a small REST API I created for training people in REST APIs. So if you downloaded the previous version, this has a whole new app in it.

    Why Do I Have to Download It?

    When you are practicing, you might not be online.

    There might be no wifi you can see. You might be:

    • On a train,
    • on a plane,
    • on a boat,
    • or even in a box afloat.

    Up a tree, or in a car?

    It does not matter; where you are, or where you be.

    Once you download the jar, you can test it near or far.

    Here, or there, or anywhere.

    • In the dark,
    • or in the park.
    • With a mouse,
    • or in the house.

    Flexibility, you see.

    For where you test is not up to me.

    What Changed?

    Technically…

    • The project is now an aggregated maven project with multiple modules.
    • I’ve split some code into reusable libraries that can be released individually.
    • I can configure the modules to run as individual apps if necessary.
    • Started adding tests for some of the sub-modules (more to do).

    All of this means that I have more to blog and write about and more opportunities for approaching the testing in more interesting ways.

    I might have gone overboard with splitting up the modules, but it seems to impose a good discipline on my development process and helps keep the abstractions clean, so I’ll probably do more of that in the future.

    With any bulk upgrade and system merge, there is the chance that something goes wrong. I know that I do not have enough automated functional verification coverage in the build yet. But I have:

    I think it's good enough for a version one.

    Have fun.

    Original Link

    Creating a REST API: Handling POST, PUT and DELETE Requests

    In the last post, you added logic to the API for GET requests, which retrieved data from the database. In this post, you will finish building out the basic CRUD functionality of the API by adding logic to handle POST, PUT, and DELETE requests on the employees endpoint.

    Adding the Routing Logic

    To keep the routing logic simple, you will route all HTTP verbs through the existing route path (with the optional id parameter). Open the services/router.js file and replace the current routing logic (lines 5-6) with the following code:

    Original Link

    Creating a REST API With Node.js and Oracle Database

    Node.js and REST APIs go hand in hand. In fact, Ryan Dahl (the creator of Node.js) once described the focus of Node.js as “doing networking correctly.” But where should you start when building a REST API with Node.js? What components should be used and how should things be organized? These are difficult questions to answer — especially when you’re new to the Node.js ecosystem.

    You could choose to use low-level packages and lots of custom code to build an API that’s highly optimized for a specific workload. Or you could use an all-in-one framework like Sails.js, where many of the decisions have been made for you. There is no right or wrong answer: the best option will depend on the type of project you’re working on and where you want to invest your time.

    In this series, I’ll assume you’re new to Node.js and show you how to build a REST API that attempts to balance granular control and magical black boxes. The goal will be to create an API that supports basic CRUD functionality on the EMPLOYEES table in the HR sample schema. By the end of this series, you should be able to make decisions about what’s best for your own projects.

    This post will provide the links to all the posts in the series, details on the target environment, and an overview of the high-level components that I’ll use in the project. The series will include the following parts (links will become active as I publish the content).

    1. Web Server Basics
    2. Database Basics
    3. Handling GET requests
    4. Handling PUT, POST, and DELETE requests
    5. Adding pagination, sorting, and filtering to GET requests

    The sample code for each part will be made available in the javascript/rest-api directory of the oracle-db-examples repo on GitHub.

    Target Environment

    For consistency, the instructions I use throughout the series will assume that you’re working with the Oracle Database Developer VM and that Node.js version 8 or higher is installed in the VM. See this post for instructions setting up this type of environment.

    I generally prefer to run Node.js in my host OS and communicate with the database in the guest VM. You may adapt the instructions to do this if you want — just be aware that doing so will require additional installation steps to get the driver working on your platform.

    High-Level Components

I like to organize my REST APIs into four core components or layers. Incoming HTTP requests will usually touch each of these layers in turn. There may be a lot more going on depending on the features an API supports, but these components are required. A minimal sketch showing how the layers fit together follows the list.

• Web server: The role of the web server is to accept incoming HTTP requests and send responses. There are many options for web servers in Node.js. At the lowest level, you could use the built-in modules, such as http, https, and http2. For most folks, those modules will be too low-level. I like to use Express for the web server as it's very popular, flexible, and easy to use. There are many other options, such as restify, kraken, and hapi. You might consider some of these options as you experiment more with APIs over time.
• Router: Routing logic is used to define the URL endpoints and HTTP methods that the API will support. At runtime, this layer will map incoming requests to the appropriate controller logic. The implementation of the routing logic is almost always tied to the choice of web server, as most include a means to define routes. I'll use the Router class that comes with Express to create a single router for the app.
    • Controllers: The controller layer will be comprised of one JavaScript function for each URL path/HTTP method combination defined in the router. The function will inspect the request and pull data from it (URL, body, and HTTP headers) as needed, interact with appropriate database APIs to fetch or persist data, and then generate the HTTP response.
    • Database APIs: The database APIs will handle the interactions with the database. This layer will be isolated from the HTTP request and response. Some developers will prefer to use an Object Relational Mapper (ORM) such as Sequelize to abstract away the database as much as possible. For those folks, this layer is often called the model layer because ORMs work by defining models in the middle tier. I’m going to stay lower level and work directly with the Oracle Database driver for Node.js (node-oracledb).
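Here is a minimal, self-contained sketch of how these four layers can be wired together with Express; the stubbed database object, handler name, and port are illustrative only and not the project's actual code:

const express = require('express');

// Database API layer (stubbed here): isolated from HTTP requests and responses.
const employeesDb = {
  async find(id) {
    return id ? [{ id: Number(id), name: 'Placeholder Employee' }] : [];
  }
};

// Controller: inspects the request, calls the database layer, builds the response.
async function getEmployees(req, res, next) {
  try {
    const rows = await employeesDb.find(req.params.id);
    res.status(200).json(rows);
  } catch (err) {
    next(err);
  }
}

// Router: maps URL endpoints and HTTP methods to controller functions.
const router = express.Router();
router.get('/employees/:id?', getEmployees);

// Web server: accepts incoming HTTP requests and delegates to the router.
const app = express();
app.use('/api', router);
app.listen(3000, () => console.log('Listening on port 3000'));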

    Click here to get started building a REST API with Node.js.

    Original Link

    Mocking SecurityContext in Jersey Tests

Jersey makes it easy to write integration tests for REST APIs written with Jersey: just extend the JerseyTest class and go for it.

I ran into an issue where I had to mock a SecurityContext so that it includes a specific UserPrincipal. The challenge is that, in tests, Jersey wraps the SecurityContext in its own class, SecurityContextInjectee, so I have to add my SecurityContext mock to this wrapper class. Let me demonstrate with an example.

Let's say I have the following Jersey resource:

    @Path("hello/world")
    public class MyJerseyResource { @GET public Response helloWorld(@Context final SecurityContext context) { String name = context.getUserPrincipal().getName(); return Response.ok("Hello " + name, MediaType.TEXT_PLAIN).build(); } }
    

In my test, I have to mock the SecurityContext so that a predefined user principal can be used during the tests. I use Mockito as the mocking framework. My mock looks like this:

final SecurityContext securityContextMock = mock(SecurityContext.class);
when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
    @Override
    public String getName() {
        return "Alice";
    }
});
    

To add this mocked SecurityContext to the wrapper class SecurityContextInjectee, I have to configure a ResourceConfig with a modified ContainerRequestContext in my Jersey test. The mocked SecurityContext can be set in this modified ContainerRequestContext, and it will then be used in the wrapper class:

@Override
public Application configure() {
    final SecurityContext securityContextMock = mock(SecurityContext.class);
    when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
        @Override
        public String getName() {
            return "Alice";
        }
    });

    ResourceConfig config = new ResourceConfig();
    config.register(new ContainerRequestFilter() {
        @Override
        public void filter(final ContainerRequestContext containerRequestContext) throws IOException {
            containerRequestContext.setSecurityContext(securityContextMock);
        }
    });
    return config;
}
    

Then, the whole test for my resource looks like this:

public class MyJerseyResourceTest extends JerseyTest {

    @Test
    public void helloWorld() throws Exception {
        Response response = target("hello/world").request().get();

        assertThat(response.getStatus()).isEqualTo(HttpStatus.SC_OK);
        assertThat(response.readEntity(String.class)).isEqualTo("Hello Alice");
    }

    @Override
    public Application configure() {
        final SecurityContext securityContextMock = mock(SecurityContext.class);
        when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
            @Override
            public String getName() {
                return "Alice";
            }
        });

        ResourceConfig config = new ResourceConfig();
        config.register(new ContainerRequestFilter() {
            @Override
            public void filter(final ContainerRequestContext containerRequestContext) throws IOException {
                containerRequestContext.setSecurityContext(securityContextMock);
            }
        });
        return config;
    }
}
    

Do you have a smarter solution for this problem? Let me know by writing a comment below.

    Original Link

    Creating a REST API: Database Basics

    With the web server in place, it’s time to look into some database basics. As mentioned in the parent post, this series will use the Oracle Database driver/API for Node.js (node-oracledb) to interact with the database. In this post, you’ll create a module that’s responsible for starting up and shutting down a database connection pool. You’ll also add a function that simplifies executing simple statements by getting and releasing connections from the pool automatically.

    Please note: This post is part of a series on creating a REST API with Node.js on Oracle Database. See that post for details on the project and links to other parts. Get the code here.

    Starting Up the Connection Pool

    Generally speaking, there’s some overhead involved with establishing a connection to a database. When apps use many connections for short periods of time, Oracle recommends using a connection pool. Connection pools reduce overhead by establishing groups of connections which are reused many times – this can dramatically increase performance and scalability.

Because node-oracledb is built on top of the OCI client libraries, it has built-in support for creating OCI pools, which work client-side and have excellent performance characteristics. To create a connection pool, start by creating a new configuration file named database.js in the config directory. Copy and paste the following code into the file and save your changes.

module.exports = {
  hrPool: {
    user: process.env.HR_USER,
    password: process.env.HR_PASSWORD,
    connectString: process.env.HR_CONNECTIONSTRING,
    poolMin: 10,
    poolMax: 10,
    poolIncrement: 0
  }
};
    

    As was the case with the config/webserver.js file, this file allows some properties to be set via environment variables. Using environment variables provides flexibility when deploying the app to different environments and helps keep passwords and other sensitive information out of source code. Run the following commands from a terminal to set the required environment variables and ensure they’re available in future terminal sessions.

    echo "export HR_USER=hr" >> ~/.bashrc
    echo "export HR_PASSWORD=oracle" >> ~/.bashrc
    echo "export HR_CONNECTSTRING=0.0.0.0/orcl" >> ~/.bashrc
    source ~/.bashrc
    

    You may have noticed that the poolMin and poolMax values were the same and that poolIncrement was set to 0. This will create a pool with a fixed size that requires fewer resources to manage – a good idea for pools that get consistent usage.

    While Node.js is often described as single-threaded, it does have a thread pool available for certain operations that would otherwise block the main-thread running the JavaScript code. This thread pool is used by node-oracledb to run all of its asynchronous operations, such as getting connections and executing SQL and PL/SQL code. However, the default size of the thread pool is 4. If you want all ten connections in the pool to be able to work at the same time, you’d have to increase the number of threads accordingly.

    The environment variable UV_THREADPOOL_SIZE can be used to adjust the size of the thread pool. The value of UV_THREADPOOL_SIZE can be set before running the Node.js app or from within, but it must be set before the first call that uses the thread pool is made. This is because the thread pool is created when it’s first used and once created, its size is fixed. Open the index.js file in the root of the application and add the following lines after the first line (which brings in the web server module).

    // *** line that requires services/web-server.js is here ***
    const dbConfig = require('./config/database.js');
const defaultThreadPoolSize = 4;

// Increase the thread pool size by poolMax
process.env.UV_THREADPOOL_SIZE = dbConfig.hrPool.poolMax + defaultThreadPoolSize;
    

    Now that the thread pool is sized appropriately, you can move on to the database module. Create a new file in the services directory named database.js. Copy and paste the following code into it and save your changes.

const oracledb = require('oracledb');
const dbConfig = require('../config/database.js');

async function initialize() {
  const pool = await oracledb.createPool(dbConfig.hrPool);
}

module.exports.initialize = initialize;
    

    This module first brings in node-oracledb and the configuration file. Next, an async function named initialize is defined and later exposed via the module.exports object. The initialize function creates a connection pool which is stored in an internal connection pool cache as the “default” pool.

    Now you need to wire things up so that the connection pool is started before opening the web server. Return to the index.js file and add the following line below line 1.

    // *** line that requires services/web-server.js is here ***
    const database = require('./services/database.js');
    

    Then, add the following try block within the startup function just before the existing try block that starts the web server.

try {
  console.log('Initializing database module');

  await database.initialize();
} catch (err) {
  console.error(err);

  process.exit(1); // Non-zero failure code
}

// *** existing try block in startup here ***
    

    At this point, you can install node-oracledb and test the code so far. Run the following commands in the terminal from the hr_app directory.

    npm install oracledb -s
    node .
    

If you see the messages indicating that the database module and the web server started up, then congratulations — you now have a connection pool running! I'll show you that it's working in the last part of this post, but before that, you need to add some code to ensure the application shuts down cleanly.

    Shutting Down the Connection Pool

    If you shut down the application now (using ctrl + c as before), the Node.js process will be killed before the connection pool is closed. While all the related database processes should be cleaned up automatically, it’s best to explicitly close the connection pool before exiting the Node.js process.

    Return to the services/database.js file, add the following lines of code to the end, and then save your updates.

// *** previous code above this line ***

async function close() {
  await oracledb.getPool().close();
}

module.exports.close = close;
    

    The close function uses the oracledb.getPool() method to synchronously retrieve the default pool and then invokes the close method on the pool to close it.

    To invoke the close function at the right time, add the following lines of code to the index.js file inside the shutdown function, just after the existing try block that stops the web server.

// *** existing try-catch block in shutdown here ***

try {
  console.log('Closing database module');

  await database.close();
} catch (e) {
  console.log('Encountered error', e);

  err = err || e;
}
    

    If you rerun and shut down the application again, you should see that the database module closes after the web server closes, but before the process exits.

    Simplifying Simple CRUD Operations

    Executing SQL or PL/SQL code with node-oracledb is typically a three-step process: get a connection, execute the code, then release the connection. If all you want to do is a single call to execute (no multi-step transaction needed), then getting and releasing a connection can feel like boilerplate code. I like to create a function that does all three operations with a single call. Return to the services/database.js file, add the following code to the bottom, then save your changes.

// *** previous code above this line ***

function simpleExecute(statement, binds = [], opts = {}) {
  return new Promise(async (resolve, reject) => {
    let conn;

    opts.outFormat = oracledb.OBJECT;
    opts.autoCommit = true;

    try {
      conn = await oracledb.getConnection();

      const result = await conn.execute(statement, binds, opts);

      resolve(result);
    } catch (err) {
      reject(err);
    } finally {
      if (conn) { // conn assignment worked, need to close
        try {
          await conn.close();
        } catch (err) {
          console.log(err);
        }
      }
    }
  });
}

module.exports.simpleExecute = simpleExecute;
    

    Typically, you wouldn’t use the database module in the web server module, but you’ll add it now just to ensure it’s working correctly. Open the services/web-server.js file and add the following line under the existing constant declarations at the top.

    // line that requires ../config/web-server.js here
    const database = require('./database.js');
    

    Next, use the following code to replace the entire app.get handler that responds with “Hello, World!” (all three lines).

// *** line that adds morgan to app here ***

app.get('/', async (req, res) => {
  const result = await database.simpleExecute('select user, systimestamp from dual');
  const user = result.rows[0].USER;
  const date = result.rows[0].SYSTIMESTAMP;

  res.end(`DB user: ${user}\nDate: ${date}`);
});
    

    The new handler is using the database module’s simpleExecute function to fetch the current user and systimestamp values from the database. The values are then used in a template literal to respond to the client with a dynamic message.

    Start the application again and navigate Firefox to localhost:3000. You should see something like the following image.
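The response is plain text along these lines (the exact user and timestamp depend on your environment):

DB user: HR
Date: <current database timestamp>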

    If you see a message like that, then your database module is in good shape. In the next post, you will continue to build out the API by adding routing, controller, and database logic for a GET request.

    Original Link

    Automated Testing for REST APIs

Integration testing is the phase of software testing in which individual software modules are combined and tested as a group instead of testing each class independently. This can be achieved easily by using JUnit for backend code and Selenium for the UI. Both kinds of tests can be part of the build/CI system so that reports are visible and the build can be passed or failed accordingly.

Since all of us are now writing or maintaining RESTful microservices, and these services/APIs are exposed to the web and distributed over different networks, they are vulnerable to risks and security threats that affect the processes built on them. Hence, testing becomes necessary to ensure they perform correctly. To test these APIs, it's important to automate REST API test cases instead of relying on manual testing. This tutorial focuses on the basic principles, mechanics, and a few ways of testing a REST API. For simplicity, the GitHub REST API will be used here.

There are various technologies and tools available; a few of them are Apache HTTP Client, REST-assured, SoapUI, Postman, etc. Out of all these, I will be describing Apache HTTP Client, REST-assured, and SoapUI.

    This kind of testing will usually run as a late step in a Continuous Integration process, consuming the REST API after it has already been deployed.

    When testing a REST API, the tests should focus on:

    • HTTP response code

    • Response body – JSON, XML

    •  HTTP headers in the response

    1. Writing Test Cases With Apache HTTP Client

    HttpClient provides an efficient, up-to-date, and feature-rich package implementing the client side of the most recent HTTP standards and recommendations.

    • HTTP response code

@Test
public void validStatusCode() throws IOException {
    HttpUriRequest request = new HttpGet("https://api.github.com/events");

    HttpResponse httpResponse = HttpClientBuilder.create().build().execute(request);

    Assert.assertThat(httpResponse.getStatusLine().getStatusCode(), equalTo(HttpStatus.SC_OK));
}
    
    • Response body & header

@Test
public void responseBody() throws IOException {
    String jsonMimeType = "application/json";
    HttpUriRequest request = new HttpGet("https://api.github.com/events");

    HttpResponse response = HttpClientBuilder.create().build().execute(request);

    String mimeType = ContentType.getOrDefault(response.getEntity()).getMimeType();
    Event[] events = new ObjectMapper().readValue(response.getEntity().getContent(), Event[].class);

    Assert.assertEquals(jsonMimeType, mimeType);
    // more assert statements can be added here
}

// @JsonIgnoreProperties is added because ObjectMapper().readValue(...) would otherwise
// throw an exception if the response contains properties not present in this class.
@JsonIgnoreProperties(ignoreUnknown = true)
class Event {
    private String type;
    private long id;
    private Repo repo;
    // setters and getters for all properties go here
}

@JsonIgnoreProperties(ignoreUnknown = true)
class Repo {
    private long id;
    private String name;
    // setters and getters for all properties go here
}
    

    2. Writing Test Cases With rest-assured

REST-assured is a Java DSL (domain-specific language) for simplifying the testing of REST-based services built on top of HTTP Builder. It supports POST, GET, PUT, DELETE, OPTIONS, PATCH, and HEAD requests and can be used to validate and verify the responses of these requests.

    • HTTP response code, response body & header.

@Test
public void getStatusWithRestAssured() {
    Event[] events = RestAssured.get("https://api.github.com/events").then()
            .statusCode(200).assertThat().contentType(ContentType.JSON)
            .body("", CoreMatchers.notNullValue())
            .extract().as(Event[].class);

    // more assert statements go here
}
    

    With rest-assured, various test scenarios can be covered in a very simple way. More details about rest-assured are available here.

    3. Writing Test Cases With SoapUI

SoapUI is an open source, cross-platform testing tool. It can automate functional, regression, compliance, and load testing of both SOAP and REST web services. It comes with an easy-to-use graphical interface and supports industry-leading technologies and standards to mock and simulate the behavior of web services.

    Below are the steps needed to set it up and details about each step are available here.

    1. Creation of soapUI test project.
    2. Defining endpoints.
    3. Test case and test suite creation.
    4. Addition of test steps for endpoints.
    5. Generating the project descriptor.

Once we are done with the above steps, create a Maven project with the plugin below added to the POM. The snippet assumes that the name of the project descriptor file is project.xml.

<plugin>
    <groupId>com.smartbear.soapui</groupId>
    <artifactId>soapui-maven-plugin</artifactId>
    <version>5.2.1</version>
    <configuration>
        <projectFile>${basedir}/project.xml</projectFile>
    </configuration>
    <executions>
        <execution>
            <id>soapui-test</id>
            <phase>integration-test</phase>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
    

If the plugin is not available in the default Maven repository, you will need to add the following repository:

<pluginRepositories>
    <pluginRepository>
        <id>smartbear-sweden-plugin-repository</id>
        <url>http://www.soapui.org/repository/maven2</url>
    </pluginRepository>
</pluginRepositories>
    

    Run the following Maven command to run all your tests:

    mvn clean integration-test
    

    Original Link

    Accessing Relational, Big Data, and SaaS data from NativeScript

Most of us have our data residing in relational databases like SQL Server, DB2, Oracle, or MySQL, in a Big Data ecosystem using any of the available flavors like Apache or Cloudera, or in cloud-based systems like Salesforce or Google Analytics. While building your NativeScript application, you may need to access these data sources from anywhere, and you need a REST API to do that. Of course, you can go and build your own REST API, but then you need to focus on scaling, performance issues, and maintaining the API, and you end up spending a lot of energy unnecessarily.

What if you could generate a standards-based REST API just by configuring your connection parameters, without worrying about scaling, security, or performance issues? Progress DataDirect Hybrid Data Pipeline (HDP) does exactly that. Progress DataDirect Hybrid Data Pipeline is a lightweight, self-service software designed to allow applications to access data from data sources that are in the cloud or on-premises. It offers:

    • A standard interface – ODBC, JDBC, or OData (REST) – to access any of the data source types we support – cloud, SQL, Big Data, and NoSQL.
    • Firewall-friendly access to any on-premises data source using DataDirect’s On-Premise connector without changing any firewall policies.
    • Highly Secure – All customer-sensitive data elements (including remote credential or database pairings stored) are protected by encryption, both at rest (AES-256) and in transit (SSL/TLS)

    To learn more about Hybrid Data Pipeline, I recommend you watch this video.

    In this tutorial, we will be showing you how you can generate an OData API for your own database using HDP and use it in your NativeScript application to get real-time access to your data. I will be using a SQL Server database and using the open source chinook dataset. Chinook also has scripts for other databases like Oracle, DB2, and MySQL. If you use the scripts from this project, regardless of database, the NativeScript application in this tutorial will work.

    To learn more about OData, please visit odata.org

    Generating a REST API for your database

    To get started, you need to download and install Hybrid Data Pipeline. You can do this on your local machine, VM, or a server.

    In case you have trouble with the installation, please follow this tutorial on how to install it or visit the documentation.

    Once you have completed the installation, go to http://localhost:8080 and you should see a login page as shown below:

The default credentials are d2cadmin/d2cadmin. Log in to the portal, go to the Data Sources tab, and click on the New Data Source button. You should see a bunch of supported data stores:

    Click on SQL Server (or your own database) and you should now see a connection configuration page as shown below. Fill it in with the connection information for your database and click on the Test Connect button to verify the connection:

    Now that you have a successful connection, let’s work on generating an OData API for your database. Go to the OData tab and click on Configure Schema:

    On the next screen, you will be asked to choose your schema. If you are using SQL Server choose dbo as your schema. As you select the schema, you should now see a list of all the tables from the Chinook dataset as shown below. Select all tables and click on Save & Close button:

    Note: If you are not seeing the tables as shown below, the NativeScript application in this tutorial will not work for you.

    After you have saved it, you should be back at the OData tab page. Copy the OData Access URI and click on Save to save all the changes made to this data source.

That's it! You now have an OData REST API for your database, without having to write a single line of code.

    Open your browser or Postman and try a GET Request and use basic authentication, with the credentials being the same as your login credentials for Hybrid Data Pipeline. You should see a response like below, showing all the tables available via this endpoint:

    { "@odata.context": "http://<host>:8080/api/odata4/pocketmusic/$metadata", "value": [{ "name": "Albums", "url": "Albums" }, { "name": "Artists", "url": "Artists" }, { "name": "Customers", "url": "Customers" }, { "name": "Employees", "url": "Employees" }, { "name": "Genres", "url": "Genres" }, { "name": "Invoices", "url": "Invoices" }, { "name": "InvoiceLines", "url": "InvoiceLines" }, { "name": "MediaTypes", "url": "MediaTypes" }, { "name": "Playlists", "url": "Playlists" }, { "name": "PlaylistTracks", "url": "PlaylistTracks" }, { "name": "Tracks", "url": "Tracks" } ]
    }
    

    Note: You will probably see security errors due to lack of an SSL certificate in your local HDP installation and that’s normal. For testing purposes, you can fall back to http and use port 8080 to circumvent the issue. Do not use this workaround for production as it is insecure.

    Creating the NativeScript app

    Now that we have a backend API needed for the app, let’s start with creating the application. With the dataset we currently have, let’s build a Music Store app, where you can Browse, Search, and Buy/Refund music.

    You can find the source code for this application on GitHub and you can refer to it when developing your own app.

I didn't want to start from scratch, as this was my first time building the app and I wanted a side drawer navigation application. The best way to do this is to use NativeScript Sidekick, which helps you generate starter templates to get started easily. Once it had generated the template, I opened the project using my favorite editor, Visual Studio Code, to do the coding.

    Using that template, start renaming your side drawer navigation items to Library, Browse Store, Search Store, and Settings in /shared/MyDrawer.xml and your app should now look like this:

    Let’s start with Browse Store as it will list all the Albums available and when you click on each album, it will list all the Tracks available in that Album. To give you an idea, below are screenshots of what we will implement:

    When you open Browse Store – Display All Albums

    When you Open Album -> Display all Tracks

For the first interaction of “Browse Store”, it needs to display all the albums available in a ListView. The easiest way to implement this is to use the ListView component in NativeScript UI. You will have to install the NativeScript UI package by running the following command:

    npm i nativescript-pro-ui
    

    Once NativeScript UI is installed, in browse/browse-page.xml, use the following code to display Album name, Artist Name, a hidden Album Id and Album art for that Album:

    <lv:RadListView id="listview" class="list-group" items="{{ items }}" selectionBehavior="Press" multipleSelection="false" itemSelected="onItemSelected"> <lv:RadListView.listViewLayout> <lv:ListViewLinearLayout scrollDirection="Vertical"/> </lv:RadListView.listViewLayout> <lv:RadListView.itemTemplate> <GridLayout rows="auto" columns="auto, *" class="album-browse"> <Image src="{{ '~/images/' + AlbumId + '.jpg' }}" row="0" col="0" width="50" height="50" class="thumb img-rounded"/> <StackLayout class="list-group-item" row="0" col="1"> <Label text="{{ Title }}" class="list-group-item-heading label-track-name" /> <Label text="{{ Name }}" textWrap="true" class="list-group-item-text" /> <Label text="{{ AlbumId }}" class="list-group-item-text list-albumid" /> </StackLayout> </GridLayout> </lv:RadListView.itemTemplate>
    </lv:RadListView>
    

The next step is to get the data and bind it to the ListView. If you look at the schema, we have an Album table with the album ID and title and an Artist table with the artist ID and name. For this view, we need both Album and Artist data. In general, if you were dealing with any other API, you would have to get the Album data first, then get the Artist data, and blend the two to get the result.

OData, however, offers a nifty feature called $expand, which lets you expand to related entities if you have defined foreign key relationships between these tables in your database. Here is the OData query I used to fetch Album and Artist data in a single request using the $expand option.

    http://<host>:8080/api/odata4/pocketmusic/Albums?$expand=Artist
    

    And the response should be like this:
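A trimmed sketch of the JSON shape (the values shown are illustrative placeholders from the Chinook data):

{
    "@odata.context": "http://<host>:8080/api/odata4/pocketmusic/$metadata#Albums",
    "value": [
        {
            "AlbumId": 1,
            "Title": "For Those About To Rock We Salute You",
            "ArtistId": 1,
            "Artist": {
                "ArtistId": 1,
                "Name": "AC/DC"
            }
        }
    ]
}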

    In your browse-page.js file, you make a request to this endpoint and bind the data to the listview as shown below in the code:

fetch(odata_URL, init).then(function (response) {
    if (!response.ok) {
        var toast = Toast.makeText(response.status);
        toast.show();
    }

    return response.json().then(function (json) {
        var albumData = json.value;

        for (var i = 0; i < albumData.length; i++) {
            var album = albumData[i];

            listItems.push({
                AlbumId: album.AlbumId,
                Title: album.Title,
                Name: album.Artist.Name
            });
        }

        pageData.set("items", listItems);
        appSettings.setString("albumData", JSON.stringify(listItems._array));
    });
}).catch(function (error) {
    var toast = Toast.makeText("Something bad happened: " + error);
    toast.show();
});
    

Once you have done this, you should see the ListView populated with albums and artists. In a similar way, you can implement the next action, where clicking on an album shows all the tracks in that album.

Another interesting feature of OData is text $search on columns. To use it, you have to create indexes on the text columns you intend to search. For this application, I wanted users to be able to search album names, so if you are using SQL Server, you must create a non-clustered index on the Title column in the Albums table.

    To enable $search, head back to the Hybrid Data Pipeline “configure OData” page and enable advanced settings as shown below.

    Under the Settings tab, choose search options as “Substring” as shown below as well:

    Now go back to the Columns tab and click on tiny search button next to the Title column to enable search as shown below:

Save & Close the configuration to enable OData search. To test it out, run this OData query, which searches for an album with “Billy” in its title:

    http://<host>:8080/api/odata4/pocketmusic/Albums?$search=Billy 
    

    Using this endpoint, now you can implement search feature in this application and display the results using a ListView as shown below:

    Reminder: You can find the source code for this application on GitHub!

    Summary

    I hope this tutorial helped you to understand how you can RESTify any of your databases using Progress DataDirect Hybrid Data Pipeline and use it with a NativeScript application. Feel free to contact us in GitHub or in the comments if you have any questions.

    Original Link

    Testing REST APIs Using the ZeroCode JSON-Based BDD Test Framework

    Get rid of a lot of boilerplate code! Learn how the ZeroCode testing library will make your life easier.

    Why?

• It's simple and easy to use, with no clutter or boilerplate code, so testers, developers, and other stakeholders can understand what's being tested. A great time saver!

    • Automate and write your end-to-end tests and integration-tests at the speed of writing unit tests.

Imagine you have a REST API to test, with the following behavior:

Usecase scenario: REST API to get an employee's details,
URL: http://host:port/api/v1/persons/1001,
Operation: GET,
Expected JSON response body:
{
    "id": 1001,
    "name": "Larry P",
    "job": "Full Time"
},
Expected response status: 200
    

    And your happy scenario test case code looks like below:

    { "name": "get_emp_details", "url": "http://host:port/api/v1/persons/1001,", "operation": "GET", "request": {}, "assertions": { "status": 200, "body": { "id": 1001, "name": "Larry P", "job": "Full Time" } }
    }
    

    Your negative scenario test case code looks like below:

    { "name": "get_not_existing_emp_details", "url": "http://host:port/api/v1/persons/9999", "operation": "GET", "request": {}, "assertions": { "status": 404, "body": { "message": "No such employee exists" } }
    }
    

    And if you need them together as a scenario, then the code looks like below:

    { "scenarioName": "GET Employee Details Happy and Sad path", "steps": [ { "name": "get_emp_details", "url": "http://host:port/api/v1/persons/1001,", "operation": "GET", "request": {}, "assertions": { "status": 200, "body": { "id": 1001, "name": "Larry P", "job": "Full Time" } } }, { "name": "get_non_existing_emp_details", "url": "http://host:port/api/v1/persons/9999", "operation": "GET", "request": {}, "assertions": { "status": 404, "body": { "message": "No such employee exists" } } } ]
    }
    
    

Then you just stick these into a JSON file, for example named "get_happy_and_sad.json", anywhere in the test/resources folder. Then run the code like below, pointing to that JSON file, and you are done with testing.

@RunWith(ZeroCodeUnitRunner.class)
@HostProperties(host = "http://localhost", port = 8088, context = "")
public class MyRestApiTest {

    @Test
    @JsonTestCase("get_happy_and_sad.json")
    public void testGetHappyAndSad() throws Exception {
    }
}
    

    How?

<dependency>
    <groupId>org.jsmart</groupId>
    <artifactId>zerocode-rest-bdd</artifactId>
    <version>1.1.17</version>
</dependency>
    
    • Hello World and samples are available to download or clone.
      • You can organize and arrange the tests to suit your requirements, by folder/feature/release.
  • You can add as many tests as you want by just annotating the test method. See here for some examples.

  • You can assert the entire JSON in the assertion block, however complex and hierarchical the structure might be, with a copy-paste of the entire JSON. Hassle free, with no serialization/deserialization needed!

      • You can also use only the particular section or even an element of a JSON using a JSON path like  $.get_emp_details.response.body.id, which will resolve to 1001 in the above case.

      • You can test the consumer contract APIs by creating runners specific to clients.

    • Examples

      • Working examples of various use cases are here. 

      • You can use placeholders for various outcomes if you need.

      • Examples of some features are here:

      Test Report

      Test reports are generated into the /target folder every time the tests are run. Sample reports are here in the .html spike chart and .csv tabular format.

      Test Logs

Test logs are generated in the console as well as in a log file in a readable JSON format, target/logs/zerocode_rest_bdd_logs.log. In case of a test failure, it lists which field or fields didn't match, with their JSON Path, in a tree structure.

For example, see the sample logs for a passed test (Test Passed) and for a failed test (Test Failed).

      Source Code in GitHub

      Visit the source here in GitHub ZeroCode.

      Contribute

      Raise issues and contribute to improve the ZeroCode library and add more essential features you need by talking to the author.

    Original Link

    How to Easily Build Angular2 Database Apps

Angular2 is an updated framework for dynamic web apps built upon and expanding the principles of AngularJS. The CData API Server lets you generate REST APIs for 80+ data sources, including both on-premises and cloud-based databases. This article will walk through setting up the CData API Server to create a REST API for a SQLite database and creating a simple single-page application (SPA) that has live access to the database data. The SPA will dynamically build and populate an HTML table based on the database data. While this article steps through most of the code, you can download the sample Angular2 project and SQLite database to see the full source code and test the functionality for yourself.

    Setting Up the API Server

    If you have not already done so, you will need to download the CData API Server. Once you have installed the API Server, you will need to run the application, configure the application to connect to your data (the instructions in this article are for the included sample database), and configure the application to create a REST API for any tables you wish to access in your SPA.

    Enable CORS

If the Angular2 web app and API Server are on different domains, then Angular2 will generate cross-domain requests. This means that CORS (cross-origin resource sharing) must be enabled on any servers queried by Angular2 web apps. We can enable CORS for the API Server by navigating to the Server tab on the SETTINGS page of the API Server. You will need to adjust the following settings:

    • Click the checkbox to Enable cross-origin resource sharing (CORS).

    • Either click the checkbox to Allow all domains without ‘*’ or specify the domain(s) that are allowed to connect in Access-Control-Allow-Origin.

    • Set Access-Control-Allow-Methods to GET,PUT,POST,OPTIONS.

    • Set Access-Control-Allow-Headers to authorization.

    • Click Save Changes.

    Configure Your Database Connection

    To configure the API Server to connect to your database, you will need to navigate to the Connections tab on the SETTINGS page. Once there, click Add Connection. For this article, we will connect to a SQLite database. When you configure the connection, you can name your connection, select SQLite as the database, and fill in the Database field with the full path to your SQLite database (the included database is chinook.db from the SQLite tutorial).


    Configure a User

    Next, create a user to access your database data through the API Server. You can add and configure users on the Users tab of the SETTINGS page. Since we are only creating a simple SPA for viewing data, we will create a user that has read-only access. Click +Add, give the user a name, select GET for the Privileges, and click Save Changes.


    As you can see in the screenshots, we already had a user configured with read and write access. For this article, we will access the API Server with the read-only user, using the associated authtoken.


    Accessing Tables

    Having created a user, we are ready to enable access to the database tables. To enable tables, click the Add Resources button on the Resources tab of the SETTINGS page. Select the data connection you wish to access and click Next. With the connection selected, you can begin enabling resources by clicking on a table name and clicking Next. You will need to add resources one table at a time. In this example, we enabled all of the tables.


    Sample URLs for the REST API

    Having configured a connection to the database, created a user, and added resources to the API Server, we now have an easily accessible REST API based on the OData protocol for those resources. Below, you will see a list of tables and the URLs to access them. For information on accessing the tables, you can navigate to the API page for the API Server. For the URLs, you will need the address and  port of the API Server. Since we are working with Angular2, we will append the  @json parameter to the end of URLs that do not return JSON data by default.

Table                       URL
Entity (table) list         http://address:port/api.rsc/
Metadata for table albums   http://address:port/api.rsc/albums/$metadata?@json
albums data                 http://address:port/api.rsc/albums

As with standard OData feeds, if you wish to limit the fields returned, you can add a $select parameter to the query, along with other standard URL parameters, such as $filter, $orderby, $skip, and $top.
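For example, a single request combining several of these options (the column names shown are illustrative) might look like:

http://address:port/api.rsc/albums?$select=AlbumId,Title&$filter=ArtistId eq 1&$orderby=Title&$top=10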

    Building a Single Page Application

    With the API Server setup completed, we are ready to build our SPA. We will walk through the source files for the SPA contained in the ZIP file, making note of any relevant sections of code as we go along. Several of the source files are based loosely on the Angular2 tutorial from angular.io.

    Index.html

    This is the home page of our SPA and the source code mainly consists of script elements to import the necessary Angular2 libraries.

    App/Main.ts

    This TypeScript file is used to bootstrap the App.

    App/Rxjs-Extensions.ts

    This TypeScript file is used to import the necessary Observable extensions and operators.

    App/App.module.ts

    This TypeScript file is used to create a class that can be used in other files to import the necessary modules to create and run our SPA.

    App/App.component.css

This file creates CSS rulesets to modify the h1, h2, th, and td elements in our HTML.

    App/App.component.html

    This file is the template for our SPA. The template consists of a title, a drop-down to select an available table, a drop-down to (multi) select columns in the table to be displayed, a button to retrieve the data, and a table for the data. Different sections are enabled/disabled based on criteria in *ngIf directives and the menus and table are built dynamically based on the results of calls to the API Server, using the *ngFor directive to loop through the returned data.

    All of the calls to the API Server and assignment of values to variables are made in the AppComponent and AppService classes.

    <h1>{{title}}</h1>
    <br>
    <label>Select a Table</label>
    <br>
    <select [(ngModel)]="selectedTable" (change)="tableChanged()"> <option *ngFor="let sel_table of availableTables" [value]="sel_table">{{sel_table}}</option>
    </select>
    <br>
    <br>
    <label>Select Columns</label>
    <br>
    <select *ngIf="selectedTable" [(ngModel)]="selectedColumns" (change)="columnsChanged()" multiple> <option *ngFor="let sel_column of availableColumns" [value]="sel_column">{{sel_column}}</option>
    </select>
    <br>
    <br>
    <button *ngIf="selectedTable && selectedColumns" (click)="dataButtonClicked()">Get [{{selectedTable}}] Data</button>
    <br>
    <br>
    <table *ngIf="selectedTable && selectedColumns"> <thead> <tr> <th *ngFor="let column of selectedColumns">{{ column }}</th> </tr> </thead> <tbody> <tr *ngFor="let row of tableData"> <td *ngFor="let column of selectedColumns">{{ row[column] }}</td> </tr> </tbody>
    </table>
    

    app/app.service.ts

    This TypeScript file builds the service for retrieving data from the API Server. In it, we have functions for retrieving the list of tables, retrieving the list of columns for a specific table, and retrieving data from a table. We also have a class that represents the metadata of a table as returned by the API Server.

    API_Table

    The metadata returned by the API Server for a table includes the table’s name, kind, and URL. We only use the name field, but pass the entire object in the event that we need the other information if we decide to build upon our SPA.

export class API_Table {
  name: string;
  kind: string;
  url: string;
}
    

    constructor()

    In the constructor, we create a private instance of the Http class and set the Authorization HTTP header based on the user/authtoken credentials for the user we created earlier. We then include this header in our HTTP requests.

constructor(private http: Http) {
  this.headers.append('Authorization', 'Basic ' + btoa(this.userName + ":" + this.authToken));
}
    

    getTables()

    This function returns a list of the tables. The list is retrieved from the API Server by making an HTTP GET request, including the Authorization header, to the base URL for the API Server: http://localhost:8153/api.rsc

    getTables(): Promise&lt;API_Table[]&gt; { return this.http.get(this.baseUrl, {headers: this.headers}) .toPromise() .then(response => response.json().value ) .catch(this.handleError);
    }
    

    getColumns()

    This function returns a list of columns for the table specified by tableName. Since the $metadata endpoint returns XML formatted data by default, we pass the @json parameter in the URL to ensure that we get JSON data back from the API Server. Once we have the JSON data, we can drill down to retrieve the list of column names.

getColumns(tableName: string): Promise<string[]> {
  return this.http.get(`${this.baseUrl}/${tableName}/$metadata?@json`, {headers: this.headers})
    .toPromise()
    .then(response => response = response.json().items[0]["odata:cname"])
    .catch(this.handleError);
}
    

    getTableData()

    This function returns the rows of data for the specified table and columns. We pass the tableName in the URL and then pass the list of columns (a comma-separated string) as the value of the $select URL parameter.

getTableData(tableName: string, columnList: string): Promise<Object[]> {
  return this.http.get(`${this.baseUrl}/${tableName}/?$select=${columnList}`, {headers: this.headers})
    .toPromise()
    .then(response => response = response.json().value)
    .catch(this.handleError);
}
    

    app/app.component.ts

    In this TypeScript file, we have defined the functions that react to the events in the SPA; within these functions, we call the functions from the AppService and use the results to populate the various elements of the SPA. These functions are fairly straightforward, assigning values to the different variables as necessary.

    ngOnInit()

    In this function, we call the getTables function from our AppService. Since getTables returns the raw data objects from our API Server table query, we need to push only the name field from each result into the array of available tables and not push the entire object.

ngOnInit(): void {
  this.appService
    .getTables()
    .then(tables => {
      for (let tableObj of tables) {
        this.availableTables.push(tableObj.name);
      }
    });
}
    

    tableChanged()

    This function is called whenever the user selects a different table from the drop-down menu in the SPA. The function makes a call to the API Server to retrieve the list of columns for the given table, which populates another drop-down menu.

tableChanged(): void {
  this.appService
    .getColumns(this.selectedTable)
    .then(columns => this.availableColumns = columns);

  this.selectedColumns = [];
}
    

    columnsChanged()

    This function is called whenever the user changes which columns are selected from the drop-down menu. It simply clears the table data so that we do not display an empty table if the columns selected after the button is clicked are different from those originally selected.

columnsChanged(): void {
  this.tableData = [];
}
    

    dataButtonClicked()

    This function serves to join the array of selected columns into a comma-separated string, as required by the $select parameter in an OData query, and pass the table name and list to the getTableData function in the AppService. The resulting data is then used to populate the HTML table.

dataButtonClicked(columnList: string): void {
  columnList = this.selectedColumns.join(',');

  this.appService
    .getTableData(this.selectedTable, columnList)
    .then(data => this.tableData = data);
}
    

    Running the Single Page Application

    With our connection to data configured and the source files for the SPA reviewed, we are now ready to run the single page application. You will need to have node.js and npm installed on your machine in order to run the SPA. Included in the sample download is a pre-configured package.json file. You can install the needed modules by running npm install from the command line at the root directory for the SPA. To start the SPA, simply run npm start in the same directory.

    When the SPA launches, you will see the title and a drop down menu to select a table. The list of tables is retrieved from the API Server and includes all of the tables you added as resources when configuring the API Server.


    With a table selected, the drop-down, multi-select menu for columns appears, allowing you to select the columns you wish to see in your table. You can see that as you select columns, the table headers appear.


    Once the table and columns are selected, you can click the Get [table] Data button to retrieve data from your database via the API Server. The HTML table will be populated with data based on the table and columns you selected before clicking on the button.


    Now that you have seen a basic example of connecting to your database data in dynamic web pages, visit our API Server page to read more information about the API Server and download the API Server. Start building dynamic web pages using live data from your on-premises and cloud-based databases, including SQLite, MySQL, SQL Server, Oracle, and PostgreSQL! As always, our world-class Support Team is ready to answer any questions you may have.

    Original Link

    ASP.NET Core With Couchbase: Getting Started [Video]

    ASP.NET Core is the newest development platform for Microsoft developers. If you are looking for information about plain old ASP.NET, check out ASP.NET With Couchbase: Getting Started.

    ASP.NET Core Tools to Get Started

    The following video will take you from having no code to having an HTTP REST API that uses Couchbase Server, built with ASP.NET Core.

    These tools are used in the video:

    Getting Started Video

    Original Link

    Spring Data REST and Projections

    Spring Data REST and Projections is the final post in my series on using Spring Data REST. Projections allow you to control exposure to your domain objects in a similar way to Domain Transfer Objects (DTOs).

    This post forms part of a series looking at Spring Data REST:

    What Are Projections?

    It is a common practice to use Domain Transfer Objects in REST API design as a method of separating the API from its underlying model. This is particularly relevant to Spring Data JPA REST where you may want to restrict what is visible to clients.

    Spring Data JPA REST allows you to achieve something similar through the use of Projections.

    Example

    If we stick with the example used in the previous posts, we define our JPA object as:

@Entity
public class ParkrunCourse {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    private String courseName;
    private String url;
    private Long averageTime;
}
    

    We then have our Spring Data JPA repository:

    @RepositoryRestResource
    public interface ParkrunCourseRepository extends CrudRepository<ParkrunCourse, Long> {
    }
    

Now let's say we want to restrict our ParkrunCourse JPA object and not return the id or url. We define our projection as:

    @Projection(name = "parkrunCourseExcerpt", types = ParkrunCourse.class)
    public interface ParkrunCourseExcerpt { String getCourseName(); Long getAverageTime();
    }
    

    We then need to re-define our Spring Data REST repository using the excerptProjection attribute:

    @RepositoryRestResource(excerptProjection = ParkrunCourseExcerpt.class)
    public interface ParkrunCourseRepository extends CrudRepository<ParkrunCourse, Long> {
    }
    

    The projection is then called as:

    http://localhost:8080/rest/parkrunCourses/1?projection=parkrunCourseExcerpt
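With the excerpt projection applied, the response body should contain only the projected fields, roughly like the following (the values shown are illustrative placeholders):

{
    "courseName": "Bushy Park",
    "averageTime": 1800,
    "_links": {
        "self": {
            "href": "http://localhost:8080/rest/parkrunCourses/1"
        }
    }
}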

    Discussion

This series of posts has demonstrated how Spring Data REST can be used to turn Spring Data repositories into REST APIs. We have also shown how we can control access to these REST APIs using Spring Security or control visibility using RepositoryDetectionStrategies. This post has shown how we can use projections in a similar way to Data Transfer Objects to control what clients see.

Spring Data REST offers a quick way to expose your database as a REST API without lots of boilerplate code. The downside is that you need to spend time making sure you expose the right level of access to the database and are clear about what information you intend to expose.

    I see two main use cases for this:

    • Exposing an internal database in a decoupled form: You have a number of client applications wanting access to your database and want to ensure access is more loosely coupled than JDBC.
    • Public Access Database: You have a public access database that you wish to expose on the internet. The lack of boilerplate and speed with which you can build your API makes Spring Data JPA a good fit

    Conclusions

    This post considered how you can use projections to control the view of your Spring Data repositories. We also presented some conclusions on the best use cases for Spring Data REST.

    Original Link

    Quick Start With Apache Livy

    Apache Livy is a project currently in the process of being incubated by the Apache Software Foundation. It is a service to interact with Apache Spark through a REST interface. It enables both submissions of Spark jobs or snippets of Spark code. The following features are supported:

    • Jobs can be submitted as pre-compiled jars, snippets of code, or via Java/Scala client API.

    • Interactive Scala, Python, and R shells. 

• Support for Spark 2.x and Spark 1.x, Scala 2.10, and 2.11.

    • Doesn’t require any change to Spark code.

    • Allows for long-running Spark Contexts that can be used for multiple Spark jobs by multiple clients.

    • Multiple Spark Contexts can be managed simultaneously — they run on the cluster instead of the Livy Server in order to have good fault tolerance and concurrency.

    • Possibility to share cached RDDs or DataFrames across multiple jobs and clients.

    • Secure authenticated communication.

    The following image, taken from the official website, shows what happens when submitting Spark jobs/code through the Livy REST APIs:


    This article provides details on how to start a Livy server and submit PySpark code.

    Prerequisites

    The prerequisites to start a Livy server are the following:

    • The JAVA_HOME env variable set to a JDK/JRE 8 installation.

    • A running Spark cluster.

    Starting the Livy Server

Download the latest version (0.4.0-incubating at the time this article is written) from the official website and extract the archive content (it is a ZIP file). Then set the SPARK_HOME env variable to the Spark location on the server (for simplicity here, I am assuming that the cluster is on the same machine as the Livy server, but through the Livy configuration files the connection can be made to a remote Spark cluster, wherever it is). By default, Livy writes its logs into the $LIVY_HOME/logs location; you need to manually create this directory. Finally, you can start the server:

    $LIVY_HOME/bin/livy-server
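
Note that if your Spark cluster runs on a different machine, the connection details should be configured before starting the server, normally in $LIVY_HOME/conf/livy.conf (created from the bundled livy.conf.template); a minimal sketch, assuming a standalone Spark master and the property names from that template:

# $LIVY_HOME/conf/livy.conf (illustrative values)
livy.spark.master = spark://spark-master-host:7077
livy.spark.deploy-mode = client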
    

Verify that the server is running by connecting to its web UI, which uses port 8998 by default: http://<livy_host>:8998/ui.

    Using the REST APIs With Python

    Livy offers REST APIs to start interactive sessions and submit Spark code the same way you can do with a Spark shell or a PySpark shell. The examples in this post are in Python. Let’s create an interactive session through a POST request first:

    curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" localhost:8998/sessions
    

    The  kind attribute specifies which kind of language we want to use (pyspark is for Python). Other possible values for it are spark (for Scala) or sparkr (for R). If the request has been successful, the JSON response content contains the id of the open session:

    {"id":0,"appId":null,"owner":null,"proxyUser":null,"state":"starting","kind":"pyspark","appInfo":{"driverLogUrl":null,"sparkUiUrl":null},"log":["stdout: ","\nstderr: "]}
    

     You can double-check through the web UI:


    You can check the status of a given session any time through the REST API:

    curl localhost:8998/sessions/ | python -m json.tool 
    

    Let’s execute a code statement:

    curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"2 + 2"}'
    

    The code attribute contains the Python code you want to execute. The response of this POST request contains the id  of the statement and its execution status:

    {"id":0,"code":"2 + 2","state":"waiting","output":null,"progress":0.0}
    

    To check if a statement has been completed and get the result:

    curl localhost:8998/sessions/0/statements/0
    

    If a statement has been completed, the result of the execution is returned as part of the response (data attribute):

    {"id":0,"code":"2 + 2","state":"available","output":{"status":"ok","execution_count":0,"data":{"text/plain":"4"}},"progress":1.0}
    

    This information is available through the web UI, as well:

    [Image: Livy web UI showing the completed statement and its result]

    In the same way, you can submit any PySpark code:

    curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d'{"code":"sc.parallelize([1, 2, 3, 4, 5]).count()"}' 
    

    [Image: Livy web UI showing the PySpark statement and its result]

    When you’re done, you can close the session:

    curl localhost:8998/sessions/0 -X DELETE 
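
    Or, if you are scripting in Python, the same call with requests:

    import requests

    # Close session 0 once you are finished with it
    requests.delete("http://localhost:8998/sessions/0")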
    

    And that’s it!

    Original Link

    Beyond Headless Content: Layout as a Service in dotCMS

    Expect More Than Content

    You should expect more than just content from your REST APIs. With “Layout as a Service”, or LaaS-ie (groan), you can get the benefits of a traditional CMS-driven experience with the developer friendliness of CaaS. Layout as a Service makes app/CMS integrations (including previews) extraordinarily straightforward. Scroll down to “The Goods” for example code.

    Give it a REST

    In the CMS space, RESTful access to content (Content as a Service, CaaS, Headless CMS, etc.) is all the rage. With the rise of modern JavaScript frameworks and Single Page Apps, it is pretty easy to see why. Content as a Service allows the decoupling of the management of content from the presentation of that content and gives developers access to content in a familiar JSON format. This is a huge benefit for developers as they are no longer tied to developing in what they might consider old-fashioned CMS-based page presentations. Developers can develop (read: play) with the latest modern application technologies, Angular, React or whatever and access / inject business managed content into their apps via REST. The good news is that dotCMS has had these REST endpoints for years. Hooray Developers, right?

    When Content as a Service Is Not Enough

    The problem comes along when business users need to see a whole layout in CONTEXT to manage content effectively, very much like traditional in-context editing, and when users require more control over the layout, order, and presentation of their content than a single CaaS call will allow. In fact, they need to manage something very much like a page made up of different content objects, but it needs to be machine consumable in order to be delivered properly in other applications.

    Concrete example: We recently had a customer whose business users were in charge of managing not single content objects, but lists/carousels of different content objects, with specific graphical headers, based on visitor personalization and contextual data — think Netflix movie lists. Business users needed to be able to generate lists and respond in an extremely agile way to fast-developing marketplace trends (say, the death of a beloved performer), and the business needed to be able to manage and order the lists of the carousels (managing the list of lists) so that they made sense for the visitor. And, to add insult to injury, the resulting layouts and lists needed to be displayed not as pages, but in apps and set-top boxes, some with limited HTML markup capabilities.

    Enter Layout as a Service

    Layout as a Service (LaaS) takes the best of a CMS-driven experience (easy templating, server-side contextual rendering, workflow, personalization, rule and permission-based content delivery) and marries it with the developer friendliness of Content as a Service. In dotCMS 4.2, you can call any page and receive the full page payload back as a single JSON object, including:

    • Site/Host/Channel with surrounding metadata
    • Page details, SEO Keywords, descriptions and canonical URLs
    • Template details
    • Content block details — these are a listing of the editable blocks of content that make up a page
    • Layout details, including header, footer, column and row information for grids

    Not only that — and here is the magic — you can tell dotCMS to render the results and return the rendered results as JSON to you so you can use the information to paint your screen as needed, with very little effort.

    The Goods

    Let me show by example: Take a look at the lowly “About Us” page on the dotCMS demo site: https://demo.dotcms.com/about-us/index

    While it looks simple, this page is virtual, and is actually made up of multiple content objects and content areas which come together to form the page. Feel free to alter the layout and content of this page by logging into the https://demo.dotcms.com site, going to “Browser → /about-us → index” page and clicking on it.

    https://demo.dotcms.com/dotAdmin

    U:
    P: admin

    This should take you to “Edit Mode”, which looks like this:

    You can see all the editable content, content areas (containers) and you can control and manage them, even adding dynamic widgets to them. This is all boilerplate CMS stuff. Go ahead and play around. 

    If you are interested, click on the edit template link on the left — this will let you manage the page layout on a screen like this. You can add rows, columns, headers, footers — basically control the layout:

    Once you’ve had your fill of CMS based editing, take a look at this code:

    https://gist.github.com/wezell/afede08d0fa0c7436d41555eda05185a

    This code basically calls the whole page you were managing via a RESTful API — here is the API URL:

    https://demo.dotcms.com/api/v1/page/render/about-us/index

    Which (if you are authenticated in the demo) returns the whole “About Us” page as a JSON object, including the layout and rendering, and uses it to recreate the page, layout and all, in a static JavaScript-driven app. The meat is here:

    https://gist.github.com/wezell/afede08d0fa0c7436d41555eda05185a#file-index-html-L63-L115

    The JavaScript loops over the layout, gets the rows and columns, builds the grid and then injects the innerHTML into the grid blocks. While this example is using inline styles, there is no reason you could not composite your page using whatever latest CSS/grid coolness you would want, including Bootstrap 4 or Flexbox-based layouts.

    You can see the end result rendering the https://demo.dotcms.com/about-us/index page via JavaScript on GitHub here:

    https://rawgit.com/wezell/afede08d0fa0c7436d41555eda05185a/raw/#/about-us/index
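
    If you would rather poke at the raw JSON payload before wiring up any front end, a small Python sketch along these lines works (the credentials are placeholders for the demo login mentioned above, and since the payload shape can vary between dotCMS versions, it only prints the top-level keys instead of assuming a structure):

    import requests

    # Placeholders: use the credentials you log into the demo with
    USERNAME = "your-demo-username"
    PASSWORD = "your-demo-password"

    resp = requests.get(
        "https://demo.dotcms.com/api/v1/page/render/about-us/index",
        auth=(USERNAME, PASSWORD),  # basic auth; assumes it is enabled for this user
    )
    resp.raise_for_status()

    page_json = resp.json()
    # Inspect what the page object actually contains before building any layout logic
    print(sorted(page_json.keys()))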

    That’s It?

    Well, let's look at what we've done. Out of the box, dotCMS is a Java-based content management system, and we have, with NO JAVA, created a single-page, JavaScript-driven app (developed by HTMLers), complete with content, layout design, and gridding (managed by business users), which can then be rendered selectively across different apps using a JSON/RESTful interface.

    The important point to take home is that the content managers are still empowered to change the layout/template of the “About Us” page in the source CMS, add rows, columns, edit and reorder, show/hide the header, and we can use their chosen layout to drive or hint towards our programmatic layout via the JS code above. Feel free to play around yourself. Add a row, reorder content, hide the header. It will all work.

    Now, because dotCMS is an open-source, Java-based CMS, if you don't like the way certain aspects of the layout API work, you can always write your own OSGi-based Jersey endpoint. In fact, the original POC Java code for this work is available on GitHub, but that is a blog for another day.

    Original Link

    Chronograf Dashboard Definitions

    If you have used Chronograf, you have seen how easy it is to create graphs and dashboards. And in fact, your colleagues have probably come over to your laptop to marvel at the awesomeness of your dashboards and asked you how they, too, can share in the awesomeness of your dashboard. But maybe your answer was, “Wow, I don’t know how to share my awesome dashboard with you!” Well, worry no more. In this article, I am going to show you how to download your dashboard and how others can upload your dashboard to their instance of Chronograf.

    In talking to customers, a common question I get is, "What sort of things should we be looking at when monitoring our InfluxEnterprise cluster?" Our fantastic support team probably hears that question more than they care to, so they have created a list of common queries that will help monitor and troubleshoot your cluster. I have listed those queries at the bottom of this article. In addition to the queries, it would be great to have a dashboard that is always running them.

    So, I have created my dashboard. I’m happy and it looks great.

    How do you get a copy of my dashboard? Well, Chronograf has a great REST API. If you want to take a look at what’s available, just go to http://[chronoserver]:8888/docs. In order to get the dashboard, there are a few steps to follow.

    First, find the ID of the dashboard.

    To do this, you will have to list out all the dashboards and find the ID of the dashboard in question. (I know this is not ideal, and we are looking to make this easier.) To do that, you will have to make a GET request to http://[chronoserver]:8888/chronograf/v1/dashboards. This will return a JSON array of all the dashboard definitions you have:

    { "dashboards": [ { "id": 2, "cells": [ … cell definitions …], "templates": [], "name": "My Awesome Dashboard", "links": { "self": "/chronograf/v1/dashboards/2", "cells": "/chronograf/v1/dashboards/2/cells", "templates": "/chronograf/v1/dashboards/2/templates" } }, { "id": 3, "cells": ["cells": [ … cell definitions …], "templates": [], "name": "InfluxDB Monitor", "links": { "self": "/chronograf/v1/dashboards/3", "cells": "/chronograf/v1/dashboards/3/cells", "templates": "/chronograf/v1/dashboards/3/templates" } }
    

    In this case, I can see that "My Awesome Dashboard" has an ID of 2. Now, let's get the dashboard. You could select and copy from the above output, but I have found that is sometimes error-prone — and we don't want to spend time debugging our JSON for one missing curly brace. The JSON for our dashboard, in this case, is available at http://[chronoserver]:8888/chronograf/v1/dashboards/2. You can either paste the URL into the browser and save the JSON, or use the command line:

    $ curl -X GET http://localhost:8888/chronograf/v1/dashboards/2 > MyAwesomeDashboard.json
    

    Now, send that file to your buddy. When they get it, they can upload it to their Chronograf server with the following from the command line:

    $ curl -i -X POST -H "Content-Type: application/json" \
    http://[chronoserver]:8888/chronograf/v1/dashboards \
    -d @/path/to/MyAwesomeDashboard.json
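
    If passing files around by hand gets old, the same round trip is easy to script. Here is a minimal Python sketch using the requests library (the host names are placeholders, and it assumes neither Chronograf instance requires extra authentication):

    import requests

    SOURCE = "http://localhost:8888"   # the Chronograf that has the dashboard
    TARGET = "http://buddy-host:8888"  # the Chronograf that should receive a copy

    # List the dashboards on the source and pick the one we want by name
    dashboards = requests.get(SOURCE + "/chronograf/v1/dashboards").json()["dashboards"]
    dashboard = next(d for d in dashboards if d["name"] == "My Awesome Dashboard")

    # POST the definition to the target instance to create the copy
    resp = requests.post(TARGET + "/chronograf/v1/dashboards",
                         json=dashboard,
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.reason)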
    

    And voila. Now your buddy has a copy of your dashboard to use. Simple, quick, and easy. Now go write some code.

    Original Link

    PowerShell With the Couchbase REST API

    PowerShell is a scripting environment/command line that comes with Windows and is also available for Linux and within Azure.

    Maybe you’ve used Postman or Fiddler to make HTTP requests. Those are great, but not necessarily the right tools for automation or scripting.

    You may have heard of curl before. It’s a command line tool for making HTTP requests.

    If you’re a .NET/Windows developer (like me), maybe you aren’t familiar with curl. I use PowerShell as my default command line every day (though I still consider myself a PowerShell neophyte).

    In this post, I’ll show you how you can use PowerShell’s Invoke-WebRequest to make HTTP requests (which you can use within PowerShell scripts for automation).

    You can check out the PowerShell script I created on GitHub.

    Note: As of the time of writing this post, I’m using PowerShell 5.1 on Windows 10.

    Couchbase REST API

    Couchbase Server has an extensive REST API that you can use to manage and administrate just about every aspect of Couchbase Server. For this blog post, I’m going to focus on the Full-Text Search (FTS) API. I’m going to show this because:

    • Creating an FTS index is something you’ll eventually want to automate.
    • You will probably want to share an FTS index you created with your team and/or check it into source control.
    • Couchbase Console already shows you exactly how to do it with curl.

    I’m not going to cover FTS in detail. I invite you to check out past blog posts on FTS, and this short video demonstrating full-text search.

    Full-Text Search Review

    When you initially create an FTS index, you will probably use the built-in FTS UI in the Couchbase Console. This is fine when you are doing the initial development, but it’s not practical if you want to share this index with your team, automate deployment, or take advantage of source control.

    Fortunately, you can use the Show index definition JSON feature to see the JSON data that make up the index definition. You can also have Couchbase Console generate the curl method for you.

    [Image: the Couchbase Console option to generate the FTS curl script]

    Well, if you’re using curl, that’s very convenient. Here’s an example:

    [Image: example curl command generated by Couchbase Console for the FTS index]

    You can copy/paste that into a script, and check the script into source control. But what if you don’t use curl?

    PowerShell Version: Invoke-WebRequest

    First, create a new PowerShell script. I called mine createFtsIndex.ps1. All this PowerShell script is going to do is create an FTS index on an existing bucket.

    You can start by pasting the curl command into this file. The bulk of this command is the JSON definition, which will be exactly the same.

    Let’s break down the rest of the curl command to see what’s happening:

    • -XPUT: This is telling curl to use the PUT verb with the HTTP request.
    • -H "Content-Type: application/json": Use a Content-Type header.
    • http://localhost:8094/api/index/medical-condition: This is the URL of the REST endpoint. The host (localhost here) will vary based on where Couchbase is running, and the medical-condition part is just the name of the FTS index.
    • -d '…json payload…': The body of content that will be included in the HTTP request.

    PowerShell’s Invoke-WebRequest can do all this stuff too, but the syntax is a bit different. Let’s step through the equivalents:

    • -Method PUT: This is telling Invoke-WebRequest to use the PUT verb with the HTTP request, so you can replace -XPUT.
    • -Header @{ … }: Specify headers to use with the request (more on this later).
    • -Uri http://localhost:8094/api/index/medical-condition: You just need to add -Uri in front of the URL.
    • -Body '…json payload…': The body of content is included this way instead of using curl's -d.

    Headers

    PowerShell expects a “dictionary” that contains headers. The syntax for a literal dictionary in PowerShell is @{"key1"="value1"; "key2"="value2"}.

    So then, to specify Content-Type: -Headers @{"Content-Type"="application/json"}.

    One thing that the curl output did not generate is the authentication information that you need to make a request to the API. With curl, you can specify basic authentication by adding the username/password to the URL. It will then translate it into the appropriate Basic Auth headers.

    With PowerShell, it appears you have to do that yourself. My local Couchbase Server has credentials “Administrator” and “password” (please don’t use those in production). Those need to be encoded into Base64 and added to the headers.

    Then the full Headers dictionary looks like this:

    -Headers @{"Authorization" = "Basic "+[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("Administrator:password")); "Content-Type"="application/json"}
    

    You might think that’s a bit noisy, and I agree. If you know a cleaner way to do this, I’m dying to know. Please leave a comment.
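
    As an aside, if you ever need to issue the same request outside of PowerShell, HTTP libraries such as Python's requests will build that Basic Auth header for you. A rough sketch under the same assumptions as above (the file name of the exported index definition is just a placeholder):

    import requests

    # Placeholder: the index definition JSON you exported from Couchbase Console
    with open("medical-condition-index.json") as f:
        index_definition = f.read()

    resp = requests.put(
        "http://localhost:8094/api/index/medical-condition",
        data=index_definition,
        headers={"Content-Type": "application/json"},
        auth=("Administrator", "password"),  # same demo credentials as above
    )
    print(resp.status_code, resp.text)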

    Execute the PowerShell Script

    To execute the script, simply type .\createFtsIndex.ps1 at the PowerShell command line.

    [Image: executing createFtsIndex.ps1 at the PowerShell command line]

    You’re now ready to make this a part of your deployment!

    Original Link