IBM Cloud

IBM is Dancing Again

In 1992, IBM, the biggest technology company of the 20th century, was on the verge of vanishing. In 1993, the company was rescued by Louis V. Gerstner Jr., who took over as CEO and got it back on track. He retired in 2002, leaving IBM once again a leading technology company with billions in profits.

In his book, Who Says Elephants Can’t Dance, he explained the main things he did to make that happen:

Original Link

How to Invoke an External REST API from a Cloud Function

In a previous blog post, I showed how to create your first cloud function (plus a video). It’s very likely that your cloud function will need to invoke an external REST API. The following tutorial will show you how to create such a function (it’s very easy).

  1. Sign into an IBM Cloud account
  2. Click Catalog
  3. Remove the label:lite filter and type “functions” in the search box
  4. Click on the Functions box
  5. Click the Start Creating button
  6. Click Create Action
  7. For Action Name, enter “ajoke” and click the Create button. A new cloud function will be created with a Hello World message
  8. Replace the function code with the following code, which invokes a 3rd-party REST API that returns a random joke:
    var request = require("request");

    function main(params) {
        var options = {
            // URL of the external joke API (omitted in the original post)
            url: "",
            json: true
        };
        return new Promise(function (resolve, reject) {
            request(options, function (err, resp) {
                if (err) {
                    console.log(err);
                    return reject({ err: err });
                }
                return resolve({ joke: resp.body.value.joke });
            });
        });
    }
    • The code is simple. It uses the request Node.js package to connect to an external REST API
    • The external REST API returns a random joke
    • A JavaScript Promise is used for invoking the REST API
    • At the end, the cloud function returns a response in JSON format
  9. Now click the Save button to save the code. Once the code is saved, the button changes to Invoke. Click it to invoke the function. In the right-hand panel, you should see output with a random joke:
    {
      "joke": "Project managers never ask Chuck Norris for estimations... ever."
    }

This is how it looks inside the IBM Cloud Functions editor:


Of course, you can also build and test a cloud function using the CLI. I’ll cover that in another blog post.

For now, let’s expose this cloud function as a REST API so we can invoke it outside the console. In fact, you will be able to invoke it directly from the browser once we make it a Web Action.

  1. On the left-hand side, click Endpoints
  2. Check Enable as Web Action and click Save
  3. Copy the URL and enter it into a browser’s address bar
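For reference, OpenWhisk web actions are served under a predictable URL pattern built from the API host, namespace, package, action name, and a response extension. The helper below sketches how that URL is assembled; the host and namespace values in the example are hypothetical:

```python
def web_action_url(host, namespace, action, package="default", ext="json"):
    """Build an OpenWhisk web action URL.

    Pattern: https://<host>/api/v1/web/<namespace>/<package>/<action>.<ext>
    Actions outside any package live in the implicit "default" package.
    """
    return "https://{0}/api/v1/web/{1}/{2}/{3}.{4}".format(
        host, namespace, package, action, ext)

# hypothetical host and namespace, for illustration only
url = web_action_url("", "my_org_my_space", "ajoke")
```

The `.json` extension tells the platform to return the action result as JSON, which is what the browser displays.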

Here is how it looks in Firefox:


That was easy, right?

In this blog post, you learned how to create a cloud function that invokes an external (3rd-party) API. It’s very likely that even the simplest application will need to get data from an external API, so this is a good example/template to have.

Original Link

How to Migrate On-Premise Databases to IBM DB2 On Cloud


Database migration can look simple from the outside: get the source data and import or load it into the target database. But the devil is always in the details, and the route is not that simple. A database consists of more than just the data; it can contain many different, often related, objects. With DB2, two types of objects exist: system objects and data objects. Let’s see what they are, and later in the article, some of the major objects are discussed along with cautions to observe while planning and migrating.

Most of the major database engines offer the same set of major database object types. (Please read up on these object types from the respective vendors; the definitions and functions are more or less similar. An analogy: when you move from one car to another, details like the ignition button, windows, and overall structure differ, but the basics of a car, such as four wheels, an engine, and a chassis, remain the same.)

  • Tables
  • Indexes
  • Sequences
  • Views
  • Synonyms
  • Alias
  • Triggers
  • User-defined data types (UDTs)
  • User-defined functions (UDFs)
  • Stored procedures
  • Packages

System Objects include:

  • Storage groups
  • Tablespaces
  • Buffer pools
  • System Catalog tables and views
  • Transaction log files

These objects in on-premise databases should be given proper care while planning migrations. It is very important to understand what can and cannot be migrated, since professional services from a 3rd party or the cloud vendor might be needed.

What Can and Can’t Be Migrated?

General SQL user-defined functions (UDFs) can be migrated, but external UDFs might have problems being migrated. External UDFs might be written in C, C++, or Java, compiled (in some cases into a library) that sits at a specified location, and registered with DB2. So, external UDFs need to be rebuilt on the cloud servers because the OS version might differ at the target. Migrating such UDFs might require database migration services from the cloud vendor, or they cannot be migrated to the cloud at all. Similarly, SQL stored procedures can be migrated to the target database, but external stored procedures carry the same constraints as external UDFs and will not be supported.

Materialized query tables (MQTs) can be migrated, but they should be created after the data is moved to the target database. The same applies to triggers. The link between system-period temporal tables and their associated history tables must be broken before the table’s data can be moved (this also holds for bitemporal tables). A system-period temporal table is a table that maintains historical versions of its rows. Bitemporal modeling is an information modeling technique designed to handle historical data along two different timelines. A bitemporal table combines the historical tracking of a system-period temporal table with the time-specific data storage capabilities of an application-period temporal table. Bitemporal tables are generally used to keep user-based period information as well as system-based historical information.
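To make the temporal-table caveat concrete, the versioning link has to be dropped before the data is unloaded and re-established on the target afterwards. The sketch below generates the two DDL statements involved; the `DROP VERSIONING`/`ADD VERSIONING` clauses are standard DB2 DDL, but the table and history-table names are hypothetical:

```python
def temporal_migration_ddl(table="policy", history="policy_history"):
    """Return (pre-move, post-move) DDL for migrating a system-period
    temporal table. Table names are illustrative placeholders."""
    pre_move = [
        # break the link to the history table before moving the data
        "ALTER TABLE {0} DROP VERSIONING".format(table),
    ]
    post_move = [
        # re-establish system-period versioning on the target database
        "ALTER TABLE {0} ADD VERSIONING USE HISTORY TABLE {1}".format(table, history),
    ]
    return pre_move, post_move
```

In a real migration these statements would be run through the DB2 CLP or a driver such as ibm_db, once before the unload and once after the load.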

Now we have some idea of what can and cannot be migrated. Database administrators should schedule downtime while performing this work. Proper planning and caution should go into each of the activities discussed, and enough time should be allotted to understand the nature of the migration. Let me also point out a major constraint when migrating to DBaaS or DB instances on cloud (from a system object point of view): only one buffer pool is supported, and user spaces should be merged into the main user space with one buffer pool to reach the target state. Multiple user spaces with multiple buffer pools are not supported for DBaaS or DB on an instance (VM) on cloud. So, remember that!

Now we can start the migration. IBM provides certain tools for migration tasks; the important ones are the db2look utility and IBM Optim High Performance Unload. The db2look utility generates the Data Definition Language (DDL) statements for the target DB2. IBM Optim High Performance Unload can copy the current database to a temporary folder/bucket, which can be AWS S3 or Softlayer Swift, and the same data can then be loaded into the target with the import/load utilities.
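As an illustration of the db2look step, the helper below assembles a typical invocation. The database name and output file are hypothetical; the flags are real db2look options (`-e` extracts object DDL, `-l` adds tablespace, buffer pool, and storage group definitions, `-x` includes authorization statements, `-o` writes to a file):

```python
def db2look_command(database, output_file):
    """Assemble a db2look call that extracts DDL plus the system objects
    (tablespaces, buffer pools, storage groups) discussed above."""
    return ["db2look", "-d", database, "-e", "-l", "-x", "-o", output_file]

cmd = db2look_command("MYDB", "mydb_ddl.sql")
# On a machine with the DB2 client installed, this could be executed with, check=True)
```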

The various ways to move data to DB2 on cloud (DB2 Hosted, DB2 on Cloud, and DB2 Warehouse on Cloud) are given below:

  • Load data from a local file stored on the desktop (Using #Bluemix interface)
  • Load data from a Softlayer swift object store (Using #Bluemix interface)
  • Load data from Amazon S3 (Using #Bluemix interface)
  • Use the DB2 data movement utilities, remotely
  • Use the IBM data transfer service (25 to 100TB)
  • Use the IBM Mass Data migration service (100 TB or more)

Now comes the security aspect of migrating: encryption using AES or 3DES is recommended, and SSL and TLS are the preferred methods to secure data in transit.
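For the data-in-transit part, here is a minimal client-side sketch in Python. It is an assumption of what "preferred" looks like in practice: a default SSL context already enforces certificate and hostname verification, and the minimum protocol version can be pinned explicitly:

```python
import ssl

# create_default_context() enables certificate and hostname verification
context = ssl.create_default_context()

# refuse anything older than TLS 1.2 (requires Python 3.7+)
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Such a context would then be passed to the client library opening the database or HTTPS connection.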

Let’s also shed some light on DB2’s native encryption and how it works.

  • The client requests an SSL connection and lists its supported cipher suites (AES, 3DES).
  • The server responds with a selected cipher suite and a copy of its digital certificate, which includes a public key.
  • The client checks the validity of the certificate. If it is valid, a session key and a message authentication code (MAC) are encrypted with the public key and sent back to the server.
  • The server decrypts the session key and MAC, then sends an acknowledgment to start an encrypted session with the client.
  • The server and client securely exchange data using the selected session key and MAC.

These are some of the important points to be considered while migrating on-premise databases to IBM DB2 on Cloud.

Feel free to share your views in the comments. 

Original Link

Using Cloud Functions for Automated, Regular Cloud Database Jobs

A few days ago, I blogged about a tutorial I wrote. The tutorial discusses how to combine serverless and Cloud Foundry for data retrieval and analytics. That scenario came up when I looked into regularly downloading GitHub traffic statistics for improved usage insights. What I needed was a mechanism to execute a small Python script on a daily or weekly basis. After looking into some possible solutions, IBM Cloud Functions was the clear winner. In this article, I am going to discuss how simple it is to implement regular, automated activities, such as maintenance jobs for a cloud database.


Code Your Action

The action is the part that is executed. IBM Cloud Functions supports several programming languages for coding an action. JavaScript, Swift, Python, and some others can be used, or even a Docker image provided. In my case, I implemented a Python action to fetch the GitHub account information and the list of repositories from Db2, then to retrieve the traffic data from GitHub and, last, to merge it in Db2. The code for that particular action can be found in this file on GitHub.
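A Cloud Functions Python action is simply a main function that receives a parameter dictionary and returns a JSON-serializable dictionary. Below is a minimal skeleton of the flow just described; the credential key names and the returned field are illustrative assumptions, not the actual code from the repository:

```python
def main(params):
    # Bound service credentials appear in the parameters after the
    # service bind step (key names here are illustrative).
    creds = params.get("__bx_creds", {}).get("dashDB", {})

    # 1. read the GitHub account and repository list from Db2 using creds
    # 2. call the GitHub traffic API for each repository
    # 3. merge the new statistics back into Db2

    return {"repositories_processed": 0}
```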

Create Action, Trigger, and Rule

Once the code is ready, it can be used to create a Cloud Functions action. The available runtime environments already include drivers for several database systems, including Db2. The ZIP file includes extra files for modules that are not part of the standard environment. The second step is to bind the action to the database service; that way, the database credentials are obtained automatically.

# Create the action to collect statistics
bx wsk action create collectStats --kind python-jessie:3

# Bind the service credentials to the action
bx wsk service bind dashDB collectStats --instance ghstatsDB --keyname ghstatskey

# Create a trigger for firing off weekly at 6am on Sundays
bx wsk trigger create myWeekly --feed /whisk.system/alarms/alarm \
  --param cron "0 6 * * 0" --param startDate "2018-03-21T00:00:00.000Z" \
  --param stopDate "2018-12-31T00:00:00.000Z"

# Create a rule to connect the trigger with the action
bx wsk rule create myStatsRule myWeekly collectStats

A trigger emits an event on the given schedule. The trigger definition above uses cron syntax to fire every Sunday at 6 AM. Last, a rule connects the trigger with the action, which causes the action to be executed on a weekly schedule.
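To make the schedule concrete, here is a small sketch that checks a timestamp against the five cron fields. It is deliberately simplified: it only handles `*` and plain numbers, not ranges, lists, or step values:

```python
def cron_matches(expr, minute, hour, day_of_month, month, day_of_week):
    """Check a time against a five-field cron expression (simplified)."""
    fields = expr.split()
    values = [minute, hour, day_of_month, month, day_of_week]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "0 6 * * 0": minute 0, hour 6, any day of month, any month, Sunday (0)
print(cron_matches("0 6 * * 0", 0, 6, 25, 3, 0))   # a Sunday at 6:00 -> True
print(cron_matches("0 6 * * 0", 0, 6, 26, 3, 1))   # a Monday at 6:00 -> False
```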


Using IBM Cloud Functions, it is easy to implement automated, regular maintenance jobs. This could be cleaning up data in a database, calling APIs of web services, summarizing activities and sending out a weekly report, and much more. For my use case, it is the ideal tool for the problem. It is inexpensive ("cheap") because it only consumes resources once a week, for a few seconds. Read the full tutorial in the IBM Cloud documentation.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Original Link

Access Db2 from IBM Cloud Functions the Easy Way (Node.js)

Recently, I have been experimenting with the IBM Watson Conversation service and Db2. With a new feature in the conversation service, it is possible to perform programmatic calls from a dialog node. Why not query Db2? I implemented both a Db2 SELECT and an INSERT statement, wrapped into actions of IBM Cloud Functions. It is quite easy, and here is what you need to know.

Db2 access via IBM Cloud Functions

The conversation service supports client-side and service-side calls. This means that either the application driving the chat can be instructed to make an outside call, or the conversation service itself invokes an action. That is, IBM Cloud Functions actions can be called. For my experiment, I coded up two actions: one to fetch data from a Db2 database, the other to insert new data. I chose the Node.js 8 runtime because the Db2 driver for Node.js is already part of the runtime environment. The sources for the Db2-related actions are in this GitHub repository.

Passing the right values to the functions shouldn’t be a problem for you. Obtaining the credentials for Db2 and making them available inside the action got simplified recently. The CLI plugin for IBM Cloud Functions allows you to bind a service to an action as shown here:

bx wsk service bind dashDB /hloeser/ConferenceFetch --instance henrikDB2 --keyname henriksKey2 
In the example I am binding credentials for the dashDB service (that is Db2 Warehouse on Cloud) to one of my actions named “ConferenceFetch”. Because I have multiple service instances and possibly multiple keys (credentials) I make use of the optional parameters “instance” and “keyname”. Thereafter, the Db2 configuration including username and password is available in the action metadata. In the action code I am using this syntax to obtain the “dsn” information. The dsn is used by the Db2 driver to connect to the database.

 __bx_creds: {dashDB:{dsn}} 
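In other words, the bound credentials show up as a nested structure in the action parameters. A quick sketch of the lookup (shown in Python for brevity, although the actions themselves are Node.js; the sample dsn value is made up):

```python
def dsn_from_params(params):
    # service binding injects credentials under __bx_creds.<service-name>
    return params["__bx_creds"]["dashDB"]["dsn"]

# fabricated example of what the injected parameters look like
params = {"__bx_creds": {"dashDB": {
    "dsn": "DATABASE=BLUDB;HOSTNAME=example;PORT=50000;UID=user;PWD=secret;"}}}
dsn = dsn_from_params(params)
```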

With this rough outline you should be able to get your IBM Cloud Functions connected to Db2. Those actions can then directly be called from within a dialog node of Watson Conversation. I will post details on how that works in another blog entry.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Original Link

Performant Serverless Swift Actions

While coding and drafting “Mobile app with a Serverless Backend,” we came up with an idea to use Swift on the server-side for an iOS app (it’s an interesting use case) to cater to the full stack Swift developers out there. The same is true for the Android version of the app as well. 

Here’s an architectural interpretation of what we wanted to achieve:

As an initial step, I started exploring Swift actions under Functions (FaaS) on IBM Cloud. My first challenge was to authenticate the user through the App ID service, and this should happen entirely on the server-side in a Swift action. I figured out that there is an introspect API endpoint that validates the token you pass. This is good, but can I use external packages like SwiftyJSON inside a Swift Cloud Functions action? This will be answered soon.

Here’s the action to validate a token:

/***************
** Validate a user access token through the Introspect endpoint of
** the App ID service on IBM Cloud.
***************/
import Foundation
import Dispatch
import SwiftyJSON

func main(args: [String:Any]) -> [String:Any] {
    var args: [String:Any] = args
    let str = ""
    var result: [String:Any] = [
        "status": str,
        "isactive": str
    ]
    guard let requestHeaders = args["__ow_headers"] as! [String:Any]?,
        let authorizationHeader = requestHeaders["authorization"] as? String else {
        print("Error: Authorization headers missing.")
        result["ERROR"] = "Authorization headers missing."
        return result
    }
    guard let authorizationComponents = authorizationHeader.components(separatedBy: " ") as [String]?,
        let bearer = authorizationComponents[0] as? String, bearer == "Bearer",
        let accessToken = authorizationComponents[1] as? String,
        let idToken = authorizationComponents[2] as? String else {
        print("Error: Authorization header is malformed.")
        result["ERROR"] = "Authorization header is malformed."
        return result
    }
    guard let username = args["services.appid.clientId"] as? String,
        let password = args["services.appid.secret"] as? String,
        let tenantid = args["tenantid"] as? String else {
        print("Error: missing a required parameter for basic Auth.")
        result["ERROR"] = "missing a required parameter for basic Auth."
        return result
    }
    let loginString = username + ":" + password
    let loginData = .utf8)!
    let base64LoginString = loginData.base64EncodedString()
    let headers = [
        "content-type": "application/x-www-form-urlencoded",
        "authorization": "Basic \(base64LoginString)",
        "cache-control": "no-cache",
    ]
    let postData = "tenantid=\(tenantid)&token=\(accessToken)"
    var request = URLRequest(url: URL(string: (args["services.appid.url"] as? String)! + "/introspect")! as URL,
                             cachePolicy: .useProtocolCachePolicy,
                             timeoutInterval: 10.0)
    request.httpMethod = "POST"
    request.allHTTPHeaderFields = headers
    request.httpBody = .utf8)
    let semaphore = DispatchSemaphore(value: 0)
    let sessionConfiguration = URLSessionConfiguration.default
    let urlSession = URLSession(configuration: sessionConfiguration, delegate: nil, delegateQueue: nil)
    let dataTask = urlSession.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
        guard let data = data, error == nil else {
            print("Error: \(String(describing: error?.localizedDescription))")
            return
        }
        if let httpStatus = response as? HTTPURLResponse {
            if httpStatus.statusCode == 200 {
                let responseString = String(data: data, encoding: .utf8)
                guard let data = responseString?.data(using: String.Encoding.utf8),
                    let dictionary = try? JSONSerialization.jsonObject(with: data, options: []) as? [String:Bool] else {
                    return
                }
                if let myDictionary = dictionary {
                    print(" isActive : \(myDictionary["active"]!)")
                    result = [
                        "status": String(httpStatus.statusCode),
                        "isactive": myDictionary["active"]!
                    ]
                }
            } else {
                print("Unexpected response: \(httpStatus.statusCode)")
                print("\(httpStatus)")
                result["ERROR"] = httpStatus
            }
        }
        print("operation concluded")
        semaphore.signal()
    })
    dataTask.resume()
    _ = semaphore.wait(timeout: DispatchTime.distantFuture)
    if (result["isactive"] != nil && result["isactive"]! as! Bool) {
        let parsedAccessToken = parseToken(from: accessToken)["payload"]
        let parsedIdToken = parseToken(from: idToken)["payload"]
        var _accessToken = ""
        var _idToken = ""
        if let accessTokenString = parsedAccessToken.rawString() {
            _accessToken = accessTokenString
        } else {
            print("ERROR: accessTokenString is nil")
        }
        if let idTokenString = parsedIdToken.rawString() {
            _idToken = idTokenString
        } else {
            print("ERROR: idTokenString is nil")
        }
        args["_accessToken"] = _accessToken
        args["_idToken"] = _idToken
        return args
    } else {
        result["ERROR"] = "Invalid Token or the token has expired"
        return result
    }
}

extension String {
    func base64decodedData() -> Data? {
        let missing = self.characters.count % 4
        var ending = ""
        if missing > 0 {
            let amount = 4 - missing
            ending = String(repeating: "=", count: amount)
        }
        let base64 = self.replacingOccurrences(of: "-", with: "+")
                         .replacingOccurrences(of: "_", with: "/") + ending
        return Data(base64Encoded: base64, options: Data.Base64DecodingOptions())
    }
}

func parseToken(from tokenString: String) -> JSON {
    print("parseToken")
    var json = JSON([:])
    let tokenComponents = tokenString.components(separatedBy: ".")
    guard tokenComponents.count == 3 else {
        print("ERROR: Invalid access token format")
        return json
    }
    let jwtHeaderData = tokenComponents[0].base64decodedData()
    let jwtPayloadData = tokenComponents[1].base64decodedData()
    let jwtSignature = tokenComponents[2]
    guard jwtHeaderData != nil && jwtPayloadData != nil else {
        print("ERROR: Invalid access token format")
        return json
    }
    let jwtHeader = JSON(data: jwtHeaderData!)
    let jwtPayload = JSON(data: jwtPayloadData!)
    json["header"] = jwtHeader
    json["payload"] = jwtPayload
    json["signature"] = JSON(jwtSignature)
    return json
}

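The parseToken helper above boils down to base64url-decoding the first two segments of a JWT. Here is the same idea as a compact Python sketch (illustrative only; the token built in the example is fabricated and unsigned):

```python
import base64
import json

def b64url_decode(segment):
    # base64url strips the '=' padding; restore it before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def parse_token(token):
    """Split a JWT into its decoded header, decoded payload, and raw signature."""
    header, payload, signature = token.split(".")
    return {
        "header": json.loads(b64url_decode(header)),
        "payload": json.loads(b64url_decode(payload)),
        "signature": signature,
    }

def b64url_encode(obj):
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode())
    return raw.rstrip(b"=").decode()

# fabricated, unsigned token for illustration
token = ".".join([b64url_encode({"alg": "none"}),
                  b64url_encode({"active": True}),
                  "sig"])
print(parse_token(token)["payload"]["active"])  # True
```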
You can find other actions here.

Clone the code by running the following command on a terminal, or download the code from the GitHub repo:

git clone

From the architecture diagram, you should have figured out that once we are authenticated, the next steps are to add the user through one of the actions and then save the feedback provided by the user along with their device ID (to send push notifications). A trigger is associated with the feedback table, so that once new feedback is added to the table, the trigger fires to send the feedback to the Watson Tone Analyzer, and the resulting tone is passed to Cloudant to map and send the associated message as a push notification to the feedback provider/customer.

This is all good and interesting. But what is the execution time for this flow to complete? Let’s look at the execution time of one action, and at how to improve the performance:

swift serverless action execution times

If you observe, the initial serverless action call took 5.51 seconds to complete, and subsequent calls are faster. So, what exactly is the reason? Here’s what the IBM Cloud Functions documentation says:

When you create a Swift action with a Swift source file, it has to be compiled into a binary before the action is run. Once done, subsequent calls to the action are much faster until the container that holds your action is purged. This delay is known as the cold-start delay.

Here is how to overcome this and make our Swift Cloud Functions actions performant from the word go by avoiding the cold-start delay.

To avoid the cold-start delay, you can compile your Swift file into a binary and then upload to Cloud Functions in a zip file. As you need the OpenWhisk scaffolding, the easiest way to create the binary is to build it within the same environment it runs in.

 See the following steps:

  • Run an interactive Swift action container by using the following command:
    docker run --rm -it -v "$(pwd):/owexec" openwhisk/action-swift-v3.1.1 bash
  • Copy the source code and prepare to build it.
    cp /owexec/{PATH_TO_DOWNLOADED_CODE}/ValidateToken.swift /swift3Action/spm-build/main.swift
    echo '_run_main(mainFunction:main)' >> /swift3Action/spm-build/main.swift

  • (Optional) Create the Package.swift file to add dependencies.
    import PackageDescription

    let package = Package(
        name: "Action",
        dependencies: [
            .Package(url: "", majorVersion: 3),
            .Package(url: "", "0.2.3"),
            .Package(url: "", "1.7.10"),
            .Package(url: "", "15.0.1"),
            .Package(url: "", "0.16.0")
        ]
    )

    This example adds the swift-watson-sdk and example-package-deckofplayingcards dependencies. Notice that CCurl, Kitura-net, and SwiftyJSON are provided in the standard Swift action, so you can include them in your own Package.swift.

  • Copy Package.swift to the spm-build directory
    cp /owexec/Package.swift /swift3Action/spm-build/Package.swift
  • Change to the spm-build directory
    cd /swift3Action/spm-build
  • Compile your Swift Action.
    swift build -c release
  • Create the zip archive.
    zip /owexec/{PATH_TO_DOWNLOADED_CODE}/ .build/release/Action
  • Exit the Docker container.

You can see that the zip archive is created in the same directory as ValidateToken.swift.

  • Upload it to OpenWhisk with the action name authvalidate:
    wsk action update authvalidate --kind swift:3.1.1
  • To check how much faster it is, run 
    wsk action invoke authvalidate --blocking

Refer to this link to install the Cloud Functions standalone CLI.

The time that it took for the action to run is in the “duration” property; compare it to the time it takes to run with a compilation step in the validateToken action.

So, what is the performance improvement between running an action created from a .swift file for the first time and running an action created from a binary file?

Here’s the answer:


auth-validate is an action created with the .swift file and authvalidate1 is an action created with a binary file.

Don’t forget to refer to our other solution tutorials covering these end-to-end use cases and scenarios.

Original Link

Deploying TensorFlow Models to Kubernetes on IBM Cloud

Ansgar Schmidt and I open sourced a sample that shows how to use TensorFlow to recognize certain types of flowers. The first version of the sample used the MobileNet model, which we deployed to the serverless platform OpenWhisk to run the predictions. This article describes how to deploy a TensorFlow model to Kubernetes.

TensorFlow supports various image recognition models that can be retrained for certain objects. MobileNet is small and can be run in OpenWhisk. The more precise but bigger Inception model, however, is too big for the 512 MB memory limit of OpenWhisk. Instead, it can be deployed to Kubernetes, for example on the IBM Cloud, which provides a free plan that can be used to run this sample.

In order to deploy TensorFlow models to production environments, you should check out TensorFlow Serving. It provides a lot of nice functionality to manage models, provide API access to models, and more. I set up Serving, but it wasn’t as straightforward as I had hoped. Below is a quick tutorial of how you can deploy a model on Kubernetes without Serving.

I’ve open sourced the code. There is also a live demo of the original sample using the MobileNet model.

Check out the blog from Ansgar to learn how to train the model. The only change is that you have to refer to Inception rather than MobileNet. Ansgar also describes how to deploy the model to OpenWhisk via Docker. The same Docker image can also be deployed to Kubernetes on the IBM Cloud.

Before the Docker image can be built, you need to create an instance of IBM Object Storage on the IBM Cloud. Check out the article Accessing IBM Object Store from Python for details. Paste the values of ‘region’, ‘projectId’, ‘userId’, and ‘password’ into the configuration file. After this, upload the model (retrained_graph.pb and retrained_labels.txt) into your Object Storage instance in a bucket named ‘tensorflow’.

In order to build the image, run these commands:

$ git clone
$ cd VisualRecognitionWithTensorflow/Classify
$ docker build -t $USER/tensorflow-kubernetes-classify:latest .

In order to test the image, run these commands:

$ docker run -d -p 8080:8080 $USER/tensorflow-kubernetes-classify:latest
$ curl http://localhost:8080/classify?image-url=

In order to deploy the image to Kubernetes on the IBM Cloud, run the following commands after you’ve updated your username in [tensorflow-model-classifier.yaml](Classify/tensorflow-model-classifier.yaml):

$ docker push $USER/tensorflow-kubernetes-classify:latest
$ bx plugin install container-service -r Bluemix
$ bx login -a
$ bx cs cluster-config mycluster
$ export KUBECONFIG=/Users/nheidlo.....
$ cd Classify
$ kubectl create -f tensorflow-model-classifier.yaml
$ bx cs workers mycluster
$ kubectl describe service classifier-service

In order to test the classifier, open the following URL after you’ve replaced your ‘Public IP’ and ‘NodePort’ from the previous two commands:
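Putting the pieces together, the test URL has the same shape as the local curl call earlier: the host is the worker’s public IP, the port is the service NodePort, and the image URL goes into the image-url query parameter. The values below are placeholders, not real cluster addresses:

```python
from urllib.parse import urlencode

def classify_url(public_ip, node_port, image_url):
    # same endpoint shape as the local test: /classify?image-url=<url>
    return "http://{0}:{1}/classify?{2}".format(
        public_ip, node_port, urlencode({"image-url": image_url}))

# placeholder IP and port; use the values from `bx cs workers` and
# `kubectl describe service classifier-service`
url = classify_url("", 30080, "")
```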

Check out the complete source on GitHub. For now it’s in my fork, but we’ll merge it in the original project soon.

Here is a screenshot of the sample web application:


Original Link

IBM Cloud: Deploying the TensorFlow Inception Model

As a developer, I’m trying to better understand how developers work together with data scientists to build applications that leverage machine learning. In this context, one key question is how developers can access models from their applications. Below is a quick introduction to TensorFlow Serving and a description how to deploy TensorFlow models onto the IBM Cloud.

Here is the description of TensorFlow Serving from the homepage:

“TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.”

TensorFlow Serving can be deployed on Kubernetes, which is documented in this tutorial. The tutorial shows how to deploy and access the InceptionV3 model, which has been trained with 1.2 million images from ImageNet with 1,000 classes. This model is a great starting point for your own visual recognition scenarios. See the retraining documentation for details.

The tutorial describes how to build your own Docker image with TensorFlow Serving. Unfortunately, there are quite a number of manual steps necessary at this point. If you just want to try the quality of the image recognition model, you can use the image that I created from DockerHub.

In order to deploy this image on Kubernetes in the IBM Cloud, save the file inception_k8s.yaml locally.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: inception-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: inception-server
    spec:
      containers:
      - name: inception-container
        image: nheidloff/inception_serving
        command:
        - /bin/sh
        - -c
        args:
        - serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=/tmp/inception-export
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: inception-service
  name: inception-service
spec:
  ports:
  - port: 9000
    targetPort: 9000
  selector:
    app: inception-server
  type: NodePort

After this, invoke these commands:

$ bx plugin install container-service -r Bluemix
$ bx login -a
$ bx cs cluster-config mycluster
$ export KUBECONFIG=/Users/nheidlo.....
$ kubectl create -f inception_k8s.yaml
$ bx cs workers mycluster
$ kubectl describe service inception-service

Write down the IP address and NodePort since you need this information in the next step. Also, note that this works even for the lite Kubernetes account on IBM Cloud without Ingress.

This is what you should see after everything has been deployed (‘$kubectl proxy’).


TensorFlow Serving supports gRPC to access the models. In order to test the served model, a test gRPC client is needed. In the simplest case, you can use the same image, since it comes with a test client. Here is a sample flow of commands to get the classes Inception returns for my picture.

$ docker run -it nheidloff/inception_serving
$ cd serving/
$ wget
$ bazel-bin/tensorflow_serving/example/inception_client --server= --image=./4Y7B9422-4.jpg

If you want to access the deployed models via REST instead of gRPC, you might want to check out this blog entry Creating REST API for TensorFlow models, which looks promising and is on my list of things I’d like to try out.

Original Link

Understanding Cloud Foundry Logging on IBM Cloud

Last month, after receiving user questions, I blogged about how to decipher Cloud Foundry log entries. Today, I want to point you to a small Cloud Foundry Python app I wrote. It helps to better understand Python and Cloud Foundry logging. You can also use it to test the IBM Cloud Log Analysis service, which provides an easy-to-use interface to logs generated by applications running in the IBM Cloud. In the premium plans, external log events can also be fed into the service for consolidated storage and analysis.

As usual, the code for my app is available on GitHub. Once deployed to IBM Cloud, the app can be used to send messages on a chosen log level back to the server. The server-side log level, i.e. the threshold for processed log messages, can also be set. The app produces diagnostic output on “stdout” and “stderr”. The two are treated differently by Cloud Foundry. Here is a screenshot of the logging app:

Test App for Cloud Foundry Logging

The produced log entries can also be used to try out the IBM Cloud Log Analysis service. Diagnostic logs are automatically forwarded to the Log Search of that service. The messages are fed into Elasticsearch and can be analyzed using Kibana. I wrote some search queries (one shown below) and then built visualizations like the “Donut” shown below based on those queries. I will write more about that in a future blog post.

Search Query for Elasticsearch / IBM Cloud Log Analysis
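The searches are regular Elasticsearch queries. A hypothetical example of a query body that filters recent error-level log lines could look like the following; the field names (`message_type`, `@timestamp`) are illustrative assumptions, not taken from the screenshot:

```python
import json

# Hypothetical Elasticsearch query body: match error-stream log entries
# from the last hour. Field names are assumptions for illustration.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"message_type": "ERR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "size": 50,
}

print(json.dumps(query, indent=2))
```

Saved queries like this one can then serve as the data source for Kibana visualizations such as the "Donut" mentioned above.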

An official tutorial using that app and Log Analysis is available in the IBM Cloud docs. It walks you through the steps to create the service, deploy the app, and compose search queries as well as the visualizations.

If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Original Link

Dev Friendliness: IBM Cloud Automation Manager and IBM Cloud Private

This is a great time to be working in the IT field, with rapidly evolving trends in:

  • Cloud computing
  • Artificial intelligence and machine learning
  • Blockchain technology
  • Mobility technology

At the same time, hybrid IT brings new challenges for Central IT:

  • Increased operational complexity: maintaining one-of-a-kind environments using vendor-specific tools is expensive
  • Reduced efficiency: it is difficult to standardize delivery of cloud services
  • Governance: it is difficult to ensure applications are secure and compliant

Cost Management: Are Resources Being Used Efficiently?

The key to achieving a balance between Central IT and Lines of Business is to give Central IT tools and capabilities that enable it to respond with speed and agility to the needs of the business. When Central IT can respond quickly, the incentive for business units to go their own way is reduced, and it becomes possible to achieve cost efficiencies by standardizing the delivery of cloud resources and application services.

IBM Cloud Automation Manager enables Central IT to respond quickly to the needs of the business. It provides a common automation framework for provisioning application services across your public, private, and hybrid clouds, along with an integrated service-composition experience that lets IT Architects create multi-architecture application services that are easy to consume in DevOps toolchains and from self-service catalogs.


Enterprise-Grade Cloud Agnostic Automation for Public, Private, and Hybrid Workloads

Manually configuring cloud infrastructure and application environments is tedious and error-prone. Automation reduces errors, but most automation solutions are cloud vendor-specific and require specialized skills. IBM Cloud Automation Manager leverages the open source Terraform project to deliver cloud-agnostic automation that can provision multi-architecture and hybrid application services in public and private clouds, including IBM Cloud, IBM Cloud Private, AWS EC2, Azure, VMware, OpenStack, PowerVC, and Google Cloud Platform.

IBM Cloud Private and IBM Cloud Automation

IBM Cloud Automation Manager, delivered on IBM Cloud Private, offers end-to-end capabilities that allow IT operations to deploy, automate and manage multi-cloud, multi-architecture application environments, while providing easier access for developers to build and create applications within company policy and security.

Together, IBM Cloud Automation Manager and IBM Cloud Private provide a consistent, fully integrated solution for deploying and managing all of your containerized, VM-based, and cloud native workloads across your public, private, and hybrid clouds.

IBM Cloud Automation Manager gives you flexibility in your hybrid IT and helps you gain speed through self-service access to cloud and application services. IBM Cloud Automation Manager’s modern micro-service architecture is highly flexible to meet your future needs and is purposefully designed for extensibility, enabling you to leverage your existing datacenter investments.

IBM Cloud Automation Manager and Terraform

IBM Cloud Automation Manager uses open source Terraform for cloud agnostic automation.

Terraform is a highly successful, rapidly growing, innovative open source project that enjoys the support of multiple cloud vendors. Terraform is designed around the principle of managing cloud infrastructure as code. Using Terraform, cloud infrastructure is described with declarative text files that can be stored in a version control system and managed as code. Infrastructure managed as code is easy to share, easy to reproduce and simple to govern.
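To make "infrastructure as code" concrete, a minimal sketch of such a declarative Terraform file is shown below; the resource, the AMI id, and the counts are placeholders for illustration, not values from a real Cloud Automation Manager template:

```hcl
# Illustrative Terraform configuration -- resource names and ids are
# placeholders, not taken from an actual CAM service template.
variable "instance_count" {
  default = 2
}

resource "aws_instance" "web" {
  count         = "${var.instance_count}"
  ami           = "ami-0123456789abcdef0"   # placeholder image id
  instance_type = "t2.micro"
}
```

Because the description is plain text, it can be committed to version control, reviewed like any other code change, and reproduced exactly in another account or region.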

IBM Cloud Automation Manager brings enterprise capabilities to Terraform:

  • Graphical user interface to simplify access to cloud automation services
  • Multi-tenancy and RBAC separation between service providers and consumers
  • Integration with GitLab and GitHub
  • REST API to access all IBM Cloud Automation Manager capabilities
  • Production ready out of the box. High availability, security, monitoring, tenancy and other services are obtained from the IBM Cloud Private runtime.

Workflow Orchestration and Service Composition

IT Architects can use IBM Cloud Automation Manager's drag-and-drop Service Composer to combine different types of automation activities into a service object that can be published into self-service catalogs or delivered in DevOps toolchains. Automation activities are dragged from the palette onto the canvas, connected with other activities, and published as a single consumable entity into an object store or service catalog. The Service Composer supports multiple activity types, including Terraform configurations and variable presets, order forms, REST API invocations, if/then decision logic, email notifications, and Helm charts. A service object can also invoke activities that reside in different clouds, enabling delivery of hybrid cloud services.

Composed services enable IT Architects to hide automation complexity and tailor delivery of cloud infrastructure and application services to meet the needs of the individual business unit consumer.

Get Started Fast With Pre-Built Automation and Your Choice of Application Architecture

IBM Cloud Automation Manager with IBM Cloud Private includes a catalog of pre-built Helm charts, Terraform configurations, and Chef scripts for popular IBM middleware and open source software. Select the architecture that is right for your project, or compose mixed-architecture solutions, and get into production quickly with IBM Cloud Private and IBM Cloud Automation Manager.

Any Workload, Any Cloud, On-Demand

Business units can respond with speed and agility to business requirements when Central IT is able to deliver on-demand self-service access to cloud infrastructure and application services.

Cloud-agnostic automation enables Central IT to reduce cost by standardizing the processes, tools, and skills for delivering and maintaining cloud infrastructure and application services at scale across all of your clouds. IBM Cloud Automation Manager on IBM Cloud Private provides a multi-cloud management solution that strikes the right balance between Central IT's need for governance and oversight and the business units' need to respond with speed and agility.

Original Link

Using Db2 as a Cloud SQL Database With Python

Load data into Db2 on Cloud

Over the summer, I learned that Python is at the top of the IEEE programming languages ranking. It is also my favorite language for quickly coding tools, web apps and analyzing data with notebooks (such as on IBM Data Science Experience). Did you know that IBM provides four different Db2 drivers for Python? These are:

  1. A driver with the native Db2 API.

  2. A driver that supports the official Python DBI (database interface).

  3. A driver for the popular SQLAlchemy Python SQL Toolkit.

  4. A driver for the Python-based Django web framework.
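To give a feel for option 2: the DB-API driver (ibm_db_dbi) implements the standard Python database interface (PEP 249), so code written against it follows the familiar connect/cursor/execute pattern. The sketch below uses the stdlib sqlite3 module as a stand-in, since it implements the same interface and runs without a Db2 instance; with Db2 you would obtain the connection from ibm_db_dbi.connect() with a Db2 connection string instead, and the table and column names here are invented for illustration:

```python
# DB-API 2.0 (PEP 249) access pattern, as used by the ibm_db_dbi driver.
# sqlite3 stands in here so the sketch runs without a Db2 instance.
import sqlite3

def find_cities(conn, name_prefix):
    """Return (name, population) rows matching a city-name prefix."""
    cur = conn.cursor()
    cur.execute(
        "SELECT name, population FROM cities "
        "WHERE name LIKE ? ORDER BY population DESC",
        (name_prefix + "%",),   # parameter marker avoids SQL injection
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, population INTEGER)")
conn.executemany(
    "INSERT INTO cities VALUES (?, ?)",
    [("Berlin", 3645000), ("Bern", 133000), ("Boston", 692600)],
)

print(find_cities(conn, "Ber"))
# → [('Berlin', 3645000), ('Bern', 133000)]
```

The native ibm_db API (option 1, used by the app below) is lower level, with calls such as ibm_db.connect() and ibm_db.exec_immediate(), but the overall flow is the same.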

In an older article, I showed you how to use SQLAlchemy with Db2. Today, I am going to demonstrate how simple it is to create a SQL database-backed web app in the IBM Cloud, utilizing the native Db2 API.

The app is based on the Flask web framework and provides access to city information. The data comes from GeoNames. After the data has been loaded into Db2, it is accessed by the app and displayed using a simple page template. Users can search via a form or directly access city information through static URIs.

City Information

I put the source code and all required instructions into a GitHub repository. The included README takes you through all the steps, from provisioning a Db2 database on the IBM Cloud to creating a table and loading data to deploying the app. Make sure to take a look at the (few!) comments in the files that provide additional insight.

You can find an extended version of the instructions as a tutorial in the docs for IBM Cloud.

I hope you enjoy it. If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.

Original Link

Cloudy Morning: Growing Pains

As I was compiling this list, an interesting trend in the articles struck me. This month, there's a lot of focus on "next steps" in the cloud (or at least for cloud-related components and tools). For instance, does Kubernetes need to move on from GitHub? And will you miss the name IBM Bluemix? Are you ready to make use of AWS' C5 instances? And do you know how cloud service providers are changing their pricing models?

Well, take a look! It’s just below.

Looking Cloudy Out

  1. Kubernetes Needs to Ditch GitHub, by Matt Butcher. While GitHub is traditionally the go-to place for the cool kids of software, now that Kubernetes is growing up, it might be time to move out.
  2. A Serverless Computing Primer: A Comparison, by Derric Gilling. This breakdown of the big three serverless vendors (AWS, Azure, and Google) covers their strengths, their weaknesses, and how devs can best use them.
  3. Monitoring AWS Lambda With Thundra, by Serkan Ozal. Check out how one org has configured their monitoring infrastructure for better visibility into environments that use AWS Lambda.
  4. Why SaaS Is Dead: The Rise of the Micro Value Software Economy, by Brian Reale. Look into the not-too-distant future and see how cloud providers are changing their pricing models and how that impacts SaaS.
  5. Top 20 Cloud Blogs for Every Cloud Architect [Infographic], by Sakshi Gaurav. Want to stay on top of your cloud game? This curated list of 20 blogs in infographic form will help connect you with the knowledge you need.

Going Stormchasing

Bluemix No More!

Earlier this month, it was something of a surprise to learn that IBM is phasing out the Bluemix name in favor of IBM Cloud. The reasoning makes sense. IBM Bluemix and IBM Cloud had basically already merged. This just makes it official.

AWS Launches C5 Instances

When C4 just isn’t powerful enough, it’s time to use C5 (that’s an explosives joke right there). My comic genius aside, it’s good to see the compute-intensive C5s come to fruition after they were announced just under a year ago. So far, they’re available in six sizes in three AWS regions: US East (Northern Virginia), US West (Oregon), and EU (Ireland).

Be warned: According to the announcement, “The current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of C5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.”

The Crystal Ball for 2018

Forrester just put out its predictions for cloud computing in 2018. A couple of predictions? Kubernetes will win the container orchestration war, cloud security will become more integrated with platforms, and despite the rise of multi-cloud environments, don’t expect vendor lock-in to go anywhere.

Diving Deeper Into Cloud

  1. Refcard: Getting Started With Kubernetes
  2. Guide: Orchestrating and Deploying Containers

Who’s Hiring?

Cloud Security Engineer

Location: Santa Clara, California, United States


  • Strong technical skills and the ability to learn and continue to maintain cutting edge skills and knowledge on a variety of technical areas (Unix/Linux, Application Security, Vulnerability Management, Incident Management, etc…)
  • At least three years in a relevant technology field, with at least two years in a technical security role, and the ability to demonstrate and produce examples of your relevant work.
  • Effective written and oral communication; able to break down complex topics and educate others on security concepts.
  • Exhibit passion around both technical security and working with diverse teams to help them understand their responsibilities with security.
  • Ability to collaborate effectively with others and the ability to multi-task and work on multiple projects concurrently.
  • Demonstrate high energy and a sense of urgency and work within potentially compressed time frames.
  • Strong analytical and logistical skills with equally strong attention to details.
  • Strong personal work ethic and integrity required.
  • Assume other work and duties as assigned.

Senior Product Developer
BMC Software

Location: Santa Clara, California, United States


  • 5-7 yrs of development experience; 3 yrs of frontend and 2 yrs of AngularJS
  • Experience in designing and developing SaaS/cloud applications, DevOps, and full software development life cycle
  • Expertise with unit/integration testing, test driven development and related modern best practices/technologies
  • Excellent understanding of web technologies HTML5, CSS3, JavaScript, JQuery
  • Experience with web servers and app servers, internals of browsers and standards compliance of different browsers (IE/FireFox/Chrome)
  • Experience with AngularJS, Jasmine, UI Bootstrap, SASS, Compass, Bourbon, Grunt, Karma
  • Experience with server-side issues such as caching, clustering, persistence, security, SSO, state management, high scalability/availability and failover a plus
  • Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
  • Must have strong decision-making skills; take-charge personality, and the ability to drive a plan to completion

Original Link