OKEx Adds Support for the Vietnamese Dong on Its Fiat-to-Crypto Platform

Malta-based cryptocurrency exchange OKEx has added the Vietnamese Dong (VND) to its Customer-to-Customer (C2C) trading system, enabling Vietnamese customers to exchange their fiat currency for tokens on the platform.

The C2C platform was created by OKEx in 2017 as a peer-to-peer platform where users can buy and sell cryptocurrencies using fiat currencies. OKEx’s Head of Operations Andy Cheung noted that the addition of the VND on the company’s fiat-to-token platform would drive the adoption of cryptocurrencies in Vietnam.

“Vietnam is one of the most important blockchain hubs in Southeast Asia. We see a significant growth in the use of cryptocurrencies in this market,” Cheung added.

Trades made on the C2C platform won’t incur any transaction fees, according to the release. The exchange also plans to introduce market makers (merchants) in the future.

Merchants are verified traders who have enough reserves to facilitate transactions on the platform. The digital asset platform requires a security deposit from qualified merchants before they can be accepted on the platform.

According to the company’s website:

“Market makers help serve a larger number of crypto enthusiasts and hence support a high trading volume. OKEx examines every merchant on the C2C platform. Every merchant on the platform needs to declare their digital assets which must exceed OKEx’s internal requirements to get qualified and to trade.”

OKEx, which recently dislodged Binance as the largest crypto exchange by trade volume, added four stablecoins to the list of assets available for trading on its token-to-token platform in October 2018. The exchange also announced its expansion to the U.S., having secured money transmittal licenses (MTLs) from 20 states across the U.S., excluding New York and Washington, D.C.

This article originally appeared on Bitcoin Magazine.

Original Link

Managing Persisted State for Oracle JET Web Component Variable With Writeback Property

Starting from JET 6.0.0, Composite Components (CCA) are renamed to Web Components (I like the new name more; it sounds simpler to me). In today’s post, I will talk about the Web Component writeback property and why it matters.

All variables (observable or not) defined inside a Web Component are reset when you navigate away from and back to the module where the Web Component is included. This means you can’t store any values inside the Web Component itself, because those values will be lost during navigation. Each time you navigate back to the module, every Web Component used inside it is reloaded: the component’s JS script runs again and its variables are re-initialized, losing their previous values. This behaviour is specific to Web Components; variables created in the owning module are not reset.
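When a value does need to survive navigation, the usual pattern is to keep the state in the owning module’s view model and let the Web Component write changes back to it. For that, the property must be declared writeback-enabled in the component’s metadata. A minimal, illustrative component.json fragment (the property name selectedValue is hypothetical, not from the original post):

```json
{
  "properties": {
    "selectedValue": {
      "type": "string",
      "writeback": true
    }
  }
}
```

In the consuming view you would then bind it with a two-way expression, e.g. selected-value="{{persistedValue}}", so updates land in the module’s observable rather than in the component’s own, disposable state.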

Original Link

ESA targets 2021 for Space Rider demo flight

Space Rider aims to provide Europe with an affordable, independent, reusable end-to-end integrated space transportation system for routine access and return from low orbit. It will be used to transport payloads for an array of applications, orbit altitudes and inclinations. Credit: ESA

ROME — The European Space Agency expects to carry out the qualification flight of the Space Rider spaceplane in 2021 followed by multiple demonstration missions before handing over the program to industry, according to Lucia Linares, head of ESA space transportation strategy and policy.

Speaking at PhiWeek, a five-day conference focusing on the future of Earth observation taking place Nov. 12-16 at the ESA Centre for Earth Observation (ESRIN) in Frascati, Italy, Linares said the private sector showed a lot of interest in the spaceplane during a recent Space Rider workshop.

“We had big interest from commercial companies,” Linares said. “Being it for pharmaceutical applications but also for health issues, for instance for testing how blood circulates in microgravity.”

Space Rider, a continuation of ESA’s Intermediate Experimental Vehicle (IXV), which flew in space in 2015, will be able to carry up to 800 kilograms of payload for orbital missions lasting as long as two months. The platform would allow payload to be exposed to microgravity and the space environment for a longer period, after which it would be returned to the Earth.

“We plan to have a number of demonstration missions to demonstrate a range of capabilities – in orbit demonstration and validation, defense and security applications and, of course, commercial opportunities,” Linares said.

Linares also said that ESA and its partner Arianespace are readying for a proof-of-concept flight of the Small Spacecraft Mission Service, which is set to take place in early 2019.

The mission will test a new smallsat dispenser aboard the Vega rocket, Arianespace’s smallest launcher.

Linares said that after issuing an announcement of opportunities for the demonstration in 2017, the agency received an overwhelming response mostly from the commercial industry.

“We received numerous proposals: 71 responses, 166 spacecraft, of which only 30 are institutional,” Linares said.

“We have selected the aggregate that will fly on the first proof-of-concept flight, which is formed of seven nano and micro satellites, more than half of which are commercial and up to 44 cubesats in up to 12 deployers.”

Linares added that ESA prioritized missions focused on Earth observation to fly on the demo flight.

The agency, she said, is also looking to support commercial micro-launcher developments, in line with the vision of the ESA Director General Jan Woerner, who sees ESA’s role in what he calls the Space 4.0 era as an enabler of private endeavors rather than the dominant funder or lead implementer of projects.

“We want to move with this paradigm to follow the request of commercial actors in Europe and when they have an idea privately funded, which they believe in, then ESA should be there to support them to build in competitiveness in Europe,” Linares said.

Original Link

CNN Sues Trump Administration

News outlet demands return of White House credentials to CNN White House correspondent Jim Acosta. Original Link

Running a Static Iglu Repository on AWS S3

While setting up a Snowplow analytics system, I had to set up a private Iglu repository. The main idea behind this is described on Iglu’s GitHub page, but that manual misses several steps that are really important for building an Iglu repository on AWS infrastructure. I spent a lot of time trying to figure out those steps, so here they are:

  1. Upload your schema data to S3 in the layout that is described here:
  2. Enable static website hosting on the S3 bucket. This can be done from the Properties menu of the S3 bucket.
  3. Amend the S3 bucket policy to allow public read access. It is located in the Permissions section, under the Bucket Policy submenu, and should look like this: { "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::bucket-name/*" } ] }
  4. Create a CORS policy. It is also located in the Permissions section, under the CORS configuration submenu.
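Step 1’s layout matters because Iglu resolves each schema by a fixed key path of the form schemas/{vendor}/{name}/{format}/{version}. A small helper makes the convention concrete (the vendor and schema names below are made up for illustration):

```python
def iglu_key(vendor, name, schema_format, version):
    """Build the S3 object key for a schema, following Iglu's
    schemas/{vendor}/{name}/{format}/{version} static layout."""
    return "/".join(["schemas", vendor, name, schema_format, version])

# A 1-0-0 JSON Schema for a hypothetical custom event:
print(iglu_key("com.acme", "button_click", "jsonschema", "1-0-0"))
# schemas/com.acme/button_click/jsonschema/1-0-0
```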

    Original Link

Network Visibility & Data Veracity – the Keys to Operational Simplicity

Operators need an accurate, up-to-date network and service topology view in order to determine the ‘Eulerian path’ of service configuration and implementation. Original Link

Dish Awarded $90M in Video Piracy Case

SetTV ordered to stop retransmitting Dish’s satellite or OTT TV feeds, hand over SetTV-branded set-top boxes. Original Link

Telkom’s mobile business is flying – subscribers up 50%

Telkom has grown its mobile subscriber base by a stunning 50% in the past year as its aggressively priced, data-led FreeMe plans gain traction among consumers. Original Link

Taking Your Company Private?

[Recently Elon Musk (Tesla CEO) talked about taking his company private. Read how a Zone Leader recalls a situation where a publicly traded company considered going private in an effort to avoid SOx compliance.]

In early August, Tesla CEO Elon Musk indicated he had secured the financial means to take his company from a publicly traded company to a private corporation. This was around the time the stock price had plummeted from just under $371 a share down to just above $290 a share a week earlier. Certainly, the memory of the yearly low of $252.48 was still on his mind as well, not to mention some unfavorable press reports regarding the company’s underlying technology.

Original Link

Mobile Sensor Platform Makes Cities Smarter & Safer

TruWITNESS offers real-time situational awareness for security and public safety operations. Original Link

China may have developed a quantum radar that can spot stealth planes

A defence firm has unveiled a prototype quantum radar. If it works, it could use entangled photons to locate stealth aircraft that normally avoid detection. Original Link

What Lurks Around The Bend In 2019?

Analog Devices’ Brendan O’Dowd expounds his expectations in development and deployment for the coming year. Original Link

AC-LVDT Signal Conditioner Is Tamper Proof

Advanced LVDT signal conditioner handles power-generation applications. Original Link

Terminal Resistors Crank Up Power Options

Wide terminal resistors feature reverse geometry for improved power rating and shock resistance. Original Link

Dual-Channel 4A Gate Driver Packs Protection

STGAP2DM gate driver integrates galvanic isolation and protection features. Original Link

Tiny DK Packs Sensor Fusion, Voice Capture, Bluetooth 5.0 Mesh Networking

Ready-to-go kit delivers BlueNRG-2 SoC ultra-low-power processing and mesh networking capabilities. Original Link

High-Power Regulator Eases Data Center Cooling Requirements

Step-down dc/dc power regulator delivers highest power in its class. Original Link

Accelerated Supercomputers Hit New Highs

NVIDIA GPU-accelerated systems boost performance by nearly 50%. Original Link

Netcracker Nets Bigger Deal With RCN & Grande

Operators have gone live with vendor’s Active Mediation system. Original Link

Wave2Wave’s ROME Robotic Fiber Switch Extends Automation & SDN to Layer 0

Wave2Wave’s ROME is a robotic fiber switch that automates Layer 0, the physical connectivity layer, in data center and telco networks, explains David Wang. Original Link

Hackbot in the Morning

I love coming in to work early; normally I get in around 6:45 a.m. Very few people are here at that time, and the ones that are? They get it. It’s about as close to freedom a working adult can expect.

On a particularly quiet morning very recently, I found myself thinking, “I’ve never made a bot with a Pi… isn’t that one of the things that everybody’s gotta do sooner or later? I wonder what it takes to actually get something up and running.”

So I decided that, with minimal effort and materials expended, I would give it a shot while there was nobody here to tell me otherwise – nothing fancy, just a platform that’ll drive around, to which I can ultimately attach more junk (it started with some junk reclaimed from other projects). Check the end of this post for a consolidated list of items used.

Finished Raspberry Pi bot, "Hackbot".

I started off with a Raspberry Pi 3 I mounted to an acrylic plate to try to give it some weight to hold it down when there’s a slew of cables stuck in it. I hit my pile of junk in my basement for more parts… and I came across this old project:

Remains of the Little-Dude project, a motor driver and two gearmotors.

That there (above) is the remains of my “Little Dude” project I did a video for a while back, using servo wheels glued together as pulleys… good times, good times… Anyway, that’s a pair of gear motors (ratio unknown, whatever – let’s rock!) wired up to a TB6612 dual-channel motor driver good to 1.2A per channel. Sweet! This bot was just about building itself!

I figured I could mount the motors and driver directly onto the acrylic with hot glue, although if anything got too hot, it would fall off. So I decided to really gum up the motors so any heat couldn’t melt all of the hot glue, and to stick just the edges of the driver board. This bot wouldn’t be drawing that much current anyway. If I remember right, the stall current on the motors is around 330mA, and we’ll be nowhere near that driving this little beastie around the office.

When it came to power, for initial testing I’d have all the cables and junk plugged in anyway, but Hackbot needed localized life support if he’s eventually going to romp free… not to mention headless operation. I could have started it with all the junk plugged in, then yanked the keyboard, mouse and monitor out and set it on the floor, but that was definitely a short-term solution. I let that part go while I got the rudiments in place. In the meantime, I had to make a little regulator circuit with a 7805 – that should be good to an amp and a half, so it should cover me for the Pi and the motors. But a two-cell lipo is about 8.4V fully charged, and at a guess I’ll maybe draw 500mA average (totally ballparking here), giving me (8.4V – 5V)*0.5A = 1.7W to dissipate on that TO-220. It would probably get hot unless I heatsink it.
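That back-of-the-envelope dissipation figure is just (Vin − Vout) × Iload, and it’s worth scripting once so it can be re-run if the battery or the load changes (the 8.4V / 5V / 500mA numbers are the ballpark values above):

```python
def linear_reg_dissipation(v_in, v_out, i_load):
    """Power burned in a linear regulator: (Vin - Vout) * Iload, in watts."""
    return (v_in - v_out) * i_load

# Fully charged two-cell lipo, 5V out, ~500mA average draw
print(f"{linear_reg_dissipation(8.4, 5.0, 0.5):.2f} W")  # 1.70 W, toasty for a bare TO-220
```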

Programming this in Python was easy enough, as it’s just GPIO manipulation. I wasn’t going to try and PWM the motor driver, as I had the gear motors and I knew they were slow. If they were faster, I might have had to worry about feeding them PWM to control the speed.

I then created a wiring diagram of sorts:

Fritzing diagram of RasPi, TB6612 motor driver and two gearmotors.

The hookup was simple enough. Each channel on the TB6612 is driven by three pins: [A/B]IN1 and [A/B]IN2, which determine the direction, and PWM[A/B], which lets you adjust the channel’s speed with a PWM signal. There’s also a standby signal (STBY) that enables/disables both channels. That’s seven lines total, and they all get a GPIO line.
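The movement functions later in the post just set those direction pins per channel. Per the TB6612FNG datasheet truth table (assuming PWM held high and STBY high), each channel’s behavior boils down to:

```python
def tb6612_channel(in1, in2):
    """Direction-pin decode for one TB6612FNG channel
    (PWM high, STBY high assumed). CW/CCW swap if you flip the motor leads."""
    if in1 and in2:
        return "short brake"
    if in2:
        return "ccw"
    if in1:
        return "cw"
    return "stop"

# The go_forward() combination below drives IN1 low / IN2 high on both channels:
print(tb6612_channel(False, True))  # ccw
```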

Running motor power through the RasPi wasn’t really the smartest thing I could do, but 1) the motors were originally rated for 6V and the lipo’s going to be more, and 2) I bet it wouldn’t be a problem at this stage (though I wondered if I’d get motor noise back to the RasPi). Checking the TB6612 datasheet showed lots of noise-suppressing diodes on the motor outputs. Clearly, destiny wanted me to do this.

There were still a couple of things I needed to make this viable: wheels and a caster. Oh sure, I could have used those hot-glued servo wheels that were originally on the motors (check the first pic above) for some extra jank-factor. But I felt like I was already up against enough sketchy design, so I just bought them. Pro tip: that caster is a little tight at first. You can loosen it up by clamping the ball between the plastic with pliers and giving it a little squeeze (not too much, as you can’t un-squeeze it).

After some hot glue magic plus a minimum of wiring, the bottom of Hackbot looked like this:

Bottom of Hackbot, motors and driver mounted with hot glue.

The top looks like this:

Top of Hackbot.

Now I had the physical platform close, but I still had to make a little regulator board. I figured it really didn’t need to be anything special, just an L7805 regulator, plus an electrolytic cap of 100uF or so on the output oughta do it. I gave it a two-pin male header to plug into the two-cell lipo, and snatched a micro USB plug from some unsuspecting cable to connect to the 5V out.

Fritzing diagram of regulator circuit.

Then I mounted it on one of the stand-offs holding the RasPi down.

Regulator board mounting on Hackbot.

Another view:

Regulator board mounting on Hackbot.

The PCB that the regulator is mounted on is from an old product line of shaped PCBs that we used to have, that one being a pentagon. It was lying around, so I used it. The idea here was that the battery will sit on top of the RasPi in some fashion (I can work out the specifics later) while plugged into that two-pin male header you see there.

From there, I turned to code. There’s definitely more junk to add, but that’s for another day. The car’s got wheels, right? Let’s go driving!
#Just a little test code to run around the floor a bit
import RPi.GPIO as GPIO
import time

#a couple of delay constants
leg = 2
turn = 0.5

#set up control pins for motor driver
STBY = 31
AIN1 = 33
AIN2 = 35
PWMA = 37
BIN1 = 32
BIN2 = 36
PWMB = 38

GPIO.setmode(GPIO.BOARD) #use board pin numbers

#set the GPIO's to outputs
GPIO.setup(STBY, GPIO.OUT)
GPIO.setup(AIN1, GPIO.OUT)
GPIO.setup(AIN2, GPIO.OUT)
GPIO.setup(PWMA, GPIO.OUT)
GPIO.setup(BIN1, GPIO.OUT)
GPIO.setup(BIN2, GPIO.OUT)
GPIO.setup(PWMB, GPIO.OUT)

#set initial conditions, STBY is low, so no motors running
GPIO.output(STBY, GPIO.LOW)
GPIO.output(PWMA, GPIO.HIGH)
GPIO.output(PWMB, GPIO.HIGH)

#movement is governed by the 4 following functions.
#These will go into their own library, ultimately.
def go_forward(run_time):
    GPIO.output(AIN1, GPIO.LOW)
    GPIO.output(AIN2, GPIO.HIGH)
    GPIO.output(BIN1, GPIO.LOW)
    GPIO.output(BIN2, GPIO.HIGH)
    GPIO.output(STBY, GPIO.HIGH) #start
    time.sleep(run_time)
    GPIO.output(STBY, GPIO.LOW) #stop

def turn_left(run_time):
    GPIO.output(AIN1, GPIO.HIGH)
    GPIO.output(AIN2, GPIO.LOW)
    GPIO.output(BIN1, GPIO.LOW)
    GPIO.output(BIN2, GPIO.HIGH)
    GPIO.output(STBY, GPIO.HIGH) #start
    time.sleep(run_time)
    GPIO.output(STBY, GPIO.LOW) #stop

def turn_right(run_time):
    GPIO.output(AIN1, GPIO.LOW)
    GPIO.output(AIN2, GPIO.HIGH)
    GPIO.output(BIN1, GPIO.HIGH)
    GPIO.output(BIN2, GPIO.LOW)
    GPIO.output(STBY, GPIO.HIGH) #start
    time.sleep(run_time)
    GPIO.output(STBY, GPIO.LOW) #stop

def reverse(run_time):
    GPIO.output(AIN1, GPIO.HIGH)
    GPIO.output(AIN2, GPIO.LOW)
    GPIO.output(BIN1, GPIO.HIGH)
    GPIO.output(BIN2, GPIO.LOW)
    GPIO.output(STBY, GPIO.HIGH) #start
    time.sleep(run_time)
    GPIO.output(STBY, GPIO.LOW) #stop

#Then we make a simple driving pattern and loop
try:
    while True:
        go_forward(leg)
        turn_right(turn)
        go_forward(leg)
        turn_right(turn)
        go_forward(leg)
        turn_left(turn)
        go_forward(leg)
        turn_left(turn)
        reverse(leg)
except KeyboardInterrupt:
    GPIO.cleanup()

To run this, I powered the Pi from my two-cell lipo with all the cables plugged in, just long enough to open a terminal window and run that code. Then, I quick-like yanked all the cords out, put it on the floor and let-r-rip!

Hackbot tears up the rug at SparkFun!

Look at that thing lay it down! Am I right?? Those motors are really slow, and we don’t sell that particular one anymore. I want to say they’re geared 300:1…? Or maybe 300RPM? The concept is hereby proven, and I can change those motors out if I really want to.

Things I could have done better

1) Like I knew it would, the regulator gets hot. Not so hot that I can’t touch it, but I don’t want to for very long. The current draw is around 315mA sitting idle, and around 500mA-ish when we’re driving around, so it’s dissipating over 1.5W when driving. That’s a bit much for a TO-220 package by itself, so I should put a hunk of metal on it.

2) Running motor current through the Pi, while convenient for today, is kinda dumb for the long term. I can get away with it for now because the motor current is relatively low, but the current path between the USB plug and 5V on the header is in no way designed to do this. I should really run the motor voltage directly from the regulator.

3) Headless operation: plugging cables in and out to do this is no good. Pinocchio wanted to be a real boy; so does Hackbot.

4) Hot glue. You know what? The hot glue is working for me. Saved me a ton of time.

Those are the things that gotta happen before anything else gets added. What to add?

What comes next

There’s a bunch of GPIO left available on the Pi header, including SPI, I2C and UART interfaces, so there’s a lot of room for adding junk. I can’t add anything particularly analog, but I don’t think I need to. Anything with a digital interface is fair game – distance/proximity sensors, GPS, environmental sensors… it just depends on what your ultimate goal is, or what sensors you want to play with. This would make an interesting mobile test platform for new gear.

For myself, I’m less about getting tiny bots to do my will, and more about, “That thing is sweet! I wish I was two inches tall so I could get in and drive!” So it’ll probably get a camera at some point, probably an OSD to keep me updated with various info. Then LEDs, Troll hair, googly eyes…

Recap: Materials Used

To extricate the list from my story-telling style:

Break Away Headers – Straight (PRT-00116)

Raspberry Pi 3 (DEV-13825)

SparkFun Motor Driver – Dual TB6612FNG (1A) (ROB-14451)

Voltage Regulator – 5V (COM-00107)

Lithium Ion Battery – 1000mAh 7.4V (PRT-11855)

Ball Caster Metal – 3/8" (ROB-08909)

Electrolytic Decoupling Capacitors – 100uF/25V (COM-00096)

Micro Gearmotor – 130 RPM (6–12V) (ROB-12281)

Wheel 32x7mm (ROB-08901)

Some reading you may appreciate

How to Solder: Through-Hole Soldering

This tutorial covers everything you need to know about through-hole soldering.

TB6612FNG Hookup Guide

Basic hookup guide for the TB6612FNG H-bridge motor driver to get your robot to start moving!

Python Programming Tutorial: Getting Started with the Raspberry Pi

This guide will show you how to write programs on your Raspberry Pi using Python to control hardware.


How to Configure Druid to Use Minio as Deep Storage

Apache Druid (incubating) is a high-performance analytics data store for event-driven data. Druid relies on a distributed filesystem or binary object store for data storage. The most commonly used deep storage implementations are S3 (popular for those on AWS) and HDFS (popular if you already have a Hadoop deployment). In this post, I will show you how to configure non-Amazon S3 deep storage for a Druid cluster, using Minio as the S3-compatible store.


Minio is a high-performance distributed object storage server, designed for large-scale private cloud infrastructure. The Amazon S3 API is the de facto standard for object storage, and Minio implements the Amazon S3 v2/v4 API. It is best suited for storing unstructured data such as photos, videos, log files, backups, and container/VM images. The size of an object can range from a few KB to a maximum of 5 TB.
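With that background, the heart of the setup is pointing Druid’s S3 deep-storage extension at the Minio endpoint instead of Amazon. A sketch of the relevant common.runtime.properties entries (the bucket, keys and endpoint are placeholders; exact property names vary slightly between Druid versions, so check the documentation for yours):

```properties
# Load the S3 extension and use S3-compatible deep storage
druid.extensions.loadList=["druid-s3-extensions"]
druid.storage.type=s3
druid.storage.bucket=druid-deepstore
druid.storage.baseKey=druid/segments

# Credentials and endpoint for the Minio server
druid.s3.accessKey=minio-access-key
druid.s3.secretKey=minio-secret-key
druid.s3.endpoint.url=http://minio.example.com:9000
# Minio is usually addressed path-style rather than virtual-hosted-style
druid.s3.enablePathStyleAccess=true
```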

Original Link

MTN sees ‘great progress’ in resolving Nigeria woes

MTN Group is making “great progress” with Nigerian authorities in talks about US$10.1-billion in claims, encouraging Africa’s largest wireless carrier that it can settle the long-running dispute out of court. Original Link

Mystery “space cow” is a weird new type of powerful space explosion

Months of observations have shown that the strange explosion in space called “the Cow” gets extra power from within, making it a new type of celestial event. Original Link

China developing new launch vehicle for human spaceflight, future moon missions

Model of China's unnamed new launch vehicle on display in Zhuhai in November. Credit: CASC


HELSINKI — China unveiled a heavy-lift launch vehicle it is developing to carry a next-generation crewed spacecraft and power human spaceflight missions beyond low Earth orbit.

A model of the conceptual design of the unnamed launch vehicle was on display last week at the 12th China International Aviation and Aerospace Exhibition in Zhuhai, southern China, along with other exhibits including a full-scale model of the Chinese space station core module.

The launch vehicle plans are being developed by the China Academy of Launch Vehicle Technology (CALT) under the China Aerospace Science and Technology Corporation (CASC), the main contractor for the Chinese space program.

Hu Xiaojun, a researcher at CALT, told press at the Zhuhai Airshow that the new launch vehicle is intended for future crewed spaceflight missions using China’s next-generation crewed spacecraft, including lunar missions.

At 90 meters, the new 2,000-metric-ton launcher will be close to twice as tall as China’s current most powerful rocket, the Long March 5, with the same 5-meter diameter. It will be designed to send 25 metric tons to trans-lunar injection and 70 metric tons to low Earth orbit.

Concepts for future human spaceflight missions were also laid out at a human spaceflight conference in Xi’an, north China, in late October. From slides presented during talks, the conceptual designs of the launch vehicles show clustered engines more reminiscent of the SpaceX Falcon 9 rocket than of typical Long March heritage.

Feasibility studies began in 2016 and the new launcher will adopt new design methods and reusability, with work on key technologies ongoing. It will feature an escape system like that used by the Long March 2F rocket, which is currently used to send China’s astronauts into orbit.

Intentions or commitment?

While apparently in the early stages and with the upcoming Chinese Space Station commanding attention and resources, the designs signal a clear intent to develop capabilities for human spaceflight beyond LEO.

“The development of launch vehicle concepts within China’s space industry is a clear indication that there is serious thought in China about human exploration of the moon,” John Horack, the Neil Armstrong Chair in Aerospace Policy at the Ohio State University, told SpaceNews.

“The Chinese, like the rest of the world, view the moon and its environs as a next place beyond LEO for humans to explore, and they are serious about investigating the technical requirements and obstacles to achieving a return of humans to the moon.”

A scale model of the return capsule for next-generation crewed spacecraft, which will be partly reusable and succeed the three-module Shenzhou spacecraft, was also on display in Zhuhai.

The two-module spacecraft, which will have an uncrewed flight test in 2019, will consist of a service module and a return capsule. Two versions are being developed: one of 14 metric tons and another of around 20 metric tons for lunar missions.

Both the new spacecraft and new crew-rated launcher with their reusability features will be expected to replace the Shenzhou and Long March 2F for low-Earth orbit missions as well as put the moon within reach.

China is also in the early stages of development of a super-heavy-lift launch vehicle, the Long March 9, capable of lifting 140 metric tons to low Earth orbit, 50 tons to Earth-moon transfer orbit, and 44 tons to Earth-Mars transfer orbit.

The Saturn 5-class rocket would be used to launch infrastructure for lunar and other deep space missions. Chinese officials have given a test flight date of 2028, with a first use stated to be a Mars sample-return mission.

“What these designs and technical work do not reveal, at least by themselves, is a solid development schedule, firm funding commitment, dependable infrastructure readiness, and clear political will to go to the moon, especially as an effort that would be solely an enterprise of the Chinese,” Horack says.

“China has its own set of internal competitions for funding, its own internal debates about how best to proceed, its own unique version of public sentiment, strong competition within the aerospace sector between organizations, and more. These may all be markedly different from what we experience in the United States, but they do occur. And they factor significantly in the future trajectory of anything the Chinese may do in space,” Horack concluded.

It is likely that China’s plans will evolve as progress is made and challenges, whether technical, political or otherwise, appear. However, the intended destination appears clear.

New concepts for Chinese human spaceflight missions presented in Xi’an, October 2018. Credit: Wanyzhh


Model of the next-generation return capsule exhibited at Zhuhai Airshow 2018. Credit: CAST


A slide illustrating China’s next-generation crewed spacecraft presented at a human spaceflight conference in Xi’an in October 2018. Credit: Wanyzhh

Original Link

Stan Lee’s legacy isn’t just superheroes but the humanity he gave them

The Marvel comics legend has died aged 95. The creative force behind Spider-Man, Iron Man, The Incredible Hulk, Avengers, Thor, X-Men and many more, Lee’s influence on culture was huge – but let’s not overlook his impact on our humanity Original Link

ESA preps Earth observation satellite with onboard AI processor

Josef Aschbacher, ESA’s director for Earth observation programs

ROME — The European Space Agency plans to launch an Earth observation satellite equipped with an artificially intelligent processor that would enable the spacecraft to make decisions regarding what to image and which data to send to the ground.

The satellite, currently nicknamed BrainSat, will sport Intel’s Myriad visual data processor and launch next year, Josef Aschbacher, ESA’s director for Earth observation programs, told SpaceNews on the sidelines of PhiWeek, a five-day conference focusing on the future of Earth observation at the ESA Centre for Earth Observation in Frascati, Italy.

“The technology has not been flown in space yet. We are working on this right now because we think that’s certainly an experiment that we need to do,” Aschbacher said. “At the moment, we are organizing the details of when and where exactly, but we will see this coming up to space very soon.”

The processor, which requires only one watt of power, is one of the game-changing advances in artificial intelligence and computing technology that the space industry is seeking to harness.

ESA, Aschbacher said, is boosting its artificial intelligence development team and is collaborating with leaders in the sector including Google, NVIDIA, Amazon and SAP.

Last year, the Earth Observation directorate launched what it calls the PhiLab, a future-focused team working on harnessing the potential of artificial intelligence and other disruptive innovations.

Aschbacher said the cooperation between ESA and artificial intelligence developers is mutually beneficial. The agency’s Earth observation satellites produce 150 terabytes of data per day — a massive data set that can’t be conveniently processed by human analysts but which can at the same time serve as an excellent source of training data for machine learning and artificial intelligence algorithms.

Speaking at the same event, Planet CEO Will Marshall said that advancements in artificial intelligence and computing technology are finally unlocking the full potential of Earth observation, enabling companies to take full advantage of the resources that are currently in orbit.

“The goal is to be able to see deforestation in the morning and be able to tell the officials the same day to send someone to go there immediately instead of notifying them that something happened a month later,” he said. “It requires a lot of translation. It’s not just the imagery. We are flooded with imagery at this point.”

Last year, Planet achieved its goal of imaging every spot on Earth once a day. Now, Marshall said, the San Francisco-headquartered firm is focused on a new goal: indexing everything that’s happening on the Earth every day using artificial intelligence.

“Basically we are trying to turn the massive amount of images that we get every day into a map of objects and locations of the objects,” he said.  “Then we can search, for example, how many houses are there in Italy, what’s happening around the borders or how ships are moving in the oceans.”

Artificial intelligence, Marshall said, will solve the ‘last mile problem’ of getting the right information into the hands of the right people to help them make the right decisions when it comes to natural disasters and emergencies, climate change or geopolitical issues.

Aschbacher said ESA hopes to create an AI-powered system that would bring together images from all Earth observation platforms operated by the agency as well as European national and commercial providers. This task will be challenging since data come in different formats, using different sensors and calibration.

“In the future, data from ESA assets will be integrated, but also assets across Europe because we want to create a coordinated approach to some societal questions,” he said. “It’s really much better to connect them and make sure that this is really not just a data source but a connected data source in order to better utilize various data sources both from big satellites and small satellites combined.”

Aschbacher said he expects the development of artificial intelligence systems will be a large part of the agency’s next Earth Observation Envelope Programme that will be decided on at the ESA ministerial conference in November 2019. This meeting of ESA member states takes place every three years, and is where budgets and future activities of the agency are approved.

“At the moment the funding [for AI] is not huge on our side,” he said. “But we expect AI as well as machine learning and other emerging technologies such as blockchain to be a bigger part of the next envelope program.”


Incognito Interview: Scaling Fiber-Based Services & How to Accelerate Your Journey

When it comes to delivering new IP-based services over fiber, there are no prizes for being second to market. Ragu Masilamany, VP of Products at Incognito, sits down with Light Reading to discuss the business imperative for fiber, how fixed-line operators are approaching fiber-based service introductions, the challenges they face, and what’s needed for their success.

Implementing Data-Driven Testing Using Google Sheets

Data-Driven Testing (DDT) is an approach, or in other words an architecture, for creating automated tests. In the previous post, "How to Implement Data-Driven Testing in your JMeter Test", DDT implementation was described using Excel spreadsheets. In this post, we will talk about how to implement DDT using Google Sheets with Apache JMeter™.

For this purpose, the Google Sheets API will be used. This API will allow us to read and write data from the Google spreadsheet. We will read the input parameter values for the tested API (this API will be described below) from the spreadsheet and write the result in the same spreadsheet.
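As a rough illustration of the API shape (a hedged Python sketch, not JMeter code: the spreadsheet ID, API key, and helper names are placeholders), reading a range through the Sheets API v4 returns a JSON body whose "values" array holds the rows, with the first row typically serving as the parameter names:

```python
SHEETS_API = "https://sheets.googleapis.com/v4/spreadsheets"

def values_url(spreadsheet_id, a1_range, api_key):
    # Read-only endpoint for a range; the response is JSON containing "values".
    return "%s/%s/values/%s?key=%s" % (SHEETS_API, spreadsheet_id, a1_range, api_key)

def rows_as_dicts(api_response):
    # The first row is treated as the header row; every following row
    # becomes one set of input parameters for a test iteration.
    values = api_response.get("values", [])
    if not values:
        return []
    headers, *rows = values
    return [dict(zip(headers, row)) for row in rows]
```

In a JMeter test the equivalent request would be issued by an HTTP sampler, with a post-processor mapping each returned row onto the variables of one iteration.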


Private by Design: How we built Firefox Sync

What is Firefox Sync and why would you use it

That shopping rabbit hole you started on your laptop this morning? Pick up where you left off on your phone tonight. That dinner recipe you discovered at lunchtime? Open it on your kitchen tablet, instantly. Connect your personal devices, securely. – Firefox Sync

Firefox Sync lets you share your bookmarks, browsing history, passwords and other browser data between different devices, and send tabs from one device to another. It’s a feature that millions of our users take advantage of to streamline their lives and how they interact with the web.

But on an Internet where sharing your data with a provider is the norm, we think it’s important to highlight the privacy aspects of Firefox Sync.

Firefox Sync by default protects all your synced data so Mozilla can’t read it. We built Sync this way because we put user privacy first. In this post, we take a closer look at some of the technical design choices we made and why.

When building a browser and implementing a sync service, we think it’s important to look at what one might call ‘Total Cost of Ownership’.  Not just what users get from a feature, but what they give up in exchange for ease of use.

We believe that by making the right choices to protect your privacy, we’ve also lowered the barrier to trying out Sync. When you sign up and choose a strong passphrase, your data is protected from both attackers and from Mozilla, so you can try out Sync without worry. Give it a shot, it’s right up there in the menu bar!

Sign in to Sync Button in the Firefox Menu

Why Firefox Sync is safe

Encryption allows one to protect data so that it is entirely unreadable without the key used to encrypt it. The math behind encryption is strong, has been tested for decades, and every government in the world uses it to protect its most valuable secrets.

The hard part of encryption is that key. What key do you encrypt with, where does it come from, where is it stored, and how does it move between places? Lots of cloud providers claim they encrypt your data, and they do. But they also have the key! While the encryption is not meaningless, it is a small measure, and does not protect the data against the most concerning threats.

The encryption key is the essential element. The service provider must never receive it – even temporarily – and must never know it. When you sign into your Firefox Account, you enter a username and passphrase, which are sent to the server. How is it that we can claim to never know your encryption key if that’s all you ever provide us?  The difference is in how we handle your passphrase.

A typical login flow for an internet service is to send your username and passphrase to the server, where the server hashes the passphrase, compares it to a stored hash and, if they match, sends you your data. (Hashing converts a password into a fixed-length, unreadable string from which the original cannot practically be recovered.)

Typical Web Provider Login Flow
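That conventional flow can be sketched in a few lines of Python (illustrative only – the register/login helpers and iteration count are made-up, not any provider's actual code). The point is that the server necessarily receives the plaintext passphrase, even if only briefly:

```python
import hashlib
import hmac
import os

# In-memory stand-in for the provider's user table: username -> (salt, hash).
_users = {}

def register(user, passphrase):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    _users[user] = (salt, digest)

def login(user, passphrase):
    # The server receives the plaintext passphrase (over TLS), re-hashes it,
    # and compares in constant time -- so the provider *could* read it.
    salt, stored = _users[user]
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```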

The crux of the difference in how we designed Firefox Accounts, and Firefox Sync (our underlying syncing service), is that you never send us your passphrase. We transform your passphrase on your computer into two different, unrelated values. With one value, you cannot derive the other [0]. We send an authentication token, derived from your passphrase, to the server as the password-equivalent. And the encryption key derived from your passphrase never leaves your computer.

Firefox Sync Login Flow

Interested in the technical details? We use 1000 rounds of PBKDF2 to derive your passphrase into the authentication token [1]. On the server, we additionally hash this token with scrypt (parameters N=65536, r=8, p=1) [2] to make sure our database of authentication tokens is even more difficult to crack.

We derive your passphrase into an encryption key using the same 1000 rounds of PBKDF2. It is domain-separated from your authentication token by using HKDF with separate info values. We use this key to unwrap an encryption key (which you generated during setup and which we never see unwrapped), and that encryption key is used to protect your data. We use the key to encrypt your data using AES-256 in CBC mode, protected with an HMAC [3].
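A minimal Python sketch of this derive-then-separate pattern (assumptions: the "authToken"/"encKey" info labels are illustrative, not the actual Firefox Accounts protocol constants, and only HKDF's expand step is shown): one PBKDF2 stretch of the passphrase, then two unrelated values via HKDF with distinct info strings:

```python
import hashlib
import hmac

def hkdf_expand(key, info, length=32):
    # Single-block HKDF-Expand (RFC 5869); sufficient for 32-byte outputs.
    return hmac.new(key, info + b"\x01", hashlib.sha256).digest()[:length]

def derive(passphrase, salt):
    # One PBKDF2 stretch on the client, then two domain-separated values:
    # only auth_token is ever sent to the server; enc_key stays local.
    stretched = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 1000)
    auth_token = hkdf_expand(stretched, b"authToken")  # illustrative label
    enc_key = hkdf_expand(stretched, b"encKey")        # illustrative label
    return auth_token, enc_key
```

Because HKDF is a one-way function keyed on distinct info values, learning the authentication token tells the server nothing useful about the encryption key.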

This cryptographic design is solid – but the constants need to be updated. One thousand rounds of PBKDF2 can be improved, and we intend to do so in the future (Bug 1320222). This token is only ever sent over an HTTPS connection (with preloaded HPKP pins) and is not stored, so when we initially developed this and needed to support low-power, low-resource devices, a trade-off was made. AES-CBC + HMAC is acceptable – it would be nice to upgrade this to an authenticated mode sometime in the future.

Other approaches

This isn’t the only approach to building a browser sync feature. There are at least three other options:

Option 1: Share your data with the browser maker

In this approach, the browser maker is able to read your data, and use it to provide services to you. For example, when you sync your browser history in Chrome, it will automatically go into your Web & App Activity unless you’ve changed the default settings. As Google Chrome Help explains, “Your activity may be used to personalize your experience on other Google products, like Search or ads. For example, you may see a news story recommended in your feed based on your Chrome history.” [4]

Option 2: Use a separate password for sign-in and encryption

We developed Firefox Sync to be as easy to use as possible, so we designed it from the ground up to derive an authentication token and an encryption key – and we never see the passphrase or the encryption key. One cannot safely derive an encryption key from a passphrase if the passphrase is sent to the server.

One could, however, add a second passphrase that is never sent to the server, and encrypt the data using that. Chrome provides this as a non-default option [5]. You can sign in to sync with your Google Account credentials; but you choose a separate passphrase to encrypt your data. It’s imperative you choose a separate passphrase though.

All in all, we don’t care for the design that requires a second passphrase. This approach is confusing to users. It’s very easy to choose the same (or similar) passphrase and negate the security of the design. It’s hard to say which is more confusing: requiring a second passphrase or making it optional! Making it optional means it will be used very rarely. We don’t believe users should have to opt in to privacy.

Option 3: Manual key synchronization

The key (pun intended) to auditing a cryptographic design is to ask about the key: “Where does it come from? Where does it go?” With the Firefox Sync design, you enter a passphrase of your choosing and it is used to derive an encryption key that never leaves your computer.

Another option for Sync is to remove user choice, and provide a passphrase for you (that never leaves your computer). This passphrase would be secure and unguessable – which is an advantage, but it would be near-impossible to remember – which is a disadvantage.

When you want to add a new device to sync to, you’d need your existing device nearby in order to manually read and type the passphrase into the new device. (You could also scan a QR code if your new device has a camera).
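A sketch of what such a generated passphrase might look like (the alphabet and grouping here are assumptions for illustration, not Firefox's original recovery-key format): a cryptographically random string, chunked so it is easier to read off one device and type into another:

```python
import secrets

# 32-symbol alphabet (5 bits per character) with look-alike characters removed.
ALPHABET = "abcdefghijkmnpqrstuvwxyz23456789"

def generate_sync_key(groups=6, group_len=4):
    # 6 groups x 4 chars x 5 bits = 120 bits of entropy: unguessable,
    # but something a user must transcribe rather than remember.
    chars = [secrets.choice(ALPHABET) for _ in range(groups * group_len)]
    return "-".join(
        "".join(chars[i * group_len:(i + 1) * group_len]) for i in range(groups)
    )
```

The grouping costs nothing cryptographically; it only reduces transcription errors, which is the whole usability battle with this option.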

Other Browsers

Overall, Sync works the way it does because we feel it’s the best design choice. Options 1 and 2 don’t provide thorough user privacy protections by default. Option 3 results in lower user adoption and thus reduces the number of people we can help (more on this below).

As noted above, Chrome implements Option 1 by default, which means unless you change the settings before you enable sync, Google will see all of your browsing history and other data, and use it to market services to you. Chrome also implements Option 2 as an opt-in feature.

Both Opera and Vivaldi follow Chrome’s lead, implementing Option 1 by default and Option 2 as an opt-in feature.

Brave, also a privacy-focused browser, has implemented Option 3. And, in fact, Firefox also implemented a form of Option 3 in its original Sync protocol, but we changed our design in April 2014 (Firefox 29) in response to user feedback [6]. For example, our original design (and Brave’s current design) makes it much harder to regain access to your data if you lose your device or it gets stolen. Passwords or passphrases make that experience substantially easier for the average user, and significantly increased Sync adoption by users.

Brave’s sync protocol has some interesting wrinkles [7]. One distinct minus is that you can’t change your passphrase if it is stolen by malware. Another interesting wrinkle is that Brave does not keep track of how many or what types of devices you have. This is a nuanced security trade-off: having less information about the user is always desirable. The downside is that Brave can’t let you detect when a new device begins receiving your sync data, or let you deauthorize it. We respect Brave’s decision. In Firefox, however, we have chosen to provide this additional security feature for users (at the cost of knowing more about their devices).


We designed Firefox Sync to protect your data – by default – so Mozilla can’t read it. We built it this way – despite trade-offs that make development and offering features more difficult – because we put user privacy first. At Mozilla, this priority is a core part of our mission to “ensure the Internet is a global public resource… where individuals can shape their own experience and are empowered, safe and independent.”

[0] It is possible to use one to guess the other, but only if you choose a weak password.

[1] You can find more details in the full protocol specification or by following the code starting at this point. There are a few details we have omitted to simplify this blog post, including the difference between the kA and kB keys, and application-specific subkeys.

[2] The server hashing code is located here.

[3] The encryption code can be seen here.

[4] Section “Use your Chrome history to personalize Google”.

[5] Chrome 71 says “For added security, Google Chrome will encrypt your data” and describes these two options as “Encrypt synced passwords with your Google username and password” and “Encrypt synced data with your own sync passphrase”. Despite this wording, only the sync passphrase option protects your data from Google.

[6] One of the original engineers of Sync has written two blog posts about the transition to the new sync protocol and why we did it. If you’re interested in the usability aspects of cryptography, we highly recommend reading them to see what we learned.

[7] You can read more about Brave sync on Brave’s Design page.

The post Private by Design: How we built Firefox Sync appeared first on Mozilla Hacks – the Web developer blog.


Red Hat Releases Red Hat OpenStack Platform 14 and a New Virtual Office Solution, ownCloud Enterprise Integrates with SUSE Ceph/S3 Storage, Run a Linux Shell on iOS with iSH and Firefox Launches Two New Test Pilot Features

News briefs for November 13, 2018.

Red Hat this morning released Red Hat OpenStack Platform 14, delivering “enhanced Kubernetes integration, bare metal management and additional automation”. According to the press release, it will be available in the coming weeks via the Red Hat Customer Portal and as a component of both Red Hat Cloud Infrastructure and Red Hat Cloud Suite.

Red Hat also announced a new virtual office solution today. The solution “provides a blueprint for modernizing telecommunications operations at the network edge via an open, software-defined infrastructure platform”. Learn more about it here.

ownCloud yesterday announced SUSE Enterprise Storage Ceph/S3 API as a certified storage backend for ownCloud Enterprise Edition. The press release notes that the “SUSE Ceph/S3 Storage integration reduces dependency on proprietary hardware by replacing an organization’s storage infrastructure with an open, unified and smarter software-defined storage solution”. For more information on ownCloud, visit here.

There’s a new project called iSH that lets you run a Linux shell on an iOS device. Bleeping Computer reports that the project is available as a TestFlight beta for iOS devices and is based on Alpine Linux. It allows you to “transfer files, write shell scripts, or simply to use Vi to develop code or edit files”. You first need to install the TestFlight app, and then you can start testing iSH.

The Firefox Test Pilot Team announces two new features: Price Wise and Email Tabs. Price Wise lets you add products to your Price Watcher list, and you’ll receive desktop notifications whenever the price drops. With Email Tabs, you can “select and send links to one or many open tabs all within Firefox in a few short steps, making it easier than ever to share your holiday gift list, Thanksgiving recipes or just about anything else”. See the Mozilla blog for details.


Insight TV Tunes In Technicolor’s HDR Tech

The 4K programmer is using Technicolor’s Advanced HDR system for higher-quality video.

You.i TV Raises $23M More

The round in the multiscreen app company was led by Causeway Media Partners.

MySQL Database Horizontal Scaling Solution

The constant development of cloud technologies, maturing storage and computing resources, and steadily decreasing costs make it easier for enterprises to develop, deploy, and provide services. This benefits rapidly developing small and medium-sized enterprises, which can respond to increasing traffic simply by adding new machines to the application cluster.

However, as an enterprise grows, a bottleneck remains that cannot be removed by simply piling up machines: the performance ceiling caused by a growing database. One effective measure against this ceiling is to shard the database, keeping the data volume of a single table below five million rows.
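As a toy illustration of the routing side of sharding (a Python sketch; the shard count and table naming are assumptions, and real systems must also plan for resharding), a stable hash of the sharding key picks the physical table:

```python
import zlib

N_SHARDS = 8  # assumed number of physical shards

def shard_for(user_id):
    # crc32 is stable across runs and platforms, so a given user
    # always routes to the same shard.
    return zlib.crc32(str(user_id).encode()) % N_SHARDS

def table_for(user_id, base="orders"):
    # e.g. orders_00 ... orders_07, each kept under the row-count ceiling.
    return "%s_%02d" % (base, shard_for(user_id))
```

Modulo-based routing is simple, but it makes changing N_SHARDS expensive; consistent hashing or a lookup table is the usual answer when shards must be added later.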


Inmarsat preps new maritime product to fend off KVH competition

Rupert Pearce, Inmarsat CEO

WASHINGTON — Satellite operator Inmarsat said it will release a new maritime connectivity product around year’s end to stem the flow of customers to competitor KVH.

Rupert Pearce, Inmarsat’s CEO, said Nov. 8 that the company is taking steps to retain customers in its largest market segment, having identified broadband for social use among crews as the missing component of Inmarsat’s maritime service offering.

With the new product, called Crew Xpress, Pearce said Inmarsat is confident it will win the “fight for the hearts and minds” of its low-data-rate L-band customers as increasing numbers of them shift to higher data-rate connectivity using Very Small Aperture Terminals, or VSATs.

KVH has emerged as Inmarsat’s principal competitor in maritime VSAT connectivity, winning customers with a network built on capacity from Intelsat and Sky Perfect JSAT.

Pearce, in an earnings call, said Inmarsat continues to lose a larger than expected number of its L-band FleetBroadband customers to KVH and other VSAT competitors, but is “working extremely hard to address this competitive dynamic.”

Inmarsat’s third quarter maritime revenue decreased 5.7 percent to $135 million compared to the same period last year. Despite the loss, Inmarsat said its share of the maritime VSAT market has increased to around 25 percent, up from around 15 percent two years ago.

Pearce said customers across Inmarsat’s four markets — maritime, aviation, government and enterprise — are using the company’s satellites for an increasing level of mission-critical connectivity services. Maritime crew welfare doesn’t fall into that category, he said, but the changes underway provide an opportunity for Inmarsat to hone its Fleet Xpress products that support connectivity at sea with the company’s four Global Xpress Ka-band satellites.

“If you ask me whether I expect Fleet Xpress as it currently is today to be competitive in five years time if we do nothing, the answer is of course not,” he said. “I’d be delighted if my competitors disagree with me because I will eviscerate them in the market through improvements to Fleet Xpress. It’s a game in which we continue to have to invest.”

Inmarsat’s total revenue for the three months ended Sept. 30 grew by 3.7 percent to $369.3 million, with inflight connectivity and government services both increasing compared to the same time last year.

Aviation was Inmarsat’s fastest growing segment, increasing 34 percent year-over-year to $68.2 million for the quarter.

Pearce said Panasonic Avionics’ September agreement to use Inmarsat exclusively for Ka-band services validates Inmarsat’s use of those frequencies over the more commonly used Ku-band.

Pearce said the 10-year Panasonic Avionics partnership marked a sea-change in industry thinking, though competitors like Intelsat and SES, which provide substantial amounts of Ku-band capacity to Panasonic Avionics, Gogo and Global Eagle Entertainment, would likely disagree.

“You’ll see that the industry is beginning to realize that Ku-band does not cut it, that Ku-band cannot be something to rely on for future services for inflight connectivity, and that is going to cause people to take a pause and change their strategies,” he said.

Panasonic Avionics has customized Ku-band payloads on satellites operated by Intelsat, SES and Eutelsat, and has another under construction on Apsatcom’s Apstar-6D satellite that launches next year. The increasing prevalence of Ku- and Ka-band has led companies including Hughes, Gilat and Viasat to develop aviation antennas that support both frequencies.

Pearce said Inmarsat continues to see a market for L-band services despite the increased demand for satellite communications that can support higher speed services. One of those big opportunities is the Internet of Things, though it is too early to gauge revenue potential there, he said.


Deciphering Data to Uncover Hidden Insights: Understanding the Data

" The best vision is insight" – Malcolm Forbes.

When it comes to data analytics for enterprises, nothing is more important than making accurate and reliable inferences from data. It is no surprise that enterprises are investing heavily in big data analytics as they can reap large profits with accurate insights. However, this is often easier said than done. Data collected from real-world applications is affected by many variables, making data prediction challenging. Regardless, data analytics remains essential for many, if not all, businesses around the world.

In this article, I will walk you through the process of deciphering data for uncovering hidden insights.


Sending Emails Asynchronously Through AWS SES


Leonardo Losoviz

November 13, 2018

Most applications send emails to communicate with their users. Transactional emails are those triggered by the user’s interaction with the application, such as welcoming a new user after registering on the site, giving the user a link to reset the password, or attaching an invoice after the user makes a purchase. All these cases typically require sending only one email to the user. In some other cases, though, the application needs to send many more emails, such as when a user posts new content on the site and all her followers (which, on a platform like Twitter, may amount to millions of users) receive a notification. In this latter situation, if not architected properly, sending emails may become a bottleneck in the application.

That is what happened in my case. I have a site that may need to send 20 emails after some user-triggered actions (such as notifications to all of a user’s followers). Initially, it relied on sending the emails through a popular cloud-based SMTP provider (such as SendGrid, Mandrill, Mailjet or Mailgun); however, the response back to the user would take seconds. Evidently, connecting to the SMTP server to send those 20 emails was slowing the process down significantly.

After inspection, I found out the sources of the problem:

  1. Synchronous connection
    The application connects to the SMTP server and waits for an acknowledgment, synchronously, before continuing the execution of the process.
  2. High latency
    While my server is located in Singapore, the SMTP provider I was using has its servers located in the US, making the roundtrip connection take considerable time.
  3. No reusability of the SMTP connection
    When calling the function to send an email, the function sends the email immediately, creating a new SMTP connection at that moment (it doesn’t collect all the emails and send them together at the end of the request, under a single SMTP connection).

Because of #1, the time the user must wait for the response is tied to the time it takes to send the emails. Because of #2, the time to send one email is relatively high. And because of #3, the time to send 20 emails is 20 times the time it takes to send one email. While sending only one email may not make the application terribly slower, sending 20 emails certainly does, affecting the user experience.

Let’s see how we can solve this issue.

Paying Attention To The Nature Of Transactional Emails

Before anything, we must notice that not all emails are equal in importance. We can broadly categorize emails into two groups: priority and non-priority emails. For instance, if the user forgot the password to access the account, she will expect the email with the password reset link to arrive in her inbox immediately; that is a priority email. In contrast, an email notifying that somebody we follow has posted new content does not need to arrive in the user’s inbox immediately; that is a non-priority email.

The solution must optimize how these two categories of emails are sent. Assuming that there will only be a few (maybe 1 or 2) priority emails to be sent during the process, and the bulk of the emails will be non-priority ones, then we design the solution as follows:

  • Priority emails can simply avoid the high latency issue by using an SMTP provider located in the same region where the application is deployed. In addition to good research, this involves integrating our application with the provider’s API.
  • Non-priority emails can be sent asynchronously, and in batches where many emails are sent together. Implemented at the application level, it requires an appropriate technology stack.

Let’s define the technology stack to send emails asynchronously next.

Defining The Technology Stack

Note: I have decided to base my stack on AWS services because my website is already hosted on AWS EC2. Otherwise, I would have an overhead from moving data among several companies’ networks. However, we can implement our solution using other cloud service providers too.

My first approach was to set up a queue. With a queue, the application would no longer send the emails itself; instead, it would publish a message with the email content and metadata to a queue, and another process would pick up the messages from the queue and send the emails.

However, when checking the queue service from AWS, called SQS, I decided that it was not an appropriate solution, because:

  • It is rather complex to set-up;
  • A standard queue message can store only up to 256 KB of information, which may not be enough if the email has attachments (an invoice, for instance). And even though it is possible to split a large message into smaller messages, the complexity grows even more.

Then I realized that I could imitate the behavior of a queue through a combination of other AWS services, S3 and Lambda, which are much easier to set up. S3, a cloud object storage service for storing and retrieving data, can act as the repository to which the messages are uploaded, and Lambda, a computing service that runs code in response to events, can pick up a message and execute an operation with it.

In other words, we can set-up our email sending process like this:

  1. The application uploads a file with the email content + metadata to an S3 bucket.
  2. Whenever a new file is uploaded into the S3 bucket, S3 triggers an event containing the path to the new file.
  3. A Lambda function picks the event, reads the file, and sends the email.
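The three steps above can be sketched in Python (a hedged outline: the key layout and field names are invented for illustration, and the actual S3/SES calls – boto3’s put_object, get_object and send_email – are left as comments):

```python
import json
import time
import uuid

def email_payload(to, subject, message, headers=None, attachments=None):
    # Step 1: the file content the application uploads to S3
    # (e.g. via boto3: s3.put_object(Bucket=..., Key=s3_key(), Body=...)).
    return json.dumps({
        "to": to,
        "subject": subject,
        "message": message,
        "headers": headers or {},
        "attachments": attachments or [],
    })

def s3_key():
    # Unique object key so concurrent uploads never collide.
    return "emails/%d-%s.json" % (int(time.time()), uuid.uuid4().hex)

def handler(event, context=None):
    # Step 3: the Lambda entry point. S3 "ObjectCreated" events carry the
    # bucket and key of the new file; reading the object and sending via SES
    # (boto3: s3.get_object(...), ses.send_email(...)) is elided here.
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]
```

Step 2 needs no application code: it is the S3 event notification configured on the bucket to invoke the Lambda function on ObjectCreated events.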

Finally, we have to decide how to send emails. We can either keep using the SMTP provider that we already have, having the Lambda function interact with their APIs, or use the AWS service for sending emails, called SES. Using SES has both benefits and drawbacks:

  • Very simple to use from within AWS Lambda (it just takes 2 lines of code).
  • It is cheaper: Lambda fees are computed based on the amount of time it takes to execute the function, so connecting to SES from within the AWS network will take a shorter time than connecting to an external server, making the function finish earlier and costing less. (Unless SES is not available in the same region where the application is hosted; in my case, because SES is not offered in the Asian Pacific (Singapore) region, where my EC2 server is located, then I might be better off connecting to some Asia-based external SMTP provider).
  • Few stats for monitoring sent emails are provided, and adding more powerful ones requires extra effort (e.g., tracking what percentage of emails were opened, or which links were clicked, must be set up through AWS CloudWatch).
  • If we keep using the SMTP provider for sending the priority emails, then we won’t have our stats all together in 1 place.

For simplicity, in the code below we will be using SES.

We have then defined the logic of the process and stack as follows: The application sends priority emails as usual, but for non-priority ones, it uploads a file with email content and metadata to S3; this file is asynchronously processed by a Lambda function, which connects to SES to send the email.

Let’s start implementing the solution.

Differentiating Between Priority And Non-Priority Emails

In short, this all depends on the application, so we need to decide on an email-by-email basis. I will describe a solution I implemented for WordPress, which requires some hacks to work around the constraints of the wp_mail function. For other platforms the strategy below will work too, though there may well be better strategies that don’t require hacks.

The way to send an email in WordPress is by calling the function wp_mail, and we don’t want to change that (e.g., by calling either wp_mail_synchronous or wp_mail_asynchronous), so our implementation of wp_mail will need to handle both synchronous and asynchronous cases, and will need to know to which group the email belongs. Unluckily, wp_mail doesn’t offer any extra parameter from which we could assess this information, as can be seen from its signature:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() )

Then, in order to find out the category of the email, we add a hacky solution: by default, an email belongs to the priority group; if $to contains a particular email address (e.g., nonpriority@asynchronous.mail), or if $subject starts with a special string (e.g., “[Non-priority!]”), then it belongs to the non-priority group (and we remove the corresponding email address or string from the subject). wp_mail is a pluggable function, so we can override it simply by implementing a new function with the same signature in our functions.php file. Initially, it contains the same code as the original wp_mail function, located in file wp-includes/pluggable.php, to extract all parameters:

if ( !function_exists( 'wp_mail' ) ) :
function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

    $atts = apply_filters( 'wp_mail', compact( 'to', 'subject', 'message', 'headers', 'attachments' ) );
    if ( isset( $atts['to'] ) ) {
        $to = $atts['to'];
    }
    if ( !is_array( $to ) ) {
        $to = explode( ',', $to );
    }
    if ( isset( $atts['subject'] ) ) {
        $subject = $atts['subject'];
    }
    if ( isset( $atts['message'] ) ) {
        $message = $atts['message'];
    }
    if ( isset( $atts['headers'] ) ) {
        $headers = $atts['headers'];
    }
    if ( isset( $atts['attachments'] ) ) {
        $attachments = $atts['attachments'];
    }
    if ( ! is_array( $attachments ) ) {
        $attachments = explode( "\n", str_replace( "\r\n", "\n", $attachments ) );
    }
    // Continue below...

And then we check if the email is non-priority, in which case we fork to a separate logic under function send_asynchronous_mail; otherwise, we keep executing the same code as in the original wp_mail function:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

    // Continued from above...
    $hacky_email = "nonpriority@asynchronous.mail";
    if (in_array($hacky_email, $to)) {

        // Remove the hacky email from $to
        array_splice($to, array_search($hacky_email, $to), 1);

        // Fork to asynchronous logic
        return send_asynchronous_mail($to, $subject, $message, $headers, $attachments);
    }

    // Continue all code from original function in wp-includes/pluggable.php
    // ...

In our function send_asynchronous_mail, instead of uploading the email straight to S3, we simply add the email to a global variable $emailqueue, from which we can upload all emails together to S3 in a single connection at the end of the request:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {

    global $emailqueue;
    if (!$emailqueue) {
        $emailqueue = array();
    }

    // Add email to queue. Code continues below...

We can upload one file per email, or we can bundle many emails into a single file. Since $headers contains the email meta (the from, content-type, charset, CC, BCC, and reply-to fields), we can group emails together whenever they share the same $headers. This way, these emails can all be uploaded in the same file to S3, and the $headers meta information will be included only once in the file, instead of once per email:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {

    // Continued from above...
    // Add email to the queue
    $emailqueue[$headers] = $emailqueue[$headers] ?? array();
    $emailqueue[$headers][] = array(
        'to' => $to,
        'subject' => $subject,
        'message' => $message,
        'attachments' => $attachments,
    );

    // Code continues below...

Finally, function send_asynchronous_mail returns true. Please notice that this code is hacky: true would normally mean that the email was sent successfully, but in this case, it hasn’t even been sent yet, and it could still fail. Because of this, the function calling wp_mail must not treat a true response as “the email was sent successfully,” but as an acknowledgment that it has been enqueued. That’s why it is important to restrict this technique to non-priority emails, so that if sending fails, the process can keep retrying in the background, and the user will not expect the email to already be in her inbox:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {

    // Continued from above...
    // That's it!
    return true;
}

Uploading Emails To S3

In my previous article “Sharing Data Among Multiple Servers Through AWS S3”, I described how to create a bucket in S3, and how to upload files to the bucket through the SDK. All code below continues the implementation of a solution for WordPress, hence we connect to AWS using the SDK for PHP.

We can extend from the abstract class AWS_S3 (introduced in my previous article) to connect to S3 and upload the emails to a bucket “async-emails” at the end of the request (triggered through wp_footer hook). Please notice that we must keep the ACL as “private” since we don’t want the emails to be exposed to the internet:

class AsyncEmails_AWS_S3 extends AWS_S3 {

    function __construct() {

        // Send all emails at the end of the execution
        add_action("wp_footer", array($this, "upload_emails_to_s3"), PHP_INT_MAX);
    }

    protected function get_acl() {
        return "private";
    }

    protected function get_bucket() {
        return "async-emails";
    }

    function upload_emails_to_s3() {

        $s3Client = $this->get_s3_client();

        // Code continued below...
    }
}
new AsyncEmails_AWS_S3();

We start iterating through the pairs of headers => emaildata saved in global variable $emailqueue, and obtain a default configuration from function get_default_email_meta for when the headers are empty. In the code below, I only retrieve the “from” field from the headers (the code to extract all headers can be copied from the original wp_mail function):

class AsyncEmails_AWS_S3 extends AWS_S3 {

    public function get_default_email_meta() {

        return array(
            'from' => sprintf(
                '%s <%s>',
                get_bloginfo('name'),
                get_bloginfo('admin_email')
            ),
            'contentType' => 'text/html',
            'charset' => strtolower(get_option('blog_charset'))
        );
    }

    public function upload_emails_to_s3() {

        // Code continued from above...
        global $emailqueue;
        foreach ($emailqueue as $headers => $emails) {

            $meta = $this->get_default_email_meta();

            // Retrieve the "from" from the headers
            $regexp = '/From:\s*(([^\<]*?) <)?([^\>]+)>?\s*\n/i';
            if (preg_match($regexp, $headers, $matches)) {
                $meta['from'] = sprintf(
                    '%s <%s>',
                    $matches[2],
                    $matches[3]
                );
            }

            // Code continued below...
        }
    }
}

Finally, we upload the emails to S3. We decide how many emails to upload per file with the intention of saving money. Lambda functions are charged based on the time they need to execute, calculated in spans of 100ms. The more time a function requires, the more expensive it becomes.

Uploading one file per email is, then, more expensive than uploading one file containing many emails, since the overhead of executing the function is incurred once per email instead of only once for the whole batch, and also because sending many emails together fills the 100ms spans more thoroughly.
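To make the cost difference concrete, here is a quick back-of-the-envelope calculation. The figures are purely hypothetical (actual durations depend on email size and SES latency); the point is how the 100ms billing granularity rewards batching:

```javascript
// Lambda bills execution time rounded up to 100ms spans.
function billedMs(executionMs) {
  return Math.ceil(executionMs / 100) * 100;
}

// Scenario A: 1 file (and hence 1 invocation) per email,
// assuming ~30ms of real work for each of 100 emails.
var perEmailBilled = 100 * billedMs(30); // 100 invocations, billed 100ms each

// Scenario B: 1 file containing all 100 emails,
// the same ~30ms per email within a single invocation.
var batchedBilled = billedMs(100 * 30);

console.log(perEmailBilled, batchedBilled); // 10000 vs 3000
```

Under these assumptions, batching cuts the billed time by more than two thirds, simply because the per-invocation rounding overhead is paid once instead of a hundred times.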

So we upload many emails per file. But how many? Lambda functions have a maximum execution time (3 seconds by default), and if the operation fails, Lambda retries it from the beginning, not from where it failed. So, if the file contains 100 emails, and Lambda manages to send 50 of them before the maximum time is up, the invocation fails and the operation is retried, sending the first 50 emails once again. To avoid this, we must choose a number of emails per file that we are confident can be processed before the maximum time is up. In our situation, we could choose to send 25 emails per file. The right number depends on the application (bigger emails take longer to send, and the time to send an email depends on the infrastructure), so we should do some testing to find it.
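That sizing rule can be sketched as a small helper. The numbers below are hypothetical placeholders; you would substitute your own measured per-email send time, and the safety factor is there so a retry never has to re-send emails that already went out:

```javascript
// Pick a chunk size that comfortably fits within the Lambda timeout,
// leaving a safety margin against slow sends.
function emailsPerFile(timeoutMs, perEmailMs, safetyFactor) {
  return Math.max(1, Math.floor((timeoutMs * safetyFactor) / perEmailMs));
}

// With the default 3000ms timeout, ~60ms per email, and a 50% safety margin:
console.log(emailsPerFile(3000, 60, 0.5)); // 25
```

This is how a figure like the 25 emails per file mentioned above could be derived; in practice you would still verify it by testing against your real infrastructure.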

The content of the file is simply a JSON object, containing the email meta under property “meta”, and the chunk of emails under property “emails”:

class AsyncEmails_AWS_S3 extends AWS_S3 {

    public function upload_emails_to_s3() {

        // Code continued from above...
        foreach ($emailqueue as $headers => $emails) {

            // Code continued from above...

            // Split the emails into chunks of no more than the value of constant EMAILS_PER_FILE:
            $chunks = array_chunk($emails, EMAILS_PER_FILE);
            $filename = time().rand();
            for ($chunk_count = 0; $chunk_count < count($chunks); $chunk_count++) {

                $body = array(
                    'meta' => $meta,
                    'emails' => $chunks[$chunk_count],
                );

                // Upload to S3
                $s3Client->putObject([
                    'ACL' => $this->get_acl(),
                    'Bucket' => $this->get_bucket(),
                    'Key' => $filename.$chunk_count.'.json',
                    'Body' => json_encode($body),
                ]);
            }
        }
    }
}

For simplicity, in the code above, I am not uploading the attachments to S3. If our emails need to include attachments, then we must use SES function SendRawEmail instead of SendEmail (which is used in the Lambda script below).
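To give an idea of what SendRawEmail would involve, here is a minimal, hypothetical sketch of building the raw MIME message it expects, assuming a single HTML body plus one attachment already read into a buffer. The function and parameter names are my own, not part of any API:

```javascript
// Build a multipart/mixed MIME message by hand (a library such as
// nodemailer's MailComposer would normally do this for you).
function buildRawEmail(from, to, subject, htmlBody, attachmentName, attachmentBuffer) {
  var boundary = "NextPart_" + Date.now();
  return [
    "From: " + from,
    "To: " + to.join(", "),
    "Subject: " + subject,
    "MIME-Version: 1.0",
    'Content-Type: multipart/mixed; boundary="' + boundary + '"',
    "",
    "--" + boundary,
    "Content-Type: text/html; charset=UTF-8",
    "",
    htmlBody,
    "--" + boundary,
    'Content-Type: application/octet-stream; name="' + attachmentName + '"',
    "Content-Transfer-Encoding: base64",
    'Content-Disposition: attachment; filename="' + attachmentName + '"',
    "",
    attachmentBuffer.toString("base64"),
    "--" + boundary + "--"
  ].join("\r\n");
}

// The result would then be passed to SES along the lines of:
// ses.sendRawEmail({ RawMessage: { Data: buildRawEmail(...) } }, callback);
```

The attachments themselves would also have to be uploaded to S3 alongside the email data, so the Lambda function can embed them into the raw message.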

Having added the logic to upload the files with emails to S3, we can move next to coding the Lambda function.

Coding The Lambda Script

Lambda functions are also called serverless functions, not because they do not run on a server, but because the developer does not need to worry about the server: the developer simply provides the script, and the cloud takes care of provisioning the server, deploying and running the script. Hence, as mentioned earlier, Lambda functions are charged based on function execution time.

The following Node.js script does the required job. Invoked by the S3 “Put” event, which indicates that a new object has been created on the bucket, the function:

  1. Obtains the new object’s path (under variable srcKey) and bucket (under variable srcBucket).
  2. Downloads the object, through s3.getObject.
  3. Parses the content of the object, through JSON.parse(response.Body.toString()), and extracts the emails and the email meta.
  4. Iterates through all the emails, and sends them through ses.sendEmail.
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3();

exports.handler = function(event, context, callback) {

    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

    // Download the file from S3, parse it, and send the emails
    async.waterfall([
        function download(next) {

            // Download the file from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function process(response, next) {

            var file = JSON.parse(response.Body.toString());
            var emails = file.emails;
            var emailsMeta = file.meta;

            // Check required parameters
            if (emails === null || emailsMeta === null) {
                callback('Bad Request: Missing required data: ' + response.Body.toString());
                return;
            }
            if (emails.length === 0) {
                callback('Bad Request: No emails provided: ' + response.Body.toString());
                return;
            }

            var totalEmails = emails.length;
            var sentEmails = 0;
            for (var i = 0; i < totalEmails; i++) {

                var email = emails[i];
                var params = {
                    Destination: {
                        ToAddresses: email.to
                    },
                    Message: {
                        Subject: {
                            Data: email.subject,
                            Charset: emailsMeta.charset
                        }
                    },
                    Source: emailsMeta.from
                };
                if (emailsMeta.contentType == 'text/html') {
                    params.Message.Body = {
                        Html: {
                            Data: email.message,
                            Charset: emailsMeta.charset
                        }
                    };
                } else {
                    params.Message.Body = {
                        Text: {
                            Data: email.message,
                            Charset: emailsMeta.charset
                        }
                    };
                }

                // Send the email
                var ses = new aws.SES({ "region": "us-east-1" });
                ses.sendEmail(params, function(err, data) {

                    if (err) {
                        console.error('Unable to send email due to an error: ' + err);
                        callback(err);
                    }
                    sentEmails++;
                    if (sentEmails == totalEmails) {
                        next();
                    }
                });
            }
        }
    ], function (err) {

        if (err) {
            console.error('Unable to send emails due to an error: ' + err);
            callback(err);
        }

        // Success
        callback(null);
    });
};

Next, we must upload and configure the Lambda function to AWS, which involves:

  1. Creating an execution role granting Lambda permissions to access S3.
  2. Creating a .zip package containing all the code, i.e. the Lambda function we are creating + all the required Node.js modules.
  3. Uploading this package to AWS using a CLI tool.

How to do these things is properly explained on the AWS site, on the Tutorial on Using AWS Lambda with Amazon S3.

Hooking Up S3 With The Lambda Function

Finally, having the bucket and the Lambda function created, we need to hook both of them together, so that whenever there is a new object created on the bucket, it will trigger an event to execute the Lambda function. To do this, we go to the S3 dashboard and click on the bucket row, which will show its properties:

Displaying bucket properties inside the S3 dashboard
Clicking on the bucket’s row displays the bucket’s properties. (Large preview)

Then clicking on Properties, we scroll down to the item “Events”, and there we click on Add a notification, and input the following fields:

  • Name: name of the notification, eg: “EmailSender”;
  • Events: “Put”, which is the event triggered when a new object is created on the bucket;
  • Send to: “Lambda Function”;
  • Lambda: name of our newly created Lambda, eg: “LambdaEmailSender”.
Setting up S3 with Lambda
Adding a notification in S3 to trigger an event for Lambda. (Large preview)
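The same hookup can also be scripted with the AWS SDK instead of clicking through the console. Below is a sketch of the parameters such a call would take; the bucket name, Lambda ARN, and account ID are placeholders for your own resources:

```javascript
// Notification configuration linking the bucket's "Put" event to the Lambda.
var params = {
  Bucket: "async-emails",
  NotificationConfiguration: {
    LambdaFunctionConfigurations: [{
      Id: "EmailSender",
      // s3:ObjectCreated:Put fires whenever a new object is uploaded via PUT
      Events: ["s3:ObjectCreated:Put"],
      LambdaFunctionArn: "arn:aws:lambda:us-east-1:123456789012:function:LambdaEmailSender"
    }]
  }
};

// With the AWS SDK for JavaScript (v2), this would be applied as:
// new (require('aws-sdk')).S3().putBucketNotificationConfiguration(params, callback);
```

Note that, when configured this way rather than through the console, the Lambda function must also be granted permission (via lambda add-permission) to be invoked by S3.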

Finally, we can also set the S3 bucket to automatically delete the files containing the email data after some time. For this, we go to the Management tab of the bucket, and create a new Lifecycle rule, defining after how many days the emails must expire:

Lifecycle rule
Setting up a Lifecycle rule to automatically delete files from the bucket. (Large preview)
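This cleanup step can likewise be expressed in code. The sketch below shows the shape of the parameters for a lifecycle rule expiring objects after one day; the bucket name, rule ID, and expiration period are illustrative choices, not requirements:

```javascript
// Lifecycle rule: delete every object in the bucket after 1 day.
var lifecycleParams = {
  Bucket: "async-emails",
  LifecycleConfiguration: {
    Rules: [{
      ID: "ExpireEmailFiles",
      Status: "Enabled",
      Filter: { Prefix: "" },   // empty prefix: apply to all objects
      Expiration: { Days: 1 }   // emails are sent within seconds, so 1 day is ample
    }]
  }
};

// With the AWS SDK for JavaScript (v2), this would be applied as:
// new (require('aws-sdk')).S3().putBucketLifecycleConfiguration(lifecycleParams, callback);
```

Since the Lambda function processes each file within seconds of its creation, even a one-day expiration leaves a generous window for retries before the data is removed.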

That’s it. From this moment on, whenever a new object with the content and meta for the emails is added to the S3 bucket, it will trigger the Lambda function, which will read the file and connect to SES to send the emails.

I implemented this solution on my site, and it became fast once again: by offloading the sending of emails to an external process, it makes no difference whether the application sends 20 or 5,000 emails; the response to the user who triggered the action will be immediate.


In this article we have analyzed why sending many transactional emails in a single request may become a bottleneck in the application, and created a solution to deal with the issue: instead of connecting to the SMTP server from within the application (synchronously), we can send the emails from an external function, asynchronously, based on a stack of AWS S3 + Lambda + SES.

By sending emails asynchronously, the application can manage to send thousands of emails, yet the response to the user who triggered the action will not be affected. However, to ensure that the user is not waiting for the email to arrive in the inbox, we also decided to split emails into two groups, priority and non-priority, and send only the non-priority emails asynchronously. We provided an implementation for WordPress, which is rather hacky due to the limitations of function wp_mail for sending emails.

A lesson from this article is that serverless functionalities on a server-based application work pretty well: sites running on a CMS like WordPress can improve their performance by implementing only specific features on the cloud, and avoid a great deal of complexity that comes from migrating highly dynamic sites to a fully serverless architecture.

Smashing Editorial (rb, ra, yk, il)

Original Link

ETI Taps Synacor’s Cloud ID as Reference Platform for Telecom, Cable Ops

ETI Software customers include Frontier, Windstream and Cincinnati Bell. Original Link

New Technology Traps and Identifies Organic and Non-Organic Molecules at Ultra-Low Concentrations

A new technology for trapping and chemical analysis of non-organic and organic molecules at trace concentrations has been developed by an international group of physicists from Far Eastern Federal… Original Link

TIM Hits Out at Vivendi

Telecom Italia (TIM) issues ‘clarifications’ on statements made by its largest shareholder, Vivendi, and related press reports. Original Link

New FLIR InSite Mobile Application Simplifies Inspection Management

WILSONVILLE, Ore. – FLIR Systems, Inc. (NASDAQ: FLIR) announced today the launch of FLIR InSite™, a new mobile application and web portal for organizing client information and thermal inspection data in one location that is easy to access, manage, and share. Ideal for electricians, electrical contractors, and thermography service professionals, the FLIR InSite workflow management tool reduces inspection preparation time, increases efficiency, and helps deliver results quickly. With FLIR InSite, inspection professionals deliver a better customer experience and can visually demonstrate the value of their services.

FLIR InSite application helps users effectively plan and prepare for their work before beginning the day’s inspection. Working seamlessly with FLIR thermal imaging cameras and tools, the app collects all the images and data needed for an inspection report, while also reducing administrative workload. For reporting, the application provides real-time updates and delivers images, inspection data, and reports through a secure and private client portal.

The FLIR InSite app is available as a free download through the Apple Store and on the FLIR website. For more information, visit (United States) or (Global).


About FLIR Systems, Inc.

Founded in 1978 and headquartered in Wilsonville, Oregon, FLIR Systems is a world-leading maker of sensor systems that enhance perception and heighten awareness, helping to save lives, improve productivity, and protect the environment. Through its nearly 3,500 employees, FLIR’s vision is to be “The World’s Sixth Sense” by leveraging thermal imaging and adjacent technologies to provide innovative, intelligent solutions for security and surveillance, environmental and condition monitoring, outdoor recreation, machine vision, navigation, and advanced threat detection. For more information, please visit and follow @flir.


Forward-Looking Statements

The statements in this release by Jim Cannon and the other statements in this release regarding the contract, including contract amount and anticipated delivery dates, are forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Such statements are based on current expectations and are not guarantees of future performance and involve risks and uncertainties that are difficult to predict. Therefore, actual outcomes and results may differ materially from what is expressed or forecasted in such forward-looking statements due to numerous factors, including the following: the ability to manufacture and deliver the systems referenced in this release, continuing demand for the product referenced in the release, constraints on supplies of critical components, excess or shortage of production capacity, the ability of FLIR to manufacture and ship products in a timely manner, FLIR’s continuing compliance with U.S. export control laws and regulations and ability to sell to the U.S. government, and other risks discussed from time to time in FLIR’s Securities and Exchange Commission filings and reports. In addition, such statements could be affected by general industry and market conditions and growth rates, and general domestic and international economic conditions. Such forward-looking statements speak only as of the date on which they are made and FLIR does not undertake any obligation to update any forward-looking statement to reflect events or circumstances after the date of this release, or for changes made to this document by wire services or Internet service providers.

Original Link

Mathematicians zero in on the birth of Hypatia

Probability theory deployed to recalculate the birth date of antiquity’s most famous female scholar. Gabriella Bernardi reports. Original Link

Primates of the Caribbean: dead monkeys do tell tales

DNA analysis sheds light on ancient primate island-hopping. Dyani Lewis reports. Original Link

Mystery surrounding extinct ‘opposite birds’ deepens

A 75-million-year old fossil shows a group of ancient birds could fly as well as their more successful peers, so why did they die out? Samantha Page reports. Original Link

Software Development as a Design Process

The book Agile IT Organization Design – for Digital Transformation and Continuous Delivery by Sriram Narayan explains software development as a design process. The following statement from the book means a lot.

"Software development is a design process, and the production environment is the factory where product takes place. IT Ops work to keep the factory running."

It questions the way we traditionally think about operations support, maintenance, production bugs, rework, their costs and budgets, and where the focus lies in the software development lifecycle.

Original Link

Heatwaves damage insect sperm, threatening biodiversity

More hot spells could lead to catastrophic collapse of beetle species, researchers find. Stephen Fleischfresser reports. Original Link