The public response to a scandal around “AI-powered translation” reveals how much AI is taken for granted. Original Link
Reports that 2018’s blockbuster video game saw people working 100-hour weeks are troubling, given that tech firms could make workers’ lives easier, says Michael Cook Original Link
MOUNTAIN VIEW, California — SES is working with the Massachusetts Institute of Technology to explore ways to use artificial intelligence and machine learning to simplify operation of its communications satellite fleet.
“We have a very large fleet and tens of thousands of telemetry signals on each of our satellites,” Valvanera Moreno, SES system architecture and innovation manager, said Oct. 10 at Satellite Innovation 2018 here. “The next satellites will have even more data to process. That’s why we think this area has a lot of value.”
Like SES, government agencies and space companies are looking for ways to apply artificial intelligence to the various problems they face.
Orbital Insight, a geospatial analytics company, relies on artificial intelligence to help answer questions its customers ask.
“Artificial intelligence enables human analysts to extract maximum value from imagery,” said Devin Brande, Orbital Insight advanced programs director. “We are on the cusp of combining modern remote sensing with other sources of intelligence to create a rich picture.”
Raytheon’s Intelligence Surveillance and Reconnaissance business established a capability center to focus its artificial intelligence and machine learning expertise. “As we grow that into a fundamental capability of our business, the goal is to dissolve the capability center and have it become part of the DNA of our business,” said Gabriel Comi, Raytheon Intelligence, Information and Services’ Artificial Intelligence and Autonomy Capability Center chief architect.
CosmiQ Works is one of four laboratories established by In-Q-Tel to explore how the U.S. government can apply new and emerging commercial space capabilities to national security problems. With its partners Radiant Solutions, DigitalGlobe and Amazon Web Services, it holds competitions, called SpaceNet, that offer cash prizes to competitors who develop automated methods to detect road networks and other landmarks in high-resolution satellite imagery.
CosmiQ Works makes the winning algorithms open source. “Hopefully, that helps our government partners and the commercial sector,” said Adam Van Etten, CosmiQ Works technical director. “Sometimes these algorithms that get a lot of press don’t translate to our domain.”
NATIONAL HARBOR, Md. — The Air Force is turning to the private sector for fresh sources of intelligence about orbital activities. Space operators also are looking at technologies like artificial intelligence to analyze data so they can anticipate potential hazards and predict space weather and satellite anomalies.
In both commercial and military space operations, everyone wants “predictive” intelligence to be able to make timely decisions to prevent collisions or respond to threatening behavior, said Melanie Stricklan, chief technology officer and co-founder of Slingshot Aerospace, in Manhattan Beach, California.
Slingshot is one of many companies that see a growing business in the burgeoning field of “space battle management.”
The Air Force is trying to pivot from the traditional “space situational awareness,” or SSA, that focuses on tracking and identifying objects to “intelligence-driven” space operations, Stricklan told SpaceNews.
Air Force civilian and military officials at the Air Force Association’s annual symposium this week are meeting with space executives, trying to identify “disruptive” technologies and figuring out how to “leverage commercial capabilities,” said Bill Beyer, a defense industry consultant at Deloitte. “Those conversations are going on everywhere.”
One of the segments of the market that the Air Force is trying to better understand is space situational awareness, an area where commercial activity is picking up. Air Force Space Command has issued an open call for industry pitches on “non-governmental SSA.”
Stricklan said the holy grail is “tactical SSA” that draws from the “unimaginable amount of data from sources such as the Air Force’s Space Fence and newer commercial sensor networks.”
Traditional tracking of space objects is not enough to combat increasingly complex threats in space, she said. The Air Force is trying to move beyond catalog maintenance and is searching for new tools to probe what is happening in outer space.
The National Space Defense Center at Schriever Air Force Base, Colorado, and the National Reconnaissance Office in Northern Virginia are where much of the emerging technology will be tested. In a briefing to contractors last month, NSDC Director Col. Todd Brost said the goal is to “augment the government’s ability to detect and characterize space threats.”
The NSDC is in the market for “SSA data for all altitudes, all longitudes, all latitudes 24/7,” according to Brost’s slide presentation.
The Air Force has not yet issued an official solicitation for industry bids and has not specified an “acquisition strategy” for how commercial services would be procured. Brost’s briefing said a request for proposals could come in early 2019.
“We see a big need for SSA solutions, not just in the military but also in academia and among satellite owners and operators,” said Stricklan. Slingshot this month launched its first artificial intelligence-enabled SSA product, called “orbital atlas,” targeted at commercial satellite operators and academia. A more advanced version is being developed for military space battle management.
Space debris, traffic control
Military efforts to improve the quality of space intelligence are taking place as the Trump administration moves to reassign the government’s SSA responsibilities. The job of alerting commercial operators and foreign countries of potential collisions or hostile activities will shift from the Air Force Space Command to the Commerce Department.
The changeover has drawn praise as long overdue. The military wants to turn more attention to space war fighting and not have to spend resources supporting civilian space traffic management. Meanwhile, commercial satellite operators have grown frustrated by the military’s reluctance to share space data and would like to see a civilian agency step up support for private sector space activity.
The military maintains a catalog of about 20,000 space objects larger than 10 cm in Earth orbit and makes that data publicly available. Although that is a valuable service, the military’s “operational procedures do not always prioritize commercial satellite operations, and the services provided are limited in their transparency, timeliness, and machine-to-machine interactivity” with operators’ data about the location of space objects, noted Adam Archuleta, a satellite navigation systems engineer at Maxar Technologies’ DigitalGlobe.
By recent estimates, he noted, there are 29,000 objects in space that are larger than 10 cm, 750,000 from 1 to 10 cm, and more than 166 million from 1 mm to 1 cm.
DigitalGlobe, like other satellite operators, is adding commercial data sources to its internal collision avoidance system to supplement military data. Archuleta said the company is now working with LeoLabs, whose cloud-based software turns radar data into information about where debris is located on orbit.
The commercial sector’s SSA problems are somewhat similar to the military’s challenges in space battle management.
Companies need better data to safely manage their fleets and cope with space congestion, said Walter Scott, executive vice president and chief technology officer of DigitalGlobe.
Technologies like artificial intelligence and analytics are going to be useful, but there is also a growing appetite for policies and agreements on rules and norms of behavior in space, Scott told SpaceNews. “We’re just beginning to see the emergence of AI in dealing with large amounts of data in space,” he said. “As the number of objects tracked increases you begin to see AI doing a better job at prediction.”
Having commercial data sources like LeoLabs makes it easier to anticipate where other satellites or objects might be, Scott said. Along with the access to more data should be the development of a set of rules. This is an issue that affects everyone, said Scott, including the commercial industry, governments and militaries around the world.
Scott said the proliferation of space junk also demands that the industry come up with technologies to remove it. He mentioned Maxar’s satellite servicing robot that is being developed with the Defense Advanced Research Projects Agency. “It’s a space robot that can move around, grab a failed satellite and relocate it, potentially placing it in an orbit where it won’t create a problem.”
The conclusion is drawn from a survey of over 3,000 participants in 126 countries and 300 executives from China. Original Link
This week, I spent a lot of time with companies talking about data and using data for a variety of purposes, ranging from improved decision making to machine learning and deep learning systems. All the companies I talk to have tons of data in their archives and often generate a lot of data in real time or through batched uploads.
However, although all companies claim to be data-driven in their decision processes and ways of working, practice often shows a different reality. When reflecting on my experiences with a variety of companies, I realized that there are at least five reasons why companies are not as data-driven as they think.
Thanks to TIBCO for inviting me to TIBCO NOW 2018, where they made several announcements. In talking with Bob Eve, Senior Director of Analytics at TIBCO; Keith Woodie, System Engineer Consultant; and Dan Hudson, V.P. Senior Manager Systems Integration at First Citizens Bank, I learned they believe the DZone community will be most interested in the introduction of TIBCO Spotfire X.
Spotfire X is an AI-driven analytics experience that integrates agile data exploration with natural language processing (NLP), machine learning recommendations, and model-based authoring, and adds native support for real-time streaming data. This will allow users to humanize their data and information, helping them make better decisions, faster.
With the aid of AI, doctors only need video clips of the patient, which can be shot even through average smartphone cameras. Original Link
WASHINGTON — The Army offered a $100,000 prize for a solution to an increasingly tough problem for commanders in the field: In a battlefield dense with electromagnetic signals, is there a better way to distinguish friendly transmissions from hostile attacks?
There is, according to a team of eight engineers from Aerospace Corporation, based in El Segundo, Calif. They won the prize by correctly detecting and classifying the greatest number of radio frequency signals using a combination of signal processing and artificial intelligence algorithms.
The competition, known as the “Blind Signal Classification Challenge,” was sponsored by the Army’s Rapid Capabilities Office, a small organization that looks for ways to apply commercial technology to solve military problems.
When the challenge kicked off in April, the Army gave all 49 competitors a large set of recordings of various types of radio signals to use as “training data” so they could develop their algorithms. In early June, the Army put out a new data set that had no labels, so contestants had to blindly analyze and identify the signals. The Aerospace team learned on Aug. 27 that it had won the challenge.
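That workflow — fit a classifier on labeled recordings, then classify an unlabeled set — can be sketched in a few lines. The nearest-centroid classifier, the two signal features and the class names below are all illustrative inventions, not the Aerospace team's actual method:

```python
import random
import math

random.seed(0)

# Hypothetical setup: each signal is reduced to two features (say, estimated
# bandwidth and symbol rate). Two made-up classes stand in for real modulations.
CENTERS = {"FM": (2.0, 1.0), "QPSK": (5.0, 4.0)}

def synth(label, n):
    """Generate n noisy labeled feature pairs around a class center."""
    cx, cy = CENTERS[label]
    return [((cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)), label)
            for _ in range(n)]

labeled = synth("FM", 50) + synth("QPSK", 50)   # the labeled "training data"

# Fit: per-class mean of each feature (a nearest-centroid classifier).
centroids = {}
for lab in CENTERS:
    pts = [f for f, l in labeled if l == lab]
    centroids[lab] = tuple(sum(v) / len(pts) for v in zip(*pts))

def classify(features):
    # Predict the class whose centroid is nearest, as on the unlabeled set.
    return min(centroids, key=lambda lab: math.dist(features, centroids[lab]))

print(classify((2.1, 0.9)))   # lands near the FM centroid
print(classify((4.8, 4.2)))   # lands near the QPSK centroid
```

The "blind" phase of the challenge corresponds to calling `classify` on feature vectors whose true labels the contestants never saw.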
Bradley Hirasuna, who oversees technology programs at Aerospace, said the application of AI in electronic warfare could help the U.S. military thwart attempts by enemies to interfere with military GPS or communications satellite signals. Identifying friendly and hostile signals is a constant challenge, he said. “Because these signals are becoming more complex, electronic warfare officers are becoming overwhelmed, not able to keep up with the threat environment.”
Russian forces reportedly have deployed jammers to disrupt GPS-guided unmanned air vehicles in combat zones like Syria and eastern Ukraine. The Pentagon also worries about electronic attacks against satellite-based communications systems.
Traditionally the Army’s electronic warfare units deploy large armored vehicles bristling with antennas to scan the area and search for radio-frequency signals. With AI-based tools, the commander can pull up a picture of the RF spectrum on a laptop screen and see what is going on. “If they need to suppress a signal, they can identify the signal to suppress. If they need to make sure friendly forces’ communications can happen, they know not to suppress those signals,” Hirasuna said. “This is the Army’s way of aiding the electronic warfare officer as the RF spectrum becomes more congested and more complex.”
Hirasuna said the company became interested in applying AI technology to signals intelligence long before the challenge was announced. “We have seen the advent of software defined radios,” he said. “Machine learning and AI are now used to exploit the power of software defined radios.”
Andres Vila Casado, one of the engineers on the team, said Aerospace benefited from its decades of experience as a contractor to the U.S. Air Force in areas like satellite communications and information technology. “The military sees what Google and Facebook have achieved in AI and says, ‘Let’s use some of that to improve our electronic warfare tools.’”
The Army informed the team it plans to continue to explore ways to turn the “blind signal” challenge into an operational system, Vila Casado said. “We want to develop our solution into real products they could use in the field.”
From 2013 to the first quarter of this year, 60 percent of funds raised globally for AI projects went to projects in China. Original Link
Artificial intelligence is helping to find the thousands of lead pipes responsible for the water crisis in Flint, Michigan Original Link
AI has seen increased usage in automating these processes Original Link
Local tech giants like Xiaomi, Alibaba and Huawei are flooding into the virtual assistant market. Original Link
Image analysis is defined in Wikipedia as "…the extraction of meaningful information from images; mainly from digital images by means of digital image processing techniques."
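As a toy illustration of that definition, the sketch below extracts two pieces of "meaningful information" — overall brightness and a mask of bright pixels — from a hand-made digital image; the pixel values are invented:

```python
# A toy 2D grayscale "image" as nested lists (0 = black, 255 = white).
image = [
    [10, 12, 200, 210],
    [11, 13, 205, 208],
    [ 9, 14, 202, 207],
]

# Extract simple meaningful information: mean brightness, and a binary mask
# separating "bright" pixels (e.g. a lit structure) from the dark background.
pixels = [p for row in image for p in row]
mean_brightness = sum(pixels) / len(pixels)

mask = [[1 if p > mean_brightness else 0 for p in row] for row in image]

print(round(mean_brightness, 1))   # 108.4
print(mask[0])                     # [0, 0, 1, 1]: bright region on the right
```

Real image-analysis pipelines replace the threshold with learned models, but the shape of the task — pixels in, structured information out — is the same.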
From Battlestar Galactica to The Terminator, on-screen robots have never been above a little rule-breaking. Could our new laws of robotics keep them in line? Original Link
The Air Force Office of Scientific Research is studying the use of super-fast computers that promise improved security for data storage and transmission on Air Force systems. Credit: Air Force
Michael Hayduk, chief of the computing and communications division at the Air Force Research Laboratory says quantum technology will be “disruptive” in areas like data security and GPS-denied navigation.
WASHINGTON — Top Pentagon official Michael Griffin sat down a few weeks ago with Air Force scientists at Wright Patterson Air Force Base in Ohio to discuss the future of quantum computing in the U.S. military. Griffin, the undersecretary of defense for research and engineering, has listed quantum computers and related applications among the Pentagon’s must-do R&D investments.
Quantum computing is one area where the Pentagon worries that it is playing catchup while China continues to leap ahead. The technology is being developed for many civilian applications and the military sees it as potentially game-changing for information and space warfare.
The U.S. Air Force is particularly focused on what is known as quantum information science.
“We see this as a very disruptive technology,” said Michael Hayduk, chief of the computing and communications division at the Air Force Research Laboratory.
Artificial intelligence algorithms, highly secure encryption for communications satellites and accurate navigation that does not require GPS signals are some of the most coveted capabilities that would be aided by quantum computing.
Hayduk spoke last week during a meeting of the Defense Innovation Board, a panel of tech executives and scientists who advise the secretary of defense. The DIB met at the Pentagon’s Silicon Valley location, the Defense Innovation Unit Experimental.
Quantum computers are the newest generation of supercomputers — powerful machines with a new approach to processing information. Quantum information science is the application of the laws of quantum mechanics to information science, Hayduk explained. Unlike traditional computers, which are built from bits that are either zero or one, quantum computers use bits that can hold both values simultaneously, giving them unprecedented processing power.
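The bit-versus-qubit distinction can be made concrete with a few lines. This is an informal sketch of the standard amplitude picture of a single qubit, not a simulation of real quantum hardware:

```python
import math

# A classical bit is 0 or 1. A qubit's state is a pair of amplitudes (a, b)
# with |a|^2 + |b|^2 = 1; measurement yields 0 with probability |a|^2 and 1
# with probability |b|^2, so the state genuinely carries both values at once.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # an equal superposition

p0, p1 = abs(a) ** 2, abs(b) ** 2
print(round(p0, 2), round(p1, 2))   # 0.5 0.5

# n qubits hold 2**n amplitudes simultaneously, which is the source of the
# processing power described above: 50 qubits already mean 2**50 amplitudes.
print(2 ** 50)
```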
“The Air Force is taking this very seriously, and we’ve invested for quite a while,” Hayduk said.
The Pentagon is especially intrigued by the potential of quantum computing to develop secure communications and inertial navigation in GPS denied and contested environments. “It’s a key area we’re very much interested in,” said Hayduk.
Some of these technologies will take years to materialize, he said. “In timing and sensing, we see prototype capabilities in a five-year timeframe.” Communications systems and networks will take even longer.
Quantum clocks are viewed as a viable alternative to GPS in scenarios that require perfect synchronization across multiple weapons systems and aircraft, for example, said Hayduk. “We’re looking at GPS-like precision in denied environments,” he said. “It often takes several updates to GPS throughout the day to synchronize platforms. We want to be able to move past that so if we are in a denied environment we can still stay synchronized.”
Global race underway
Meanwhile, the Pentagon continues to watch what other nations are doing. China is “very serious” about this, he said. It is projected to invest $10 billion to $15 billion over the next five years on quantum computing. China already has demonstrated quantum communications satellites designed to resist hacking.
“They have demonstrated great technology,” said Hayduk. In the U.S., “we have key pieces in place. But we’re looking at more than imitating what China is doing in ground satellite communications. We’re looking at the whole ecosystem: ground, air, space, and form a true network around that.”
Other countries have been getting in the game too. The United Kingdom is planning a $400 million program for quantum-based sensing and timing. A similar project by the European Union is projected to be worth $1 billion over 10 years. Canada, Australia and Israel also have sizable programs.
What these countries’ quantum computing efforts have in common is that they are “whole of government” national programs, said Hayduk, “which is very different than what the U.S. has now.”
The Air Force Research Laboratory expects to play a “key role in developing software and algorithms to drive applications,” he added.
Congress has proposed an $800 million funding line in the Pentagon’s budget over the next five years for quantum projects. Hayduk said money is important but DoD also needs human capital. “Quantum physicists are in high demand. We need to develop quantum engineers, people who can apply it.” Another concern is the lack of a domestic supply chain (most suppliers today are outside the U.S.) and testing labs focused on quantum science.
How quantum technology could be applied to artificial intelligence is part of a broader debate on the military’s use of AI.
Defense Innovation Board Chairman Eric Schmidt, the former executive chairman of Google’s parent company Alphabet, has been pushing the Pentagon to embrace the technology. This is despite mistrust in the tech industry about the military’s intentions for using AI, which prompted Google to end a partnership with the Air Force to develop machine learning algorithms.
The Pentagon this month announced it will be setting up a Joint Artificial Intelligence Center led by DoD Chief Information Officer Dana Deasy.
At the DIB meeting, Schmidt said the new AI center is the “beginning of a very, very large program that will affect everyone in a good way.” In light of the recent controversy about the ethics of using AI in military operations, the Pentagon has asked the board to help develop “AI principles” for defense.
The technology is regarded as essential to help analyze data and provide leaders with accurate information in real time. Defense and intelligence officials for years have complained that commanders in the field are handicapped by a lack of timely data and reliable communications systems.
You’re reading the SN Military.Space newsletter we publish Tuesdays. If you would like to get our news and insights for military space professionals before everyone else, sign up here for your free subscription.
Using free data from the European Space Agency, a startup in Finland created a geospatial information service that is entirely enabled by artificial intelligence. AI algorithms are used to remove clouds and track changes in structures on the ground. The service, targeted at government agencies and industries like agriculture and infrastructure, costs about $4,000 a year. It is free to researchers studying the impact of natural disasters.
Joni Norppa, CEO and co-founder of the startup, named Terramonitor, says the current AI boom is only the tip of the iceberg. “We’re just beginning our mission to democratize space data,” he tells SpaceNews.
“There is a lot of interest in remote-sensing data for operational use,” Norppa said. “With machine learning and AI, you don’t need any human labor to do that.”
Today only a fraction of the available space data is operationalized. One of the obstacles is that almost every image is covered by clouds. “We made an algorithm that detects clouds and makes cloudless mosaics autonomously.”
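Terramonitor's actual algorithm is not public. The sketch below shows the general cloudless-mosaic idea under stated assumptions: clouds read as bright pixels, so for each pixel we keep a cloud-free observation from any of several passes over the same scene. The pixel values and the threshold are made up:

```python
# Three co-registered acquisitions of the same 2x3 scene; values >= 200
# stand in for bright cloud pixels. All numbers here are illustrative.
passes = [
    [[ 50, 210,  60], [ 70, 220,  80]],
    [[220,  55,  65], [ 75,  85, 230]],
    [[ 52, 215, 205], [225,  90,  82]],
]
CLOUD = 200   # hypothetical brightness threshold for "cloud"

rows, cols = 2, 3
mosaic = []
for r in range(rows):
    row = []
    for c in range(cols):
        samples = [p[r][c] for p in passes]
        clear = [v for v in samples if v < CLOUD]
        # Prefer any cloud-free observation; fall back to the darkest pixel.
        row.append(min(clear) if clear else min(samples))
    mosaic.append(row)

print(mosaic)   # every pixel is now below the cloud threshold
```

Production systems use per-band statistics and learned cloud masks rather than a single threshold, but the per-pixel compositing step has this shape.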
Terramonitor’s AI-based mapping was developed with 10-meter resolution imagery and radar data from ESA’s Sentinel 2 and Sentinel 1 satellites. Norppa: “Companies see the value of satellite intelligence but they don’t know how to get it, and don’t think they can afford it.” Maybe now they can.
PENTAGON SHARPENS AI FOCUS As artificial intelligence reaches fever pitch in the private sector, the U.S. government is working to capitalize on the technology. The military suffered a setback when Google decided to pull out of Project Maven, a program that uses AI algorithms to identify targets from drone video streams.
Former Deputy Defense Secretary Bob Work, an early champion of AI for military application, said Google’s actions — in response to employee blowback that they did not want AI used to kill people — were “troubling” because they would deter other tech firms from working with DoD. He warned that a rift between the tech industry and DoD is harmful to national security as other countries move to exploit AI and other ground-breaking technologies.
“Not being able to tap into the immense talent at Google to help DoD employ AI in ethical and moral ways is very sad for our society.”
Former Deputy Defense Secretary Bob Work,
speaking at the Center for a New American Security.
JOINT ARTIFICIAL INTELLIGENCE CENTER The JAIC will be the Pentagon’s nerve center for all AI activities and will set the agenda for the military services as they plan their R&D budgets. Newly hired DoD Chief Information Officer Dana Deasy will oversee the center.
Analyst Stephanie Meloni, of the immixGroup, said it makes perfect sense for the CIO to guide AI efforts. To make AI capabilities a reality, “there will be a growing need to implement cloud, infrastructure, cybersecurity and Internet of Things technologies,” she wrote in Washington Technology. “Clean, authoritative and trustworthy data is the foundation of all analytics, and AI is no exception. … AI and machine learning is a topic that touches all technology categories.”
MILITARY BADLY NEEDS AI Air Combat Command chief Gen. Mike Holmes said the Project Maven controversy has distracted from the central reason why the military must have the technology, which is to analyze the overwhelming amount of data that is collected by drones, satellites and other sensors. Maven was “one of the first steps” toward greater use of learning machines and algorithms “to be able to allow people to focus on things that people do best and let the machine do that repetitive task,” Holmes told reporters. “That’s a big part of our future and you’ll continue to see that.”
GO WHERE THE MONEY IS Holmes continued: “For the military to be able to move forward into the future, we need to take advantage of where that R&D money is being spent and where the advances are in technology. If you look at where the money is, there’s a lot more money in the tech side than there is in the classic defense industrial complex. … If we’re going to be effective in this battle of technology with adversaries, we hope to be able to take advantage of that.”
As Andrew Jones reported in SpaceNews last week, the Chinese envision Long March 9 to be a Saturn 5-class super-heavy-lift rocket comparable in capacity to the Space Launch System currently in development by NASA.
Long March 5, China’s largest rocket so far, debuted in 2016 and last July suffered a first stage engine issue that prompted a redesign. A third flight of the rocket is expected in November. Long March 9 would be ready for its first test flight around 2030.
These revelations should be of great concern to the United States, warns industry consultant Loren Thompson, of the Lexington Institute, a think tank funded by defense contractors. “China’s military probably has plans for Long March 9. Plans that don’t include sightseeing on Mars,” Thompson tells SpaceNews.
It is a clear illustration of how China’s military benefits from controlling the nation’s space program, Thompson argues in a new Forbes article. Beijing intends to catch up with and surpass NASA’s SLS, the most ambitious rocket program ever attempted by the agency. While SLS is being designed for deep space exploration, China has announced no plans for going to Mars. “So why does it want a rocket that can lift even more than SLS?”
LAUNCH A STRATEGIC ASSET Thompson echoes similar concerns expressed by U.S. officials like Undersecretary of Defense for Research and Engineering Mike Griffin, who has argued that heavy lift is a strategic capability with enormous national security implications. Rocket technology can be applied to ballistic missiles and satellites.
China’s Long March family of launch vehicles can already accomplish most of what Beijing’s leaders want to do in space, Thompson noted. “So there must be some other explanation for pouring billions of yuan into developing a super-heavy rocket.”
CHINA SEEKS PARITY WITH U.S. While the official story is that China too wants to pursue deep space exploration, the military there is most likely eyeing new capabilities such as the ability to deploy powerful surveillance satellites on par with U.S. technology. Thompson: “Because China trails the U.S. in satellite technology, a spacecraft matching the functionality of Lockheed Martin’s Space Based Infrared System would probably be bigger and heavier in the Chinese configuration. With a super-heavy rocket, though, deploying such a constellation in geosynchronous orbit would presumably be much easier.”
Airbus hopes the investment will position the company to win military contracts. The big selling point will be the speed of the assembly line.
The military is becoming more interested in LEO constellations of small satellites as a more resilient alternative to large, monolithic platforms in higher orbits. In a conflict, if U.S. satellites were targeted by lasers or electronic jammers, new ones could be quickly produced and launched.
A key opportunity is the Defense Advanced Research Projects Agency’s Blackjack program, described as an “architecture demonstration intending to show the high military utility of global LEO constellations and mesh networks of lower size, weight and cost.” DARPA wants to buy commercial satellite buses and marry them with military sensors and payloads. This makes Blackjack an attractive project for mass manufacturers like Airbus and SpaceX that can compete on price and response time. READ MORE HERE
A selected group of soldiers is learning not just to operate the satellite but also to support imagery requests from the battlefield.
“We are training the soldiers from the start,” Army Space and Missile Defense Command engineer Matthew Hitt said in an Army news release.
The training program also includes the “warfighter assisting low Earth orbit tracker” antenna and the Kestrel Eye ground station, so soldiers can support overhead satellite passes. “We want to get them to the point of being able to operate an overhead pass independently,” Hitt said. “We want them to be able to collect all the data, archive it and get it out to who needs it.”
Soldiers will have a “fundamental understanding of how Kestrel Eye works, not just how to request something from the satellite. And they will be able to convey that information when they go back to their unit.” The Army estimated that using a microsatellite requires a smaller logistical footprint in the field compared with an unmanned aerial system.
As I write, the 2018 FIFA World Cup is underway in Russia. Everyone has their favorite team to win, maybe because they’re a fervent supporter of their national side or have entered a sweepstake and been allocated a country. Others take the word of animal prognosticators, such as Paul the Octopus, before having a flutter on the result.
However, here at DZone, we like to keep things techy, so what does data science say about the probable outcome of the competition?
Well, researchers at the Technical University of Dortmund in Germany have repeatedly run a simulation of the tournament (100,000 times). The team combined machine learning and conventional statistics, using a random-forest approach, to identify the most likely winner.
The random-forest technique is commonly used for large datasets. It predicts outcomes using decision trees, calculating the result for each branch from a set of training data. Individual decision trees typically suffer from overfitting (fitting the noise in the training data rather than the underlying pattern), but the random-forest approach counters this by fitting many trees to randomly selected subsets of the data and averaging their predictions.
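A compact sketch of that idea — many simple trees (here, one-split "stumps") fit to random bootstrap resamples, with their predictions averaged — on synthetic noisy data. This illustrates the mechanism only; it is not the Dortmund model:

```python
import random

random.seed(1)

def noisy_label(x):
    label = 1 if x > 0.5 else 0                            # the true rule
    return 1 - label if random.random() < 0.2 else label   # 20% label noise

train = [(x, noisy_label(x)) for x in (random.random() for _ in range(200))]

def fit_stump(sample):
    # A one-split decision tree: pick the threshold with the fewest errors
    # on this sample. A single stump can latch onto the label noise.
    best_t, best_err = 0.5, len(sample) + 1
    for t, _ in sample:
        err = sum((x > t) != (y == 1) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def forest(n_trees=25):
    # Bagging: each stump sees a random bootstrap resample of the data.
    return [fit_stump(random.choices(train, k=len(train))) for _ in range(n_trees)]

thresholds = forest()

def predict(x):
    # Average the stumps' votes instead of trusting any single split.
    return round(sum(x > t for t in thresholds) / len(thresholds))

print(predict(0.9), predict(0.1))   # clearly above / below the true split
```

Averaging washes out the individual stumps' noise-driven splits, which is exactly the overfitting defense described above.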
The researchers modeled the outcome of each game to determine the potential outcome of the tournament. The factors they considered included each country’s GDP and population, FIFA’s ranking, number of Champions League players, average age, and more.
So, who is the predicted winner? To save you reading the whole paper, I’ll share the result…Spain…with a probability of 17.8 percent.
But, the way the tournament is organized into groups and games is a factor in the result. Assuming Germany gets through the initial group stages, it will encounter a strong opponent (Brazil, Switzerland, Serbia or Costa Rica) in the next stage (which could knock it out of the tournament). Spain has an easier match if it reaches that stage (against Uruguay, Russia, Saudi Arabia, or Egypt). However, if Germany gets through that stage and into the quarterfinals, it catches Spain and becomes the favored team.
So it’s either Spain or Germany, right?
Well, others have made different predictions using a statistical approach that has been successful in previous championships. The model takes bookmakers’ odds, converts them into winning probabilities, and simulates the tournament by repeatedly playing through every conceivable match pairing. According to academics at the University of Innsbruck who used this approach, Brazil is predicted to win with a probability of 16.6%, followed by Germany (15.8%), and Spain (12.5%).
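The odds-to-probability step can be sketched as follows. The decimal odds here are invented, and the normalization removes the bookmaker's margin (the "overround") so the implied probabilities sum to one:

```python
# Hypothetical decimal odds on three teams winning the tournament.
odds = {"Brazil": 5.0, "Germany": 5.5, "Spain": 7.0}

implied = {team: 1 / o for team, o in odds.items()}
total = sum(implied.values())   # exceeds 1 because of the bookmaker's margin

probs = {team: p / total for team, p in implied.items()}

for team, p in probs.items():
    print(team, round(p, 3))    # Brazil 0.381, Germany 0.347, Spain 0.272
```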
An Australian academic also predicts Brazil to be the winning team, with a probability of 15.4 percent, using a Monte Carlo simulation of the possible outcomes of the tournament's 63 matches to assess how far each team is likely to progress.
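A Monte Carlo tournament simulation of this kind fits in a few lines. The eight-team bracket and team strengths below are made-up stand-ins; the point is the mechanic of replaying the knockout tree many times and counting titles:

```python
import random

random.seed(1)

# Hypothetical strengths for an eight-team knockout bracket (illustrative only).
strengths = {"Brazil": 1.4, "Germany": 1.3, "Spain": 1.2, "France": 1.1,
             "Belgium": 1.0, "Argentina": 0.9, "England": 0.8, "Uruguay": 0.7}

def play(a, b):
    """Toy model: win probability proportional to relative strength."""
    return a if random.random() < strengths[a] / (strengths[a] + strengths[b]) else b

def knockout(teams):
    """Play successive rounds until one champion remains."""
    while len(teams) > 1:
        teams = [play(teams[i], teams[i + 1]) for i in range(0, len(teams), 2)]
    return teams[0]

# Monte Carlo: replay the whole bracket many times and count the titles.
runs = 20_000
titles = {t: 0 for t in strengths}
for _ in range(runs):
    titles[knockout(list(strengths))] += 1

probs = {t: n / runs for t, n in titles.items()}
print(max(probs, key=probs.get))  # the strongest team wins most often
```

The real studies replace the toy `play` function with a fitted model (bookmakers' odds, random-forest predictions, and so on), but the counting step is the same.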
So it’s definitely Brazil, Germany or Spain. Maybe.
Turning away from using AI and big data to predict the result of the tournament, the teams involved are also using data science and AI to improve their chances of a win.
Forbes reports that the German team is working with SAP, which has built software and analytics tools to offer new insights. These are already used by a number of top-flight teams, including the UK Premier League champions (and my personal favorite team) Manchester City. The philosophy behind the software is that tactics are essential and should be derived by "observing and analyzing the various data sources of a game," according to the team's head of scouting and match analysis, Cristofer Clemens.
One of the ways that the SAP software helps shape a team’s tactics is when it comes to penalty shootouts. The “Penalty Insights” tool is full of information on the opposing team’s penalty records, including videos of previous matches and statistics about run-up techniques and the area of the goal the player is likely to shoot at.
The Economist is also thinking ahead to the knockout stage of the World Cup when penalty shoot-outs can be used to determine the outcome of a game (and send a team crashing out of the competition) if the result is a draw after full and extra time. Amazingly, before shoot-outs were introduced in 1982, if a match was undecided after 120 minutes of play, the winner was determined by the toss of a coin.
Since their introduction at the 1982 World Cup, 26 penalty shoot-outs have decided matches in the knockout stages. The Economist reports that, of the 9 finals played since then, 7 of the 18 finalists reached the final as a result of a successful shoot-out. Furthermore, two of the finals themselves have been decided by penalty shoot-outs. So they are indeed crucial. Fortunately, the article uses analysis by Ignacio Palacios-Huerta of the London School of Economics to determine the best strategy for winning a penalty shoot-out. Let's just hope the international teams playing in this year's tournament have time to subscribe! Maybe there's hope for the Panama team yet!
In today’s mobile ecosystem, you need to create more engaging, insightful, and targeted mobile app experiences. User expectations have evolved as mobile technologies have become more sophisticated, and things considered bleeding edge in the recent past are no longer up to par.
Users want minimal friction, the ability to navigate and interact effortlessly. They want catered experiences that are relevant to them. They expect that apps are intelligent enough to know their intent and desires with less input and fewer taps.
Even a year ago, this wasn’t easy to achieve. But with huge advances in artificial intelligence, big data, and machine learning, it’s rapidly becoming a reality. The combination of these three technologies enables us to make apps smart. We can make our apps learn from each customer interaction and a variety of data points, ensuring that experiences are better, more engaging, and more personalized for users – every time they use the app.
At the forefront of many of these developments is Microsoft Azure's Cognitive Services, a suite of APIs that lets developers add AI to their offerings. The APIs cover a wide range of purposes, from natural language processing to voice recognition and beyond.
There is huge potential to leverage Cognitive Services to improve and even revolutionize mobile app experiences by making our apps smarter.
While Cognitive Services has an impressive list of APIs in production and soon to be available, I believe the following are the most interesting in their potential to impact user experiences, customer intelligence, and engagement.
The Custom Decision Service leverages machine learning to serve contextual content based on data you provide and the behavior of your users. Using feedback (user activities and behaviors), it decides what type of content to serve to particular users, and it learns to serve more targeted content as more data is gathered. The service is also intelligent enough to run experiments, testing different content options with users.
For example, if a music app has a recommendation feature that makes suggestions based on listening history by genre, the service could test user reactions to music from a genre they hadn't previously listened to. This lets the service confirm it isn't making mistakes with the type of content it serves.
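This balance between exploiting what works and experimenting with alternatives resembles a classic multi-armed bandit strategy. As a hedged sketch, here is an epsilon-greedy loop over made-up genre click-through rates; this illustrates the idea, not the Custom Decision Service's actual algorithm:

```python
import random

random.seed(7)

# Hypothetical per-genre click-through rates the service would have to learn.
true_ctr = {"rock": 0.10, "jazz": 0.05, "pop": 0.30}
serves = {g: 0 for g in true_ctr}
clicks = {g: 0 for g in true_ctr}

def choose(epsilon=0.1):
    """Mostly exploit the best-known genre; occasionally experiment."""
    if random.random() < epsilon:
        return random.choice(list(true_ctr))  # explore any genre
    # Unserved genres get an optimistic estimate so each is tried at least once.
    return max(true_ctr, key=lambda g: clicks[g] / serves[g] if serves[g] else 1.0)

for _ in range(5000):
    genre = choose()
    serves[genre] += 1
    clicks[genre] += random.random() < true_ctr[genre]  # simulated user feedback

# The genre with the best true click-through rate ends up served most often.
print(max(serves, key=serves.get))
```

In production the "simulated user feedback" line would be replaced by real user activity flowing back from the app.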
The API allows your app to automatically and rapidly learn about users in order to make the user experience more engaging, personal, and relevant. And since everything is hosted on Azure, you only have to provide the data and it learns automatically.
There are many use cases for implementing the Custom Decision service into mobile apps, particularly for things like ad targeting, news cycles, recommendation engines, push notifications, any content-focused subscription services, and more.
Content Moderator is a service that combines machine and human-based content review to allow app owners to have greater control over any user-generated content on their mobile app. It is ideal for mobile apps that allow for user-generated content because it can moderate text, images, and video.
Text: The service detects potentially offensive content in over 100 languages by matching it against custom lists, and it also looks for personally identifiable information. Flagged content is blocked from being published on your app.
Images: Using custom lists, optical character recognition, and machine learning based classifiers, Content Moderator can block and flag any user-generated images.
Video: Content Moderator searches for and blocks adult video content to prevent it being published to your app.
Furthermore, you are able to add content to blacklists and the system will learn what kind of content it should be blocking based on the data you provide it. As more data is collected, the Content Moderator gets better at identifying which content needs to be flagged.
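A toy version of the custom-list text check can be sketched in a few lines. The block list and the email regex standing in for PII detection below are illustrative assumptions, not the service's actual rules:

```python
import re

# Illustrative custom block list and a simple email pattern standing in for
# personally identifiable information (PII) detection.
BLOCKLIST = {"badword", "offensive"}
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def review(text):
    """Return the reasons a piece of user-generated text should be flagged."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    flags = []
    if words & BLOCKLIST:
        flags.append("custom-list match")
    if EMAIL.search(text):
        flags.append("possible PII")
    return flags

print(review("contact me at jane@example.com"))  # ['possible PII']
print(review("a perfectly fine comment"))        # []
```

The real service goes far beyond this, of course, adding machine-learned classifiers and the human-review loop described below, but the custom-list layer works on this matching principle.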
Importantly, Content Moderator allows you to retain control and visibility with a human review tool. This is a great feature for content that falls into a grey area and is not easily caught algorithmically, as it flags potentially offensive content and allows a human moderator to review and either discard or approve it.
Content Moderator is ideal for mobile apps that have a lot of user-generated content. Apps that don’t have this type of service in place require a great deal of manual moderation – in other words, you have to pay somebody to monitor, review, approve and remove this kind of content from your app. Apps that allow commenting, photo and video uploading, and other forms of user-generated content can benefit tremendously from this service.
The Speaker Recognition API is the next step in authentication and security. It can add voice authentication as a safeguard for accessing applications, in place of passwords, PINs, logins, or even biometrics like fingerprint scans. Voice authentication has great potential to enhance the user experience by streamlining authentication, particularly for apps that rely on username/password logins or PINs.
Many users opt to remain logged into apps so they don’t have to sign in every time they use them. While this is more convenient, it is a security risk. Adding voice authentication keeps the authentication process simple and fast while blocking access to unrecognized users, which is both convenient and secure. When users want to use the app, they simply say a word or phrase to gain access.
This API is especially useful for mobile apps that deal with sensitive information. Banking and financial services first come to mind, though the Speaker Recognition API can be useful for any application that has user profiles with personal data.
The trend toward intelligent apps is quickly gaining momentum and is likely to be a huge driving force in how mobile apps evolve in the foreseeable future. With the potential to create more targeted, contextually relevant, and personal user experiences, it's an area that will see a lot of focus. Microsoft Azure's Cognitive Services is currently a step above the rest when it comes to adding intelligent features to applications, and the APIs discussed above are just a few of the many available.
They might be small, but bark beetles can ruin a forest. In the US, tens of millions of acres have been devastated by them in the past decade alone. In Europe, however, a combination of drones and artificial intelligence might be giving trees a fighting chance.
Bark beetles burrow into trees and lay eggs under the surface of the bark where the larvae feed on the tree’s inner layers and eventually chew their way out. All of this damages the tree’s vascular system, fatally weakening it.
Within months of a …
It’s no secret that the digital revolution is quickly changing the way businesses and customers interact with each other. Like Blockbuster, companies that don’t understand the evolving needs and tastes of their customers will die, while companies like Netflix that fail fast, quickly adopt technology, and evolve, will thrive.
If you want to see this in action now, look at Domino’s Pizza versus its competitors. The pace of technological change and how Domino’s understands its customer base is very different compared to other pizza chains. One small misstep, however, and the bottom can fall out. Several techniques have been developed to ensure cohesion between business and customer—one of them is net promoter score (NPS).
NPS is a very important indicator of customer satisfaction for your organization or business. It typically anchors an overall customer satisfaction initiative: a rise in NPS should translate into more revenue and positive brand recognition, while a drop could have your organization going viral for all the wrong reasons. It's so important, in fact, that a lot of compensation is tied to it (maybe even yours). While NPS isn't the only indicator of customer satisfaction, it is the most well-known. In addition, customers now have access to an expanding number of channels that weren't even thought of 10 years ago. Review sites like G2 Crowd, and app stores like the iTunes App Store and Google Play Store, can present a positive or negative perception to thousands of users within minutes of an otherwise perfect release. These perceptions can bring great success or do irreparable damage to a company or brand.
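For reference, NPS itself is a simple calculation: on a 0-10 "would you recommend us?" scale, it's the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6), with passives (7-8) counted in the total but neither bucket. A quick sketch:

```python
def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# 3 promoters, 2 passives (7-8), and 2 detractors out of 7 responses.
print(nps([10, 9, 9, 8, 7, 6, 3]))  # 14
```

The result ranges from -100 (all detractors) to +100 (all promoters), which is why even small swings get executive attention.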
What does this mean if you're a quality professional? Defects, test automation, test coverage, and quality gates are still very important, but you need to shift up. The purpose behind shift up is very simple: Look up, look around, and see how the work you're doing as a quality professional is impacting the people using your products, applications, or services. Shifting up goes beyond traditional user acceptance testing. It's about understanding, advocating for, and thinking of yourself as a customer. Shifting up is how we truly avoid failure in production and ensure that the software, apps, and services we build match customer and business expectations.
No longer are we looking at moving up the IT chain toward the project management office (PMO) with shift left as success. That’s expected. No longer are we looking to get to production faster by shifting right. That’s status quo if you want to be competitive in the marketplace. You need to ensure that the customer is satisfied with the product or service your company is putting out into the market. Think about it this way: Even the most flawlessly built and functioning applications designed according to the best technical and business requirements can still negatively affect revenue, branding, and competitive position. Or, put another way, all the positive metrics and key performance indicators (KPIs) used to measure the success of shift left and shift right in a release are not a complete gauge of success in production.
Let me reiterate. All the positive metrics and KPIs used to measure the success of shift left and shift right in a release are not a complete gauge of success in production.
Shift up is the concept of taking a diverse set of users with individual needs, wants, and experiences, and ensuring your application can meet all their needs quickly, while respecting processes and timelines defined in your shift-left and shift-right processes. To experience success in shift up, you need to leverage tools and techniques that weren’t even available when UI/UX and usability testing were just getting a foothold in the market. You need artificial intelligence, real-time user monitoring, analytics, and a way to ensure interoperability across the digital spectrum.
To understand how to measure the uniqueness of a customer and how they use your product, we need to define two essential concepts for creating, acquiring, and leveraging the analytics necessary to shift up.
Ethnography is defined as the scientific study of people and cultures. But from a shift-up perspective, I like the definition given by ExperienceUX: “Ethnography is a study through direct observation of users in their natural environment rather than in a lab. The objective of this type of research is to gain insights into how users interact with things in their natural environment.” In a QA setting, that means we need to be concerned with what the user is feeling. For example, are we looking at someone new to technology who’s frustrated at a supermarket self-scan machine? Are there distractions around? Was the user just fired from a job? Can any of these factors affect the way an application is used?
Psychometric analysis, according to the Australian Institute of Psychometric Coaching, is the standard and scientific study of mental capabilities and behaviors. In short, it yields metrics that tell you whether your application is appropriate for the capabilities and behaviors of your target audience: children's educational toys, a web service catering to senior citizens, and so on.
Hopefully by this point, the implications of not looking at your application through an ethnographic or psychometric lens are becoming a little clearer. Think of common issues encountered by your technical support or customer success groups. Now, think about them from an economic or branding aspect. Are people no longer using your application because of true performance or infrastructure problems, or because the user can’t go as fast through the system as they used to? As a result, are they more attracted to your competitor’s service? Is a message displaying that a lane is open or closed on your automated point-of-sale machine considered a bug because the user is looking at the common color scheme as opposed to the words “open” or “closed”? Is this helping your business goal of reducing cashier headcount in favor of automated systems? Finally, if you are a franchiser and notice a difference in quality and revenue between two similar locations in different parts of the country, is it a design flaw or a management problem, or are external factors like the local education system and cultural differences between locations to blame? How do you know? How do you design for this? More important, how do you test for it?
Traditionally, organizations have used usability, UI/UX, and field testing to uncover these problems. Tools exercising those concepts are good and do serve a purpose. But while those tools may tell you in UI/UX testing whether localization is correct or the GUI is aesthetically pleasing, they don’t generally account for the user’s emotions or behaviors. Field testing, whether in beta or in the wild (as another company likes to put it), can be hard to organize, logistically difficult to execute, and may not be completely effective in protecting your IP.
How can you ensure proper QA without resorting to resource-intensive, time-consuming, and expensive options? With an intelligent automated testing solution that uses artificial intelligence, machine learning, continuous monitoring, and analytics.
Newly added, real-time user monitoring functionality in Eggplant AI—the brain behind our Digital Automation Intelligence Suite—gives you all the capabilities you need to successfully shift up without negatively impacting existing processes. Eggplant AI is built on Internet of Things (IoT) devices, databases, and open source testing tools to help teams execute automated test cases without having to re-engineer your entire suite. Eggplant Functional tests an application through the eyes of the user—exactly as someone would see and interact with it—without any complicated scripting.
One of our clients in the telecommunications industry experienced this firsthand with a defect in a customer ordering process that only Eggplant AI was able to detect. To sign up for or transfer service, users would fill out a web form. But if the user went back a screen to edit data and clicked “continue” to proceed, any information previously captured was lost, an error screen appeared, and as a result, users were abandoning the site. The designers didn’t anticipate that users would find the browser’s own back button more useful and intuitive than the custom-designed one. Eggplant AI incorporates data on user behaviors and creates test cases to find bugs that even the most talented QA professionals may miss.
With Eggplant AI, you can model your application, connect actions to snippets, and dynamically create and execute test cases. The solution leverages risk-based testing, testing coverage analysis, bugs detected in earlier releases—and now with the addition of real-time user monitoring, specific ways your customer is using your product in production to ensure that the quality of that release meets business, technical, and customer standards. Psychometric KPIs can be built into the system, and Eggplant AI comes out of the box with auto-ethnography metrics and KPI capabilities.
Your organization lives and dies by how it’s perceived by your customers, as well as how they use your products, applications, and services. Each customer is unique, with different needs, experiences, and behaviors. What may work in one town, country, even continent, may not work in another. The future of QA and how it can provide a real impact to your business requires how well you can shift up. Remember, the customer—every one of them—is always right.