A brief intro to simulating quantum systems with QuTiP.
With the reincarnation of Linux Journal, I thought I’d take this article through a quantum leap (pun intended) and look at quantum computing. As was true with the beginning of parallel programming, the next hurdle in quantum computing is developing algorithms that can do useful work while harnessing the full potential of this new hardware.
Unfortunately though, most people don’t have a handy quantum computer lying around on which they can develop code. The vast majority will need to develop ideas and algorithms on simulated systems, and that’s fine for such fundamental algorithm design.
So, let's take a look at one of the Python modules available for simulating quantum systems, specifically QuTiP. For this short article, I'm focusing on the mechanics of how to use the code rather than the theory of quantum computing.
The first step is installing the QuTiP module. On most machines, you can install it with:
sudo pip install qutip
This should work fine for most people. If you need some latest-and-greatest feature, you always can install QuTiP from source by going to the home page.
Once it’s installed, verify that everything worked by starting up a Python instance and entering the following Python commands:
>>> from qutip import *
>>> about()
You should see details about the version numbers and installation paths.
The first step is to create a qubit. This is the simplest unit of data to be used for quantum calculations. The following code generates a qubit for two-level quantum systems:
>>> q1 = basis(2, 0)
>>> q1
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 1.]
 [ 0.]]
By itself, this object doesn't give you much. The simulation kicks in when you start applying operators to such an object. For example, you can apply the sigma-plus operator (which is equivalent to the raising operator for quantum states). You can do this with one of the operator functions, sigmap():
>>> q2 = sigmap() * q1
>>> q2
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 0.]
 [ 0.]]
As you can see, you get the zero vector as a result from the application of this operator.
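To see why the result is the zero vector, you can reproduce the same matrix arithmetic with plain NumPy. This is a sketch assuming QuTiP's usual convention, in which sigmap() corresponds to the matrix [[0, 1], [0, 0]] and basis(2, 0) to the column vector (1, 0):

```python
import numpy as np

# QuTiP's sigmap() corresponds to this 2x2 matrix,
# and basis(2, 0) to the column vector (1, 0).
sigma_plus = np.array([[0.0, 1.0],
                       [0.0, 0.0]])
q1 = np.array([[1.0],
               [0.0]])

# Applying the raising operator annihilates this basis state,
# leaving the zero vector.
q2 = sigma_plus @ q1
print(q2.ravel())  # → [0. 0.]
```

In other words, this basis state is already at the top of the two-level ladder, so raising it gives zero.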
You can combine multiple qubits into a tensor object. The following code shows how that can work:
>>> from qutip import *
>>> q1 = basis(2, 0)
>>> q2 = basis(2, 0)
>>> print(q1)
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 1.]
 [ 0.]]
>>> print(q2)
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 1.]
 [ 0.]]
>>> print(tensor(q1, q2))
Quantum object: dims = [[2, 2], [1, 1]], shape = (4, 1), type = ket
Qobj data =
[[ 1.]
 [ 0.]
 [ 0.]
 [ 0.]]
This will couple them together, and they’ll be treated as a single object by operators. This lets you start to build up systems of multiple qubits and more complicated algorithms.
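The tensor() result above can be cross-checked with NumPy's Kronecker product, which performs the same operation on the raw arrays (a sketch independent of QuTiP):

```python
import numpy as np

q1 = np.array([[1.0], [0.0]])  # the array behind basis(2, 0)
q2 = np.array([[1.0], [0.0]])  # the array behind basis(2, 0)

# tensor(q1, q2) in QuTiP is the Kronecker product of the
# underlying arrays: a 4x1 column vector for two qubits.
coupled = np.kron(q1, q2)
print(coupled.ravel())  # → [1. 0. 0. 0.]
```

The state space grows multiplicatively: two 2-level systems give a 4-component vector, three give 8, and so on.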
More general objects and operators are available when you start to get to even more complicated algorithms. You can create the basic quantum object with the following constructor:
>>> q = Qobj([[1], [0]])
>>> q
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[ 1.]
 [ 0.]]
These objects have several visible properties, such as the shape and number of dimensions, along with the actual data stored in the object. You can use these quantum objects in regular arithmetic operations, just like any other Python objects. For example, if you have two Pauli operators, sz and sy, you could create a Hamiltonian, like this:
>> H = 1.0 * sz + 0.1 * sy
You can then apply operations to this compound object. For example, you can get the trace with the tr() method, and in this particular case, you can find the eigenenergies for the given Hamiltonian with the eigenenergies() method.
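The trace and eigenenergies of this Hamiltonian can be verified by hand with plain NumPy, using the standard Pauli matrices for sz and sy (a sketch that does not require QuTiP itself):

```python
import numpy as np

# Standard Pauli matrices, the arrays behind sigmaz() and sigmay().
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])

H = 1.0 * sz + 0.1 * sy

# Pauli matrices are traceless, so any linear combination is too.
print(H.trace())  # → 0j

# The eigenenergies are +/- sqrt(1.0**2 + 0.1**2).
energies = np.linalg.eigvalsh(H)
print(energies)  # approximately [-1.00499, 1.00499]
```

This is a useful sanity check: for any two-level Hamiltonian built from Pauli operators, the eigenenergies are plus and minus the norm of the coefficient vector.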
Several helper functions also are available to create these quantum objects for you. The basis() constructor used earlier is one of those helpers. There are also others, such as fock(), coherent(), and destroy(), which build Fock states, coherent states, and annihilation operators, respectively.
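As an illustration of what one of these helpers builds, here is the matrix that QuTiP's destroy(N) returns for N = 3, constructed directly with NumPy (a sketch assuming the standard harmonic-oscillator annihilation operator, truncated to three levels):

```python
import numpy as np

N = 3  # truncate the oscillator to three levels

# The annihilation operator has sqrt(n) on the first
# superdiagonal; this is the matrix destroy(3) builds in QuTiP.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
print(a)

# Applying it to the first excited Fock state |1> lowers
# it to the ground state |0>.
fock1 = np.array([0.0, 1.0, 0.0])
print(a @ fock1)  # → [1. 0. 0.]
```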
Because you’ll be dealing with states that are so far outside your usual day-to-day experiences, it may be difficult to reason what is happening within any particular algorithm. Because of this, QuTiP includes a very complete visualization library to help see, literally, what is happening within your code. In order to initialize the graphics libraries, you’ll likely want to stick the following code at the top of your program:
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
From here, you can use the sphereplot() function to generate three-dimensional spherical plots of orbitals. The plot_energy_levels() function takes a given quantum object and calculates and plots the associated energies for the object. Along with the energies, you can plot the expectation values for a given system with the plot_expectation_values() function.
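The kind of energy-level diagram these helpers produce can also be sketched by hand with plain matplotlib, which makes clear what is being drawn. This is a minimal illustration, assuming the two-level Hamiltonian from earlier and an off-screen (Agg) backend:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

# The two-level Hamiltonian from earlier: H = 1.0*sz + 0.1*sy.
H = np.array([[1.0, -0.1j], [0.1j, -1.0]])
energies = np.linalg.eigvalsh(H)

# Draw one horizontal line per energy level, which is
# essentially what an energy-level plot shows for a Qobj.
fig, ax = plt.subplots()
for e in energies:
    ax.hlines(e, 0.0, 1.0)
ax.set_ylabel("Energy")
ax.set_xticks([])
fig.savefig("energy_levels.png")
```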
I’ve covered only the barest tip of the proverbial iceberg when it comes to using QuTiP. There is functionality that allows you to model entire quantum systems and see them evolving over time. Hopefully, this short article has been able to highlight one tool available to you if you decide to embark on research in quantum systems and computation.
Big Data is huge. With nearly 2.3 trillion gigabytes of data created every single day and the data universe doubling every two years, there’s no denying that data is, and will continue to be, shaping our world. To your everyday user of tech, it may not seem like it, but big data trends are constantly changing and evolving.
We’ve already brought you the trends we expect to see in business intelligence in 2018, but what can you expect for big data in 2018? Although these are the top changes we predict, the rapidly evolving nature of this field means we’re likely to be in for some surprises, too.
Here’s our take.
Despite all the buzz about the unprecedented volumes of data that humanity generates every day, the fact remains that vast stores of records all over the world are still in analog form, un-digitized and thus untapped for analytics.
The coming year will witness increased digitization of this “dark data,” such as historical records, paper files, and many other forms of non-digital data recording. As this new wave of dark data enters the cloud, we can expect to see a major increase in the range of trends and cycles we can predict.
A recent Forbes survey reports that the number of enterprises with more than 100 terabytes of unstructured data has doubled since 2016 — but that only 32% of those companies have succeeded in analyzing that data in any actionable way.
As deep learning algorithms become more capable of deriving useful insights with less human supervision, we expect to see unstructured data become an increasingly significant part of the big data ecosystem.
Chief Data Officers (CDOs) are taking increasingly high-level roles in organizational strategy. The same Forbes survey mentioned above finds that more than 50% of CDOs will report directly to the CEO in 2018, up from just 40% in 2017.
As data analytics become part of the core operations of an ever-increasing number of businesses, CDOs will take more active roles in shaping new initiatives, and will become common sights in executive boardrooms around the world.
In 2017, quantum computing was essentially just an exciting idea, supported by little more than a few promising case studies. And while commercial quantum computing is still several years away, 2018 will bring an increasing battery of tests from companies like Google, Intel, and the Turing Institute, all of whom have quantum computers in active development.
As the ability to crunch 10,000,000,000,000,000 numbers at once becomes a realistic possibility, the sheer scale of the data we discuss will rapidly begin to expand.
“Big data” has been a buzzword for a few years now. However, in the wake of stories like alleged Russian influence in the US presidential election, Facebook’s controversial political ads, and the massive Yahoo! data breach, the general public is beginning to recognize that big data has become an intimate aspect of all our lives.
While we can certainly (and sadly) expect more data-related scaremongering in 2018, we’re also likely to see increasing media coverage of big data as a discipline, as a growing group of non-experts becomes more curious to find out how data analytics impact their lives.
Want to see what 2018 has in store for business intelligence? Keep ahead of the game with our free whitepaper: The Biggest BI Trends for 2018.
Failing to keep pace with the proliferation of big data and the increased power of machine learning could also prevent companies from making a substantial shift from standalone processes to more collaborative environments.
Here are the frontiers of technology that startups and institutions will get cracking on by 2018.
One of the main regulatory problems is that Bitcoin users operate under pseudonymous identities, even though every transaction on the Bitcoin network is publicly visible on the blockchain. This, in turn, raises questions of security, since cryptocurrencies rely on public-key cryptography to protect transactions. For this reason, regulators and financial players must engage in a constructive dialogue to find proper solutions.
Banks are using blockchain in other ways – not to hide identities, but to verify them. Mitsubishi UFJ Financial Group (MUFG), together with OCBC Bank and HSBC, has completed a proof-of-concept for a Know Your Customer (KYC) blockchain in collaboration with Singapore’s Infocomm Media Development Authority (IMDA).
The KYC blockchain runs on a Distributed Ledger Technology (DLT) platform that enables structured information to be recorded, accessed and shared across a distributed network using advanced cryptography. It allows banks to collect, validate, and share customer information in a secure way, potentially making a complex and highly regulated process more efficient.
In the digital age, the universe of data keeps expanding, leaving some companies in disarray when it comes to making effective use of unstructured data from a wide variety of sources. Having big data sets at their disposal only sharpens companies' appetite for testing hypotheses and spotting trends.
It is a challenge to develop algorithms that can successfully query varied data sets and deliver meaningful results. Big data can be fed into a machine-learning algorithm trained to process and reproduce the right behavior, while experts need to be able to manage data chaos.
According to IBM, demand for data scientists is expected to grow 28 percent by 2020, so mid-size companies and startups that lack the skills, resources, and capabilities will find it hard to capitalize on their business potential. In this regard, the MUFG Digital Accelerator programme offers companies the tools and the expertise necessary to achieve their full business potential.
Quantum computing is likely to have the biggest impact on industries that are data-rich and time-sensitive. It’s especially useful for optimization problems present in many industries, such as supply chain and finance, but are difficult to solve, given the large amount of data needed and the current processing power of computers.
Across industries, quantum computing shows promise in helping to pick attractive investment portfolios, optimize supply chains on a global scale, and tailor the ads customers see based on hundreds of their attributes. Quantum computing could even help detect fraud and prescribe personalized medicine.
The development of quantum computing is an expensive process and takes time. The technology is still maturing, and there are some hurdles left to overcome in order to build fully scalable quantum computers.
2018 could be the year in which some applications are brought to light. IonQ, an early-stage company developing quantum computing for commercial applications, plans to bring general-purpose quantum computers to market by late 2018.
For quite some time now, the real estate sector has been ripe for disruption by the use of new technologies. The often-dubbed “proptech” is the real estate version of “fintech”.
Most proptech companies promise to automate the process by providing house-hunters with real-time listings and much faster search options. Apart from property search engines, other opportunity areas include crowdfunding websites, smart building firms, construction planning, and leasing and liquidity. Spatial-mapping robots could provide virtual tours of properties and help potential homeowners make decisions.
Robotics would be required to build hardware for spatial mapping or microsensors. This is especially needed because the Internet of Things can benefit real estate by lowering utility bills, increasing surveillance, and providing information on parking availability, among other things.
With 2018 just around the corner, it’s not too early for organizations to evaluate their technology priorities for next year. Startups can help to figure out how technology can continue to meet the changing needs and demands of key industries.
The MUFG Digital Accelerator programme offers innovative companies a chance to demonstrate their solutions for real issues and for some of the problems outlined above. Companies can expect to gain valuable knowledge from bank experts and get access to a global network of clients, investors, and partners.
MUFG will provide participants with the tools and a dedicated working space in Tokyo to create and sharpen their business models, and will guide participants as they launch their businesses through the program. The attending startups will have an opportunity to present their business plans to a wide range of investors on Demo Day and, if they meet the specific requirements, be assessed for financing.