ALU

STEM News

Factom™ – Anchoring & Securing Data Using Bitcoin (and soon Ethereum) Blockchain

Factom™ is a distributed, decentralized protocol layer running on top of Bitcoin that allows data of any kind to be memorialized, anchored, time-stamped, and cryptographically secured using Bitcoin’s blockchain and its tremendous network hashrate (now estimated at 2.5 exahashes per second).

Due to its use of Merkle trees, Factom makes anchoring data extremely efficient – gigabytes of data can be hashed down to a single 32-byte Merkle root, which is then anchored into one or more blockchains (currently Bitcoin, with Ethereum to follow).
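For readers who want to see the mechanics, here is a minimal Python sketch of that reduction. It is illustrative only: Factom’s actual construction routes entries through Entry Blocks and Directory Blocks, and its hashing details differ, but the core pairwise-hashing idea is the same.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Reduce any number of data blocks to a single 32-byte root."""
    if not leaves:
        raise ValueError("need at least one leaf")
    # Hash every data block once to form the leaf layer.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        # Hash each adjacent pair to build the next level up.
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]  # always 32 bytes, no matter how much data went in

# Gigabytes of entries reduce to one 32-byte digest suitable for anchoring.
print(merkle_root([b"entry-1", b"entry-2", b"entry-3"]).hex())
```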

Paul Snow, Chief Architect of Factom, explains how developers can build applications using Factom’s API and/or its data management and storage solution. These applications leverage the immutability of the blockchain and the scalability of the Factom™ network. Paul shares the story of Factom, how it started and where it is heading now. It’s an intriguing story, and the interview is full of great knowledge.

Original Link

From football to physics

Zachary Hulcher was once set on becoming a lawyer. In high school, he took part in mock trials and competed in youth judicial, playing the role of legal counsel and presenting cases in front of a student jury. He says his inspiration came partly from the television show Law and Order: “There’s drama, there’s action, you send people to jail, and you get to argue with people — and I loved arguing with people.”

But all that changed one day, sometime during his junior year, when he happened to flip through his physics textbook. In an idle moment at school, he turned to the very back of the book and started to read the chapter about special relativity.

Physics, he discovered, put mathematics and science into an almost fantastical perspective. “Ideas that come out of that one chapter are time travel, atomic bombs, things warping when they go really fast, and all these things that shouldn’t be real, but are,” Hulcher says.

Hulcher is currently a senior at MIT, majoring in physics as well as computer science and electrical engineering, with a minor in math. “I love the creative process and figuring out how elegant solutions to real problems arise out of seeming chaos,” he says.

He is a recipient of the 2017 Marshall Scholarship, awarded each year to up to 40 U.S. students who will pursue graduate degrees at universities in the United Kingdom. Next year, Hulcher will be working toward a PhD in high-energy physics at Cambridge University, where he hopes to work on both experimental and theoretical problems of the Standard Model of particle physics, which describes all known fundamental particles and forces except gravity.

“Beautiful math”

Hulcher was born and raised in Montgomery, Alabama. His mother and father are managers for Alabama’s environmental management agency. Hulcher grew up playing basketball with his younger brother in the family’s backyard. The brothers, who towered over their classmates — Hulcher is 6 feet 4 inches tall and his “little” brother, Jacob, is 6 feet 8 inches — joined their church league, and eventually played for their middle and high school teams.  

Along with basketball, Hulcher played football and was on the track and field team, balancing an unrelenting schedule of games and practices with an increasingly challenging course load. Hulcher attended the Montgomery Catholic Preparatory School System from kindergarten through high school, where he was valedictorian and a National Merit Scholar. In his freshman year he began taking math and physics classes with Joe Profio, a teacher who, recognizing that Hulcher was one of the top students in his class, urged him to join the school’s math teams.

Hulcher soon found himself taking long drives to math competitions across the state with Profio and his classmates. During those drives, Profio would talk about math at a deeper level than he could present in class, and Hulcher credits his passion for physics and math to these inspiring talks.

“Our conversations obliterated the idea that the only beauty in the world is found in an imaginary place in a book — beauty was all around me, if I would only look through the right lens,” Hulcher says.

It was around that time that Hulcher says “the wheels started cranking to do science.” The answer to how and where to direct this newfound momentum came from an unlikely source, another TV show.

“I was watching NCIS one day, and one of the characters is from MIT, and I thought, ‘I’m starting to like more science. I should apply there,’ and I did,” Hulcher recalls.

Computing, a physics problem

When Hulcher set foot on the campus for the first time — also the first time he had been anywhere north of Washington, D.C. — he was immediately drawn to the physics seminars held during Campus Preview Weekend.

“I remember an event called something like ‘physics til you drop,’ and two students were standing at a blackboard, doing physics until 5 or 6 am, long past when I could stay awake,” Hulcher says. “People would ask them questions about quantum mechanics, string theory, general relativity, anything, and they would try to answer them on the board. I was pretty hooked.”

He quickly landed on physics as a major but also chose computer science and electrical engineering, a decision based largely on conversations with his roommate, who was also majoring in the subject. When Hulcher took classes that explored quantum computing — the idea that quantum systems such as elementary particles can perform certain calculations vastly more efficiently than classical computers — he realized “all of computing is not just a computer science problem. It’s a physics problem. That’s just cool.”

Seeing through plasma

In the summer following his sophomore year, Hulcher traveled to Geneva, Switzerland, to work at the Compact Muon Solenoid experiment (CMS) at CERN’s Large Hadron Collider, the world’s largest and most powerful particle accelerator. There, he helped to implement an alarm system that monitors the accelerator’s major systems and distributes information to key people in the event of a failure.

He returned the following summer, this time as a theorist. The LHC uses giant magnets to steer beams of particles, such as lead ions, toward each other at close to the speed of light. Hulcher, working as a research assistant with Krishna Rajagopal of MIT’s Department of Physics and the Center for Theoretical Physics, was interested in the hot plasma of quarks and gluons produced when two lead ions collide.

“The plasma doesn’t last very long before it returns to some other state of matter,” Hulcher says. “You don’t even have time to blast it with light to see it; it would just disappear before the light got there. So you need to use events inside it to study it.”

Those events involve jets of particles that spew out from the plasma following a collision between two lead ions. Hulcher worked with Rajagopal and Daniel Pablos, a University of Barcelona graduate student, to help implement a model for how these jets of particles propagate through the resulting plasma. Hulcher recently helped to present the team’s results at a workshop in Paris and is finishing up a paper to submit to a journal — his first publication.

The prism of physics

In addition to his research work, Hulcher has racked up a good amount of teaching experience. As a teaching assistant for MIT’s Department of Physics, he has graded weekly problem sets for classes in classical mechanics and electricity and magnetism. He tutors fellow students in electrical engineering and computer science subjects, and he has spent the last year as eligibles chair of the MIT chapter of the engineering honor society Tau Beta Pi. Through the MIT International Science and Technology Initiatives (MISTI), Hulcher has traveled around the world, to Italy, Mexico, and most recently, Israel, teaching students subjects including physics, electrical engineering, and entrepreneurship.

Of all the relationships he’s developed during his time at MIT, he counts those with his football teammates among the strongest. Hulcher joined MIT’s football team as a freshman offensive lineman, and he says he will remember hanging out on long nights, p-setting with his friends from the team. He will also remember MIT as a really long rollercoaster, he says.

As for what’s next, Hulcher says the plan for now is “to keep liking physics.” If that happens, he hopes to become a researcher and professor, to help students see the world through physics.

“I fell in love with physics,” Hulcher says. “I appreciate light bouncing off a mirror, and smoke billowing up, and light moving through it in a different way. I appreciate looking up at the stars and thinking about what’s out there. The small things I took for granted when I didn’t know much about them, I appreciate now. Everything is just a little prettier.”



Original Link

These Smart Glasses Automatically Adjust to Your Eyes

Imagine glasses that could bring everything into focus, shifting prescriptions from near to farsighted and back again in moments. It’s not possible with today’s glass lenses, but a breakthrough in what are called liquid lenses could make smart glasses that do exactly that. They could put an end to bifocals, and you’d only ever need that one pair of adjustable spectacles for the rest of your life.

Researchers at the University of Utah produced a prototype pair of these glasses using flexible lenses and piezoelectric pistons. For users who are both nearsighted and farsighted, the pistons gradually shift each lens’s curvature between optical powers, and as the lenses change they can bring into focus any point along the way. This is possible because the lenses aren’t made of rigid materials, as they would be in traditional glasses. Instead, each lens is formed by two flexible membranes encasing glycerol, a highly refractive liquid. The lenses stretch and become more convex as transparent pistons push them forward, and more concave as the pistons move back.

To adapt to an individual’s particular vision, the glasses require two pieces of information. One is the prescription, which the user enters into a mobile app. The other is the distance to the desired focal point, which is measured by a sensor in the center of the frames. A microcontroller in one stem of the glasses stores the prescription information and receives the distance measure from the sensor. It plugs those numbers into a specialized algorithm, which in turn determines how much voltage to apply to the piezoelectric pistons to achieve the desired curvature.
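As a rough illustration of that control step, the sketch below applies the standard thin-lens relation to compute the optical power needed for a given prescription and object distance, then maps it to a drive voltage. The voltage calibration constant is entirely hypothetical; the Utah team’s actual algorithm and constants are not given in this article.

```python
def required_power(prescription_diopters: float, distance_m: float) -> float:
    # Thin-lens approximation: focusing on an object d meters away takes
    # an extra 1/d diopters on top of the user's distance prescription.
    return prescription_diopters + 1.0 / distance_m

def piston_voltage(power_diopters: float,
                   volts_per_diopter: float = 12.0,  # hypothetical calibration
                   bias_v: float = 0.0) -> float:
    # Map desired optical power to a piezo drive voltage. A real controller
    # would use a measured, likely nonlinear, calibration curve instead.
    return bias_v + volts_per_diopter * power_diopters

# Example: a +2.0 D prescription, with the frame sensor reporting 0.5 m.
p = required_power(2.0, 0.5)  # 4.0 diopters
print(p, piston_voltage(p))   # -> 4.0 48.0
```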

The glasses, whose battery is stored in one stem of the frame, work for up to six hours before you have to recharge them. Now that they have a functioning prototype, the researchers hope to integrate eye-tracking technology, which could provide even more precise focus adjustments.

With its thick stems and wide frames, the laser-cut acrylic prototype may seem a bit clunky. But for now, function is more important than form. Still, someday soon the researchers may shift their focus to streamlining the design and sending customizable glasses to a store near you.

Original Link

Waymo’s Fight With Uber Might Be the First Shot in a Self-Driving Car IP War

Waymo filed a lawsuit yesterday accusing Uber of stealing the secret designs of the circuit boards and laser-ranging lidars used in its self-driving cars. Like all legal complaints, it ends with a long wish list of “reliefs” it wants the federal court in California to deliver, but the ramifications of this lawsuit could stretch far beyond financial damages or legal costs.

“For years I’ve warned about a potential automated driving patent war that could rival the notorious smartphone patent war,” says Bryant Walker Smith, a law professor at the University of South Carolina and an expert in self-driving regulations. As autonomous vehicles transition from amusing gimmicks to money-making products, who controls the key intellectual property could determine which companies thrive and which fall by the wayside.

The pressure to succeed means that even companies with a financial interest in one another—Waymo’s parent company Alphabet owns about 7 percent of Uber—can find it worthwhile to sue.

On the face of it, Waymo’s lawsuit is a typical allegation of trade-secret misappropriation, says Robert Gomulkiewicz, a professor of intellectual property law at the University of Washington. “When an employee leaves to go work for a competitor, and that person has taken confidential trade-secret information, you want to stop them from using that information—stop them in their tracks,” he says.

Waymo’s next step could be to apply for a temporary restraining order or a preliminary injunction, preventing Uber and its Otto subsidiary from using Waymo’s lidar technology. This might be aimed at Otto’s self-driving trucks, or even at Uber’s autonomous taxis, which are currently carrying passengers in Pittsburgh and Tempe, Arizona.

“Waymo can’t stop Uber from moving forward on all fronts, but if this information is particularly valuable, they might want to stop Uber from using this particular technology right now,” says Gomulkiewicz.

Of course, Waymo’s accusation is only one side of the story. When IEEE Spectrum asked Uber several weeks ago whether it or Otto had licensed any intellectual property from Waymo, or if its technology was all developed in-house, Uber made no comment. But Anthony Levandowski, the talented maverick engineer whom Waymo accused of downloading a massive trove of technical data before his departure, said in an interview last year: “I want to be supersensitive of protecting the confidentiality of Google’s information.”

Waymo is also coy about naming the other former employees who have since moved to Uber. The lawsuit says that a supply chain manager downloaded confidential manufacturing information and an engineer downloaded proprietary research data, but it stops short of either naming them or accusing them of wrongdoing. IEEE Spectrum has identified one of the individuals, but the person did not respond to a request for comment.

“It’s possible that Waymo is not confident that the employees have violated trade-secret law and they’re afraid of some countersuit if they prematurely put those names into a public complaint,” says Gomulkiewicz. He thinks that Waymo might not even be counting on the case ever going to trial: “Sometimes the end game is to engage with the company and work out some kind of cross-license or even a license for the trade-secret information.”

A straight-up licensing deal could be very profitable for a young company like Waymo, which only recently spun out from Alphabet. But Uber is no doubt already poring over its own, smaller trove of patents to find some that Waymo may be infringing upon.

Ultimately, Waymo’s decision to weaponize its intellectual property could affect vulnerable self-driving startups much more than Uber, a multinational behemoth valued at over US $60 billion. “Companies will discover that trivial yet essential parts of automated driving have already been patented,” Walker Smith said in an email. “Google’s patent for driving on the left side of the lane when passing a truck comes to mind. These kind of patents could stop startups without a large defensive patent portfolio from even entering the field.”

Original Link

JustBoom Turns Your Raspberry Pi Into a Hi-Fi Powerhouse

Photo: Randi Klett

From its very first version, the Raspberry Pi attracted interest as a small, inexpensive, and network-friendly home multimedia player. It comes with built-in HDMI and composite video for sending signals to television sets, and most versions have an analog 3.5-millimeter audio socket. Consequently, there’s a fleet of Raspberry Pi-compatible software packages designed to play video and audio files using a stripped-down, TV-friendly interface; many of these can be networked for multiroom audio.

However, while the high-definition video output is sufficient for viewing most movies (unless you have one of the latest 4K screens, of course), the analog audio is not up to the standards of dedicated audiophiles. As a result, a number of companies have started offering various expansion options to improve the Pi’s audio capabilities and ease of integration into existing hi-fi setups.

We wanted to try out one of these systems at IEEE Spectrum—Executive Editor Glenn Zorpette is our resident audiophile—and JustBoom’s product line caught my eye due to the flexibility and audio quality promised by its wide range of add-on HAT (“hardware attached on top”) and stand-alone boards. JustBoom kindly sent us a selection of its kits, and so we tried them out to see if they could deliver.

The first thing I realized is that the very flexibility and range that attracted me to JustBoom’s products means that figuring out what is supposed to go with what can induce some head-scratching. JustBoom does provide a flowchart-style product guide, but it’s somewhat confusing.

So to simplify matters for Pi owners: If you are looking to connect headphones, active speakers (the kind that have their own power supply), or other audio equipment by way of a 3.5-mm or RCA socket, you want a DAC HAT, which is a high-audio-resolution digital-to-analog converter. JustBoom’s DAC HATs come in two flavors: a US $38 one for all the Model A and Model B versions of the Raspberry Pi, and a $25 one for the smaller Pi Zero. If you want to hook your Pi up to a sound system using an optical audio port, use the Digi HAT, which again comes in two flavors. If you want to power passive speakers (the kind with just two wires running into the back of each) directly from the Pi, go with the $76 Amp HAT, which combines a DAC and an amplifier. (If you want the option of being able to listen using headphones and passive speakers with the same Pi, you can buy a DAC HAT and combine it with a $75 stand-alone Amp board that sits on top of the HAT.) JustBoom sells cases that enclose all the various combinations.

Putting the pieces together was straightforward. Each HAT came with mounting screws and plastic separators to hold the board in place once it was pushed onto the Pi’s general purpose input/output (GPIO) connector. Using three Pi’s, I assembled a DAC With Amp, an Amp HAT, and a Digi HAT. (I would have tried the supplied HATs for the Pi Zero as well, but Pi Zeros tend to be constantly out of stock!) The amplifier boards have a socket for an external 8- to 24-volt power supply. This also feeds into the Pi, so you can dispense with the USB cable normally used to power the Pi. If you want to deliver the maximum power the amplifiers can output—55 watts peak—you’ll need a 24-volt supply rated at over 3.1 amperes.

The cases, each with a nice glossy black finish, snap rather than screw together. (I personally prefer screws, as you avoid having to hold pieces together in just the right way and with just the right amount of pressure lest they pop out at the wrong moment during assembly.)

Installing the software was also straightforward: I formatted a microSD card and copied over a NOOBS boot image from the Raspberry Pi Foundation website. NOOBS lets a user easily install a number of operating systems on the Pi via a network connection. I used NOOBS to install some stripped-down operating systems—in this case the OSMC and LibreElec distributions—that are designed to support a single multimedia player application.

Working the Options: We tested JustBoom’s audio outputs for passive and active speakers. The amplifiers proved powerful enough to overload our passive speakers at the highest volumes, so we used an oscilloscope to check for waveform clipping. Photo: Randi Klett

JustBoom provides software guides for several popular players and operating systems, with more on the way. On my first attempt, I tried configuring the DAC With Amp setup to work with the popular OSMC package, but I couldn’t get any of the hardware settings to stick, and couldn’t get any sound out of either the DAC (via the 3.5-mm socket) or the amp speaker connectors using a small passive speaker. After some frustrated poking around the JustBoom website for some missing step and general googling, I tried again using LibreElec. This OS/media player combo proved to have the advantage of a much simpler configuration process and the welcome ability to actually deliver sound out of all the various connectors.

Sticking with LibreElec, I tried the Amp HAT with some much larger passive speakers provided by our audiophile executive editor. Sound from a sample set of high-resolution music files also provided by him (including classical and rock music) was soon pouring out over an impressive range of volumes. At the very loudest point of one rock song, the Pi spontaneously rebooted, most likely because of the anemic 800-milliampere power supply I was using. Nonetheless, even with this supply, it was possible to play music at a level well beyond the comfort threshold for an indoor setup.

Summoning Spectrum’s executive editor for his final verdict, I cycled through the various configurations, including using active speakers to compare the Pi’s native analog output to the DAC’s. Zorpette was pleasantly surprised by the quality of the output, which he feels holds its own against much pricier home DACs and amplifiers.

This article appears in the March 2017 print issue as “Hi-Fi With Your Pi.”

Original Link

“Stop, Thief!” Waymo Screams at Uber

Google’s self-driving car spinoff, Waymo, is charging a former employee, now at Uber, with stealing trade secrets by the gigabytes—claiming damages of at least US $500 million. What’s more, Waymo reconstructs the alleged crime in amazing detail, including an attempt by the main alleged perpetrator to cover his tracks.

Waymo says in a blog post that the secrets involve its lidar sensing system, the crown jewel in its self-driving design. Waymo recently announced home-designed lidar sets with overlapping ranges and functions, the first such system in the self-driving business.

“Recently, we received an unexpected email,” Waymo says. “One of our suppliers specializing in lidar components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s lidar circuit board — except its design bore a striking resemblance to Waymo’s unique lidar design.”

And the chief thief, Waymo says, is Anthony Levandowski. Levandowski is listed as an inventor on many Google lidar patents, and Google’s acquisition of his startup, 510 Systems, was foundational to the self-driving car effort that became Waymo. 

Soon after leaving Google, Levandowski founded Otto, a self-driving truck company that Uber bought for $680 million in August, a mere nine months after its launch. 

Hmm. How did Levandowski build Otto to near-unicorn status in just nine months? Here’s Waymo’s answer:

“We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s lidar and circuit board,” Waymo’s blog says. 

“To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop,” Waymo continues. “Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.”

Gad, Holmes, how do you do it? Gosh, Uber, what’s your answer? 

“We take the allegations made against Otto and Uber employees seriously and we will review this matter carefully,” an Uber spokeswoman said in an email sent out Thursday night.

So, for once we may have a huge Silicon Valley kerfuffle over IP that doesn’t turn on a fine point of the law. In fact, if Waymo’s detailed allegations pass muster, it’d be more like a smoking gun, a bag of loot emblazoned with a dollar sign, and a full-motion video capture of the deed itself.

And the takeaway: not all apparent talent-poaching is really about talent. 

In September, Sebastian Thrun, an ex-Googler who helped start up its car project, interpreted the Otto acquisition as just another case of recruitment en masse. “Uber has just bought a half-a-year-old company [Otto] with 70 employees for almost $700 million,” Thrun said, according to Recode. “If you look at GM, they spent $1 billion on its acquisition of Cruise. These are mostly talent acquisitions. The going rate for talent these days is $10 million.”

Okay, so maybe Cruise’s people are worth their weight in gold, or some other precious substance.  But it could just be that the fat Otto purchase price all went for just one guy—and his external hard drive.

Original Link

Browser Fingerprinting Tech Works Across Different Browsers for the First Time

Browsing the Web just got a little less anonymous. The software that lets websites identify you by certain characteristics of your computer and its software was usually thwarted if you switched browsers. But now computer scientists have developed new browser fingerprinting software that identifies users across Web browsers with a degree of accuracy that beats the most sophisticated single-browser techniques used today.

The new method, created by Yinzhi Cao, a computer science professor at Lehigh University, in Pennsylvania, accurately identifies 99.24 percent of users across browsers, compared to 90.84 percent of users identified by AmIUnique, the most advanced single-browser technique.

Browser fingerprinting is an online tracking technique commonly used to authenticate users for retail and banking sites and to identify them for targeted advertising. By combing through information available from JavaScript and the Flash plugin, it’s possible for third parties to create a “fingerprint” for any online user.

That fingerprint includes information about users’ browsers and screen settings—such as screen resolution or which fonts they’ve installed—which can then be used to distinguish them from someone else as they peruse the Web.

In the past, though, these techniques worked only if people continued to use the same browser—once they switched, say, to Firefox from Safari, the fingerprint was no longer very useful. Now, Cao’s method allows third parties to reliably track users across browsers by incorporating several new features that reveal information about their devices and operating systems.

Cao, along with his colleagues at Lehigh and Washington University, in St. Louis, began creating their tech by first examining the 17 features included in AmIUnique, the popular single-browser fingerprinting system, to see which ones might also work across browsers.

For example, one feature that AmIUnique relies on is screen resolution. Cao found that screen resolution can actually change for users if they adjust their zoom levels, so it’s not a very reliable feature for any kind of fingerprinting. As an alternative, he used a screen’s ratio of width to height because that ratio remains consistent even when someone zooms in.

Cao borrowed or adapted four such features from AmIUnique for his own cross-browser technique, and he also came up with several new features that revealed details about users’ hardware or operating systems, which remain consistent no matter which browser they open.

The new features he developed include an examination of a user’s audio stack, graphics card, and CPU. Overall, he relied on a suite of 29 features to create cross-browser fingerprints.
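To make the idea concrete, here is a schematic Python sketch of how a fingerprint can be derived from such features. The feature names and values below are invented for illustration; Cao’s real system gathers its 29 features with in-browser scripts rather than anything like this.

```python
import hashlib
import json

def fingerprint(features: dict) -> str:
    # Canonical serialization so identical features always hash identically.
    canonical = json.dumps(features, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Cross-browser features depend on the hardware and OS, not the browser.
device = {
    "screen_ratio": round(2560 / 1440, 4),  # survives zoom, unlike resolution
    "audio_sample_rate": 44100,             # from the audio stack
    "audio_channels": 2,
    "gpu_vendor": "ExampleGPU Corp.",       # invented value
    "cpu_cores": 8,
}
print(fingerprint(device))
```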

To extract that information from someone’s computer, Cao wrote scripts that direct a user’s system to perform 36 tasks. The results from these tasks include information about the system, such as the sample rate and channel count in the audio stack. It takes less than a minute for the script to complete all 36 tasks.

To test the accuracy of his 29-point method, Cao recruited 1,903 helpers from Amazon Mechanical Turk and Microworkers. He asked them to visit a website from multiple browsers and found that the method worked across many popular browsers, including Google Chrome, Internet Explorer, Safari, Firefox, Microsoft Edge Browser, and Opera, as well as a few obscure ones, such as Maxthon and Coconut.

Cao tried removing several of the 29 features, and their related tasks, to see if he could use even fewer to achieve the same degree of accuracy, but he found that doing so lowered the accuracy slightly each time. “One is not a standout,” he says.

The only browser that his method didn’t work on was Tor. Earlier this month, Cao published the open source code for his technique so that anyone could use it. His next step? To work on more ways that users can avoid being fingerprinted across browsers, should they wish to opt out.

Original Link

How EPA Calculates the Cost of Environmental Compliance for Electricity Generators


People pay for electricity directly, out of pocket, when they pay their electric bill. But they may also pay in an indirect way, when they bear the environmental and health costs associated with pollution from electricity generation. With a new EPA administrator recently installed, how these costs are calculated is under new scrutiny. The University of Texas Energy Institute’s Full Cost of Electricity Study includes estimates of these environmental pollution costs as one part of the full system cost of electricity.

There is a well-established body of literature at the intersection of toxicology, epidemiology, and economics, and it governs how the Environmental Protection Agency estimates the benefits of regulations that reduce pollution from power plants. As part of the University of Texas Energy Institute’s Full Cost of Electricity (FCe-) Study, my colleagues and I took a deep dive into the cost of these environmental externalities. Our goal: Describe in detail how the EPA estimates the dollar value of pollution reductions.

Whenever the EPA proposes a major new rule, it undertakes a rigorous analysis, comparing a benefit estimate with its estimate of the societal costs of complying with the proposed rule. Our analysis [PDF] illustrates how the EPA completed this kind of analysis for three recent and major rules targeting fossil-fueled power plants: the Cross State Air Pollution Rule (regulating pollutant transport to downwind communities), the Mercury and Air Toxics Rule, and the Clean Power Plan (regulating greenhouse gas emissions).

In each of these three rulemakings, the EPA concluded that the health and environmental benefits greatly exceeded compliance costs, even though in some cases compliance costs were in the billions of dollars.

These analyses are not without controversy. Many dispute the dollar value that the EPA places on a premature death, and many others disagree with the value assigned to a ton of carbon emissions. For the mercury rule and the greenhouse gas rule, benefits dwarf costs only because of so-called co-benefits—reduction of pollution other than the pollutant targeted by the rule.

These and other measurement issues are laid out in our white paper, “EPA’s Valuation of Environmental Externalities from Electricity Production” [PDF].

David Spence is a professor at the McCombs School of Business and School of Law, part of the University of Texas at Austin.

Original Link

The 5G Frontier: Millimeter Wireless

Illustration: Dan Page

There is an eternal quality to how technology evolves. As existing areas get overworked, new frontiers open up at the fringes. Then innovators rush in to occupy the new territory before it, in turn, becomes overworked. There is an example of such a frontier today in wireless communications. IEEE’s 5G wireless initiative has the goal of serving many more users with much higher transmission speeds. But with the existing cellular bands tightly packed, where does all the required additional network capacity come from?

In contrast with the traditional radio-spectrum management view of scarce capacity, where a finite amount of spectrum must be divided up among users, communication theorists see wireless capacity as virtually unlimited. Capacity can be increased indefinitely by going to ever smaller cells and higher frequencies that offer more bandwidth, while greater efficiency can be achieved with advanced signal processing and new spectrum-sharing policies. Among all these approaches, the greatest immediate impact would be achieved by moving to the higher frequencies in the millimeter range—the region of 30 to 300 gigahertz, where bandwidth is available and plentiful.

But in many ways, millimeter-wave wireless truly is a frontier. Today the millimeter band is largely uninhabited and inhospitable, as signals using these wavelengths run up against difficult propagation problems. Even when signals travel through free space, attenuation increases with frequency, so usable path lengths for millimeter waves are short, roughly 100 to 200 meters. Such distances could be accommodated with the smaller cell sizes envisioned in 5G, but there are numerous other impediments. Buildings and the objects in and around them, including people, block the signal. Rain and foliage further attenuate millimeter waves, and diffraction—which can bend longer wavelengths around occluding objects—is far less effective. Even surfaces that are nicely reflective at longer wavelengths appear rougher to millimeter waves, and so diffuse the signal.
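The frequency penalty in free space alone is easy to quantify with the standard Friis path-loss formula. The short sketch below is a textbook calculation, not from the article; it compares a familiar Wi-Fi frequency with a millimeter-wave one over the same path, assuming isotropic antennas.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    # Free-space path loss (Friis), in decibels: 20*log10(4*pi*d*f/c).
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# The same 150 m link at a classic Wi-Fi band and a millimeter-wave band:
for f in (2.4e9, 60e9):
    print(f"{f / 1e9:5.1f} GHz: {fspl_db(150, f):5.1f} dB")
# 60 GHz loses about 28 dB more than 2.4 GHz over the identical path,
# before counting rain, foliage, or blockage.
```

High-gain antenna arrays can claw back much of that deficit, which is one reason the frontier is not hopeless.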

So there may be gold in that frontier, but it is going to be very difficult to mine. Nevertheless, you never know until you try. I’m reminded of Marconi’s successful transatlantic transmission in 1901, when physicists insisted that the signal would fly off into space. Recently a team at NYU has been experimenting with millimeter-wave transmission within the urban canyons of New York City. Like the physicists of yesteryear, I would have said that this would never work. But the data show otherwise. They demonstrated a surprising amount of coverage despite the buildings, pedestrian and vehicular traffic, and general chaos typical of dense cities. Granted, there are a number of holes in the coverage, but initial results are encouraging.

When I expressed some surprise at these findings, someone pointed out to me that in its line-of-sight dependence, millimeter-wave propagation might be likened to that of visible light, and that the nighttime world isn’t as dark as might be expected. Taking this analogy to heart, I prowled my house on a dark night, the lone source of illumination a weak light at the end of a long corridor. I discovered dim light in unexpected places. I wondered: How did the light get here? Even so, nearby rooms might be caves of darkness. All the while, I was conscious of the strong Wi-Fi signal that followed me everywhere I went.

So the millimeter-wave frontier is going to be a difficult one, but we engineers are good at this kind of challenge, and we’re not without tools. For one thing, at these small wavelengths, we can build postage-stamp-size phased-array antennas, and high-speed electronics allow us to use advanced techniques that have been pioneered at longer wavelengths.

All this sophisticated technology so we will be able to view 3D video of cats while walking down a busy city street. Or, hopefully, some other use.

This article appears in the March 2017 print issue as “The Millimeter Wireless Frontier.”

Original Link

Smart Road to Parked Car: Talk to Me

When cars start talking both to one another and to traffic signals, they will sometimes suffer from the cellphone user’s bane: dropped calls. That’s because there won’t always be enough cars roaming around to serve as partners or relays.

So why not have access to parked cars? They’re shiny, they’re rested, and they are there precisely when you need them most—when few cars are plying the roads. 

Until now, most concepts for smart roads have planned to fill gaps in the chain of communication with relay stations. You find such roadside units, as they’re called, in Japan and in parts of Europe. The problem is, each one carries an upfront cost of around US $50,000, according [PDF] to the U.S. Government Accountability Office. Throw in maintenance and amortization and the annual cost can average $18,000.

“That’s why the U.S. government hasn’t deployed them massively,” says Ozan Tonguz, a professor of electrical engineering at Carnegie Mellon University, in Pittsburgh.

Tonguz hopes to get around the weak infrastructure by wringing more networking power out of the cars themselves. “The idea is that once people park their cars, this amenity is already there for others. The analogy I’d draw is to Skype. If you download that app on your laptop, in a way you agree to have your laptop  be used as a node for other people. What goes around comes around.” 

Tonguz and colleagues at CMU have founded a company, called Virtual Traffic Lights, to market just such a cooperative network service. The company’s wider goals include getting traffic lights to interact with traffic to optimize wait time. But none of that can happen before infrastructure arrives, and the company can’t afford to wait—and wait—for that. Thus the emphasis on parked cars.

When you park your car, its computer directs the radio to listen for beacons from other relay stations. It uses that data to estimate how good the coverage is, compares that estimate with those of nearby cars and any other relay stations, and decides whether to become an active relay or to go into power-saving “sleep” mode. Of course, it does all this while the engine is turned off, using power drawn from the battery.
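The article doesn’t spell out the selection logic, and Virtual Traffic Lights’ algorithms are proprietary, but the decision described above might look something like the following sketch. All names, thresholds, and structure here are assumptions made for illustration.

```python
def decide_role(own_coverage: float,
                neighbor_coverages: list[float],
                threshold: float = 0.8) -> str:
    """Return 'relay' if this parked car meaningfully improves coverage.

    Inputs are 0-to-1 coverage estimates built from the beacons the car
    overhears after parking; the threshold is an invented parameter.
    """
    best_nearby = max(neighbor_coverages, default=0.0)
    # Stay awake only if the area is poorly served and we are the best option.
    if best_nearby < threshold and own_coverage > best_nearby:
        return "relay"
    return "sleep"  # power-saving mode, sparing the battery

print(decide_role(0.6, [0.3, 0.5]))  # -> relay
print(decide_role(0.4, [0.9]))       # -> sleep
```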

Wouldn’t an active—that is, talkative—parked car finally drain its battery? Not so, Tonguz says: “We estimate that the average time a car is parked, at your work, is 6.6 hours. To function as a roadside unit, that car would draw only about 4 percent of the battery.”

That sip of current would power a radio built around the vehicle-to-vehicle communications standard that the U.S. Department of Transportation wants all new cars to carry by the early 2020s. It’s called dedicated short-range communications (DSRC), and it operates in the 5.9-gigahertz band. The 2017 Cadillac CTS will be the first car equipped with such a radio.

“When a vehicle is moving, there is obstruction, for instance from high-rise buildings,” Tonguz says. “Then there’s the fading signal, as you move away. A parked car’s DSRC range can go to 1,000 meters. But from a practical perspective, it’s more like 300 meters—two or three blocks—when there’s a clear line of sight, and 140 meters when there isn’t. That’s a respectable range for a parked car.”

More tech (and patents for it) is involved, of course. Not all parked cars would be tapped, just a handful that algorithms would select according to a number of criteria, including matters of efficiency, like keeping the number of relay handoffs to a minimum. And such stationary relays would matter most during the quiet time between rush hours.

Perhaps the single greatest theme of the emerging world of cybercars is the drive to economize on capital. If most people give up on owning cars and instead rely on ride-hailing services operated by robotic systems, vehicles would work far more and sit idle far less.

And if those parked cars we still had were to pull their weight by serving as relay stations, so much the better, Tonguz argues. “We are saying there is a resource here.”

Original Link

Borophene Takes Big Step Towards Electronic Devices

Just three years ago, we were reporting on the first tentative steps in producing a two-dimensional (2D) form of boron that came to be known as borophene. Since then, most of the work with borophene has focused on synthesizing the material and characterizing its properties.

Now researchers at Northwestern University—led by Mark Hersam, a Northwestern professor at the forefront of investigating the potential of a variety of 2D materials—have taken a significant step beyond merely characterizing borophene and have started to move towards making nanoelectronic devices from it.

In research described in the journal Science Advances, Hersam’s team has for the first time combined borophene with another material to create a heterostructure, which is a fundamental building block for electronic devices. Since this work represents the first demonstration of a borophene-based heterostructure, the researchers believe that it will guide future and ongoing research into using borophene for nanoelectronic applications.

Heterostructures are built from multiple heterojunctions, the interfaces where layers of different 2D materials meet. By stacking layers of materials with different properties on top of each other—such as a conductor with an insulator—you can tailor the electronic properties of the heterostructures to create functional devices.

Of course, there is a growing list of 2D materials from which to form heterostructures, but borophene offers a fairly rare quality in the “flatlands” of 2D materials: it’s a 2D metal.

“As a 2D metal, borophene helps fill a void in the family of 2D nanoelectronic materials,” said Hersam in an e-mail interview with IEEE Spectrum. “Fundamentally, borophene is also interesting since there is no 3D layered version of boron (i.e., there is no boron version of graphite).  Therefore, borophene is relatively unique among 2D materials in that it only exists in the atomically thin limit.”

While this is an interesting property of borophene, it also makes it a challenge to synthesize because you can only make it in pristine, ultrahigh vacuum environments. The relatively high chemical reactivity of borophene also presents challenges for handling it in ambient conditions. 

Hersam believes that one of the key outcomes of this most recent research was not just combining borophene with a semiconductor, but also better understanding the chemistry of borophene so that it will become easier to manipulate.

“We are at an early stage of the development cycle of borophene,” said Hersam. “The material was first synthesized fairly recently, and we are now just learning about its chemistry and how to integrate it with other materials.  More work is required before the full potential of borophene is realized in electronic applications.”

While the Northwestern researchers have developed a completely ultrahigh vacuum (UHV)-based process for forming borophene-based heterostructures, they can only reliably handle the material in UHV processes, which creates experimental challenges. 

Hersam recognizes that they will need to develop reliable encapsulation and/or passivation schemes that allow the borophene to be removed from a UHV environment so that a practical device could actually be fabricated and tested.

Another big challenge: how to transfer borophene from the present silver growth substrate to an electrically insulating substrate. 

Hersam added: “When borophene is on silver (both of which are metallic), the silver substrate electrically shorts out the borophene, which creates serious problems for any device applications.”

Original Link

New white paper: Semico Research describes how companies could double SoC profitability through post-silicon debug

CAMBRIDGE, UK, 23rd February 2017: Engineering teams could more than double the profitability of their complex SoC projects and reduce their development costs by a quarter by improving their debug and monitoring strategy, according to a new white paper published by Semico Research in association with UltraSoC.

The white paper considers the increasing cost and complexity of SoC design, and includes a detailed analysis of typical engineering costs in areas from architectural design through validation, verification and bring-up. It then calculates the impact of time-to-market acceleration on shipment volumes, ASPs, overall revenue and profitability, for a typical SoC over a 24-month market window.

UltraSoC CEO Rupert Baines commented: “Hardware-based monitoring and debug is usually treated as a minor technical issue. But what Semico’s independent analysis reveals is a real business impact: on costs, on revenue and on profit. It’s why we’re convinced that this is an issue for the CFO and general management, as well as for engineering. This white paper supports that argument with hard numbers and solid analysis based on Semico’s extensive knowledge of real world SoC design projects.”

Semico’s research compares two SoC development teams undertaking similar designs, one with and one without on-chip debug and monitoring technology. It reveals a 27% reduction in the total cost of an SoC development project, and a startling 2.3x uplift in profitability over the typical 24-month market window, due mainly to accelerated market entry, increased shipment volumes and improved ASPs.

According to Semico, even after the design is complete and the SoC is shipped, on-chip monitoring and debug brings further benefits, because it dramatically reduces the cost of finding a bug in the system once it has entered the market and is in the field. The same technology can also be used for gathering real usage data and in-life performance optimization.

Modern verification tools are so good that silicon-level bugs are rare. However, such tools do not help at the system level. Today’s SoCs are so complex, often with multiple cores, hierarchical interconnects and many sub-systems, that bugs arise from their interactions. Blocks that are correct at unit test fail because of the dependencies that only emerge in real silicon at run time. UltraSoC’s embedded debug IP helps identify these subtle problems arising from systemic complexity post-silicon, in bring-up and in the field.

The full white paper, which runs to 15 pages and includes numerous figures and tables, can be downloaded now from the UltraSoC website.

About UltraSoC
UltraSoC is an independent provider of SoC infrastructure that enables rapid development of embedded systems based on advanced SoC devices. The company is headquartered in Cambridge, United Kingdom. For more information visit http://www.ultrasoc.com

Contacts
UltraSoC
Media
Andy Gothard:
andy.gothard@ultrasoc.com
+44 7768 082 044
Twitter: @ultrasoc

Original Link

11.016J The Once and Future City (MIT)

Course Description

Class website: The Once & Future City

What is a city? What shapes it? How does its history influence future development? How do physical form and institutions vary from city to city and how are these differences significant? How are cities changing and what is their future? This course will explore these and other questions, with emphasis upon twentieth-century American cities. A major focus will be on the physical form of cities—from downtown and inner-city to suburb and edge city—and the processes that shape them.

These questions and more are explored through lectures, readings, workshops, field trips, and analysis of particular places, with the city itself as a primary text. In light of the 2016 centennial of MIT’s move from Boston to Cambridge, the 2015 iteration of the course focused on MIT’s original campus in Boston’s Back Bay, and the university’s current neighborhood in Cambridge. Short field assignments, culminating in a final project, will provide students opportunities to use, develop, and refine new skills in “reading” the city.

Original Link

5.95J Teaching College-Level Science and Engineering (MIT)

Welcome!

This is one of over 2,200 courses on OCW. Find materials for this course in the pages linked along the left.

MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum.

No enrollment or registration. Freely browse and use OCW materials at your own pace. There’s no signup, and no start or end dates.

Knowledge is your reward. Use OCW to guide your own life-long learning, or to teach others. We don’t offer credit or certification for using OCW.

Made for sharing. Download files for later. Send to friends and colleagues. Modify, remix, and reuse (just remember to cite OCW as the source.)

Learn more at Get Started with MIT OpenCourseWare

Original Link

Episode 100: Let’s build the internet of moving things

It’s our 100th podcast, which would be a big deal if Kevin Tofel and I were a TV show hoping for syndication, but in the podcast world it means we’ve been at this for almost two years. YAY! We took a brief stroll down memory lane before digging into the week’s news covering new LTE chips for the IoT from Intel and Qualcomm as well as a report from ARM and The Economist that highlights slow growth in enterprise IoT projects. We talk about a few things to see at Mobile World Congress next week, discuss the Orbi router and also share our thoughts on Somfy motorized shades, female personal assistants and shopping from Google Home.

Pictured: Google’s Home speaker and AI assistant.

For our guest this special week, I speak with João Barros, CEO and founder of Veniam, about what happens when we treat cars and buses as roving nodes on a mesh network. Veniam calls this creating the internet of moving things, and it’s a big, awesome idea. We cover everything from the connectivity needs of autonomous cars to how connected transportation makes cities smarter. You’ll like it.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: João Barros, CEO of Veniam
Sponsors: Ayala Networks and SpinDance

  • Somewhat bad news for enterprise IoT adoption
  • How do I like the Orbi router from Netgear?
  • Amazon Prime or Google Express?
  • Building a mixed, mobile, mesh network is hard to say and hard to do
  • Cars can be sensors and hotspots for the smart city

Original Link

3Q: Julien de Wit on the discovery of seven temperate, nearby worlds

Today, an international team including astronomers from MIT and the University of Liège in Belgium has announced the discovery of seven Earth-sized planets orbiting a nearby star just 39 light years from Earth. All seven planets appear to be rocky, and any one of them may harbor liquid water, as each lies within or near the so-called habitable zone, the range of distances where temperatures are suitable for sustaining liquid water on a planet’s surface.

The discovery marks a new record, as the planets make up the largest known number of habitable-zone planets orbiting a single star outside our solar system. The results are published today in the journal Nature.

Julien de Wit, a postdoc in the Department of Earth, Atmospheric, and Planetary Sciences, is heading up the team’s study of the planets’ atmospheres, the compositions of which may offer up essential clues as to whether these planets harbor signs of life. De Wit and principal investigator Michael Gillon of the University of Liège will be presenting the group’s results in a talk at MIT on February 24.

MIT News spoke with de Wit about the solar system’s new terrestrial neighbors and the possibility for life beyond our planet.

Q: What can you tell us so far about these seven planets?

A: These planets are the first found beyond our solar system with the winning combination of being Earth-sized, temperate, and well-suited for imminent atmospheric studies. Temperate means that they can possibly harbor liquid water at their surface, while well-suited for atmospheric studies means that, owing to the star they orbit and how close to Earth it is, we will be able to get exquisite insights into their atmospheres within the next decade.

The planets are tightly packed around a small, cool, dim red star called TRAPPIST-1, located just 39 light years from Earth. TRAPPIST-1 is an ultracool dwarf star with a surface temperature estimated at about 2,550 kelvins, versus our sun’s roughly 5,800 kelvins.

The planets are so tightly packed that all seven of them lie within a distance of TRAPPIST-1 that is one-fifth the distance from the sun to Mercury. This is so close that, depending on the planet, a year would last between 1.5 and 20 days. These planets are also most likely tidally locked, meaning that they always show the same hemisphere to their star, like the Moon does to the Earth; from the surface, the star never rises or sets, but stays fixed in the sky.

The small size of the star (about 11 percent the radius of the sun) is an essential part of what makes this system so interesting. The planets were detected using the transit technique, which searches for the drop in a star’s brightness when a planet passes in front of it. Because the flux drop is directly related to the planet-to-star area ratio, the smaller the star, the easier the detection of a planet. The signal of TRAPPIST-1’s planets is, for instance, about 80 times larger than it would be if they were orbiting a star like our sun.
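The arithmetic behind that factor of roughly 80 is just the area ratio. A quick sketch, using the ~11 percent figure quoted above:

```python
R_SUN_KM, R_EARTH_KM = 695_700, 6_371

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    # Fractional dip in brightness = planet-to-star area ratio.
    return (r_planet_km / r_star_km) ** 2

earth_sun = transit_depth(R_EARTH_KM, R_SUN_KM)
earth_trappist = transit_depth(R_EARTH_KM, 0.11 * R_SUN_KM)  # ~11% of Rsun

print(f"Earth transiting the sun:    {earth_sun:.1e}")        # ~8.4e-05
print(f"Earth transiting TRAPPIST-1: {earth_trappist:.1e}")   # ~6.9e-03
print(f"Signal boost: {earth_trappist / earth_sun:.0f}x")     # ~83, about 80
```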

All of these planets are the best targets found so far to search for signs of life, and it is remarkable that they are all transiting the same star. This means that the system will allow us to study each planet in great depth, providing for the first time a rich perspective on a different planetary system than ours, and on planets around the smallest main sequence stars.

We have initiated a worldwide reconnaissance effort that spans the electromagnetic spectrum from the UV to radio, to study this system in more depth. Here at MIT, and with a large group of international experts around the world, graduate student Mary Knapp is co-leading the search for signs of planetary magnetic fields in radio, while I am leading the atmospheric reconnaissance with the Hubble Space Telescope. With observations of this system taken by Hubble last May, we have already ruled out the presence of puffy, hydrogen-dominated atmospheres around the two innermost planets, which means that they are not “mini-Neptunes” that would be uninhabitable, but are terrestrial like Mercury, Venus, Earth, and Mars. We are currently processing observations of the new planets and should gain new insights soon.

Q:  Take us back to the moment of discovery. What tipped you all off that all of these planets might actually be terrestrial, and possibly even Earth-like?

A: It was such an incredible day. On Sept. 19, 2016, NASA’s Spitzer Space Telescope had started its 20-day-long monitoring of TRAPPIST-1 to search for flux drops. On Oct. 6, the first part of the data, corresponding to the first 10 days of observation, was released on NASA’s secured servers. Now, the fun fact is that on that day, Michael Gillon was stationed in Morocco, with a very bad internet connection, and couldn’t start playing with the data. Fortunately for his nerves, four other researchers (Jim Ingalls, Brice-Olivier Demory, Sean Carey, and I) could access the data. When I downloaded it and performed a quick processing, we had a pure, jaw-dropping, “never-seen-before” moment: By eye, I could count five more transits than expected over a short 10-day window — simply insane. After a quick iteration with Michael, we thought then that the system contained three more planets, one being a super-Earth. But we quickly realized that what appeared to be a super-Earth was actually two planets transiting at the same time.

Our verdict: four more planets, all Earth-sized. When the second half of the data arrived on Oct. 27, we all gathered online for a debrief and cheers (with Trappist beers!). It was such an exhilarating moment.

Q: What are the chances that there may be life on one or more of these planets, and what will it take to find out?

A: We have literally no idea, but we have a chance of figuring that out soon! So far, we know that the planets could be great candidates, as they have the size of the Earth and are temperate. We now need to determine their surface conditions. This requires (1) obtaining a tight constraint on their masses, (2) assessing the type of atmospheres they have, (3) determining if they (may) actually harbor surface liquid water, and (4) searching for signs of life (i.e., biosignatures). What this will take is a significant multidisciplinary effort over the next 20 to 25 years.

When planets orbit close together, with their orbital periods in certain ratios, they interact with each other through gravity, causing the timing of their transits to shift slightly as the planets tug on one another. By measuring these transit-timing variations, we can determine the masses of the planets. Knowing precisely the size and mass of a planet gives us its bulk density, and geophysicists can then help us better understand its interior.
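As a sketch of that last step, assuming the mass and radius have already been constrained (the planet values below are hypothetical):

```python
import math

M_EARTH_KG = 5.972e24
R_EARTH_M = 6.371e6

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Bulk density in kg/m^3 from a mass and radius given in Earth units."""
    mass = mass_earths * M_EARTH_KG
    radius = radius_earths * R_EARTH_M
    return mass / ((4.0 / 3.0) * math.pi * radius ** 3)

# Hypothetical planet: 0.9 Earth masses, 1.05 Earth radii.
rho = bulk_density(0.9, 1.05)
print(f"bulk density: {rho:.0f} kg/m^3 (Earth is about 5514 kg/m^3)")
```

A density near Earth’s points to a rocky interior, while a markedly lower value would suggest large amounts of volatiles such as water or a thick envelope of gas.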

We will also assess their atmosphere types with a scaled-up version of our reconnaissance programs. Over the next two years, we are hoping to leverage Hubble’s capabilities to search for the presence of water- or methane-dominated atmospheres.

In the future, upcoming observatories like the James Webb Space Telescope will help us constrain the planets’ atmospheric composition, temperature, and pressure profiles — all essential information for determining the surface conditions possible over their globes.

It is important to point out here that obtaining these constraints will only be possible if we have a complete and unbiased understanding of how starlight passing through a planet’s atmosphere is affected by the atmosphere’s different components, as a function of temperature, pressure, and the other gases present. Then, and only then, will we be able to assess the habitability of these planets.

A key part of searching for signs of life on these planets will be determining what exactly counts as a sign of life, or biosignature. This is where the insight of biochemists will be essential. Fortunately, here at MIT we are already tackling this question. Indeed, Professor Sara Seager, together with postdoc Janusz Petkowski and William Bains of Cambridge University, is currently investigating the chemical space that life can occupy, to create a list of biosignatures that we will use in the future to determine whether the gases we detect indicate the presence of life on these planets.


Topics: Astronomy, Astrophysics, EAPS, Earth and atmospheric sciences, NASA, Planetary science, Research, Satellites, School of Science, Space, astronomy and planetary science

Original Link

[VIDEO] How to Design and Build a TDA2050 Stereo Amplifier

See what each component does, how to set the gain and bandwidth, and how to lay out the PCB and wiring. I also play some music so you can hear what it sounds like.

The post [VIDEO] How to Design and Build a TDA2050 Stereo Amplifier appeared first on Circuit Basics.

Original Link

Seven MIT researchers win 2017 Sloan Research Fellowships

Seven MIT researchers from four departments are among the 126 American and Canadian researchers awarded 2017 Sloan Research Fellowships, the Alfred P. Sloan Foundation announced today.

New MIT-affiliated Sloan Research Fellows are: Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Semyon Dyatlov, an assistant professor of mathematics; Nikta Fakhri, an assistant professor of physics; Kerstin Perez, an assistant professor of physics; Aaron Pixton, an assistant professor of mathematics; Caroline Uhler, an assistant professor of electrical engineering and computer science, and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems; and Alexander Wolitzky, an associate professor of economics.

The new fellows also include new faculty member Virginia Vassilevska Williams, the Steven and Renee Finn Career Development Associate Professor of Electrical Engineering and Computer Science and a member of CSAIL, who is being honored for work done at Stanford University before she joined MIT in January 2017.

Awarded annually since 1955, the Sloan Research Fellowships are given to early-career scientists and scholars whose achievements and potential identify them as rising stars among the next generation of scientific leaders. This year’s recipients are drawn from 60 colleges and universities across the United States and Canada.

“The Sloan Research Fellows are the rising stars of the academic community,” said Paul L. Joskow, president of the Alfred P. Sloan Foundation, in a press release. “Through their achievements and ambition, these young scholars are transforming their fields and opening up entirely new research horizons. We are proud to support them at this crucial stage of their careers.”

Administered and funded by the foundation, the fellowships are awarded in eight scientific fields: chemistry, computer science, economics, mathematics, evolutionary and computational molecular biology, neuroscience, ocean sciences, and physics. To qualify, candidates must first be nominated by fellow scientists and subsequently selected by an independent panel of senior scholars. Fellows receive $60,000 to be used to further their research.

Since the beginning of the program, 43 Sloan Fellows have earned Nobel Prizes, 16 have won the Fields Medal in mathematics, 69 have received the National Medal of Science, and 16 have won the John Bates Clark Medal in economics.

For a complete list of this year’s winners, visit the Sloan Research Fellowships website.


Topics: Sloan fellows, Faculty, School of Science, School of Engineering, SHASS, Mathematics, Physics, Electrical Engineering & Computer Science (eecs), Economics, awards, Awards, honors and fellowships

Original Link

Tiny fibers open new windows into the brain

For the first time ever, a single flexible fiber no bigger than a human hair has successfully delivered a combination of optical, electrical, and chemical signals back and forth into the brain, putting into practice an idea first proposed two years ago. With some tweaking to further improve its biocompatibility, the new approach could provide a dramatically improved way to learn about the functions and interconnections of different brain regions.

The new fibers were developed through a collaboration among material scientists, chemists, biologists, and other specialists. The results are reported in the journal Nature Neuroscience, in a paper by Seongjun Park, an MIT graduate student; Polina Anikeeva, the Class of 1942 Career Development Professor in the Department of Materials Science and Engineering; Yoel Fink, a professor in the departments of Materials Science and Engineering, and Electrical Engineering and Computer Science; Gloria Choi, the Samuel A. Goldblith Career Development Professor in the Department of Brain and Cognitive Sciences, and 10 others at MIT and elsewhere.

The fibers are designed to mimic the softness and flexibility of brain tissue. This could make it possible to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers, thus enabling much more extensive data collection. For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins, which sensitize neurons to light, through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber just 200 micrometers across — comparable to the width of a human hair.

Previous research efforts in neuroscience have generally relied on separate devices: needles to inject viral vectors for optogenetics, optical fibers for light delivery, and arrays of electrodes for recording, adding a great deal of complication and the need for tricky alignments among the different devices. Getting that alignment right in practice was “somewhat probabilistic,” Anikeeva says. “We said, wouldn’t it be nice if we had a device that could just do it all.”

After years of effort, that’s what the team has now successfully demonstrated. “It can deliver the virus [containing the opsins] straight to the cell, and then stimulate the response and record the activity — and [the fiber] is sufficiently small and biocompatible so it can be kept in for a long time,” Anikeeva says.

Since each fiber is so small, “potentially, we could use many of them to observe different regions of activity,” she says. In their initial tests, the researchers placed probes in two different brain regions at once, varying which regions they used from one experiment to the next, and measuring how long it took for responses to travel between them.

The key ingredient that made this multifunctional fiber possible was the development of conductive “wires” that maintained the needed flexibility while also carrying electrical signals well. After much work, the team was able to engineer a composite of conductive polyethylene doped with graphite flakes. The polyethylene was initially formed into layers, sprinkled with graphite flakes, then compressed; then another pair of layers was added and compressed, and then another, and so on. A member of the team, Benjamin Grena, a recent graduate in materials science and engineering, referred to it as making “mille feuille,” (literally, “a thousand leaves,” the French name for a Napoleon pastry). That method increased the conductivity of the polymer by a factor of four or five, Park says. “That allowed us to reduce the size of the electrodes by the same amount.”

One immediate question that could be addressed through such fibers is that of exactly how long it takes for the neurons to become light-sensitized after injection of the genetic material. Such determinations could only be made by crude approximations before, but now could be pinpointed more clearly, the team says. The specific sensitizing agent used in their initial tests turned out to produce effects after about 11 days.

The team aims to reduce the width of the fibers further, to make their properties even closer to those of the neural tissue. “The next engineering challenge is to use material that is even softer, to really match” the adjacent tissue, Park says. Already, though, dozens of research teams around the world have been requesting samples of the new fibers to test in their own research.

“The authors report some remarkably sophisticated designs and capabilities in multifunctional fiber devices, where they create a single platform for colocalized expression, recording, and illumination in optogenetics studies of brain function,” says John Rogers, professor of materials science and engineering, biomedical engineering, and neurological surgery at Northwestern University, who was not associated with this research. “These types of advances in technologies and tools are essential to progress in neuroscience research,” he says.

The research team included members of MIT’s Research Laboratory of Electronics, Department of Electrical Engineering and Computer Science, McGovern Institute for Brain Research, Department of Chemical Engineering, and Department of Mechanical Engineering, as well as researchers at Tohoku University in Japan and Virginia Polytechnic Institute. It was supported by the National Institute of Neurological Disorders and Stroke, the National Science Foundation, the MIT Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, and the McGovern Institute for Brain Research.


Topics: Research, School of Engineering, Materials Science and Engineering, DMSE, Research Laboratory of Electronics, Chemistry, Electrical Engineering & Computer Science (eecs), Brain and cognitive sciences, Nanoscience and nanotechnology, Health, School of Science

Original Link

Want an Energy-Efficient Data Center? Build It Underwater

Illustration: MCKIBILLO

When Sean James, who works on data-center technology for Microsoft, suggested that the company put server farms entirely underwater, his colleagues were a bit dubious. But for James, who had earlier served on board a submarine for the U.S. Navy, submerging whole data centers beneath the waves made perfect sense.

This tactic, he argued, would not only limit the cost of cooling the machines—an enormous expense for many data-center operators—but it could also reduce construction costs, make it easier to power these facilities with renewable energy, and even improve their performance.

Together with Todd Rawlings, another Microsoft engineer, James circulated an internal white paper promoting the concept. It explained how building data centers underwater could help Microsoft and other cloud providers manage today’s phenomenal growth in an environmentally sustainable way.

At many large companies, such outlandish ideas might have died a quiet death. But Microsoft researchers have a history of tackling challenges of vital importance to the company in innovative ways, even if the required work is far outside of Microsoft’s core expertise. The key is to assemble engineering teams by uniting Microsoft employees with colleagues from partner companies.

The four of us formed the core of just such a team, one charged with testing James’s far-out idea. So in August of 2014, we started to organize what soon came to be called Project Natick, dubbed that for no particular reason other than that our research group likes to name projects after cities in Massachusetts. And just 12 months later, we had a prototype serving up data from beneath the Pacific Ocean.

Project Natick had no shortage of hurdles to overcome. The first, of course, was keeping the inside of its big steel container dry. Another was figuring out the best way to use the surrounding seawater to cool the servers inside. And finally there was the matter of how to deal with the barnacles and other forms of sea life that would inevitably cover a submerged vessel—a phenomenon that should be familiar to anyone who has ever kept a boat in the water for an extended period. Clingy crustaceans and such would be a challenge because they could interfere with the transfer of heat from the servers to the surrounding water. These issues daunted us at first, but we solved them one by one, often drawing on time-tested solutions from the marine industry.

But why go to all this trouble? Sure, cooling computers with seawater would lower the air-conditioning bill and could improve operations in other ways, too, but submerging a data center comes with some obvious costs and inconveniences. Does trying to put thousands of computers beneath the sea really make sense? We think it does, for several reasons.

For one, it would offer a company like ours the ability to quickly target capacity where and when it is needed. Corporate planners would be freed from the burden of having to build these facilities long before they are actually required, in anticipation of later demand. For an industry that spends billions of dollars a year constructing ever-increasing numbers of data centers, quick response time could provide enormous cost savings.

The reason underwater data centers could be built more quickly than land-based ones is easy enough to understand. Today, the construction of each such installation is unique. The equipment might be the same, but building codes, taxes, climate, workforce, electricity supply, and network connectivity are different everywhere. And those variables affect how long construction takes. We also observe their effects in the performance of our facilities, where otherwise identical equipment exhibits different levels of reliability depending on where it is located.

As we see it, a Natick site would be made up of a collection of “pods”—steel cylinders that would each contain possibly several thousand servers. Together they’d make up an underwater data center, which would be located within a few kilometers of the coast and placed between 50 and 200 meters below the surface. The pods could either float above the seabed at some intermediate depth, moored by cables to the ocean floor, or they could rest on the seabed itself.

Once we deploy a data-center pod, it would stay in place until it’s time to retire the set of servers it contains. Or perhaps market conditions would change, and we’d decide to move it somewhere else. This is a true “lights out” environment: the system’s managers would work remotely, and no one would be on hand to fix things or swap out parts for the operational life of the pod.

Now imagine applying just-in-time manufacturing to this concept. The pods could be constructed in a factory, provisioned with servers, and made ready to ship anywhere in the world. Unlike the case on land, the ocean provides a very uniform environment wherever you are. So no customization of the pods would be needed, and we could install them quickly anywhere that computing capacity was in short supply, incrementally increasing the size of an underwater installation to meet capacity requirements as they grew. Our goal for Natick is to be able to get data centers up and running, at coastal sites anywhere in the world, within 90 days from the decision to deploy.

Most new data centers are built in locations where electricity is inexpensive, the climate is reasonably cool, the land is cheap, and the facility doesn’t impose on the people living nearby. The problem with this approach is that it often puts data centers far from population centers, which limits how fast the servers can respond to requests.

For interactive experiences online, these delays can be problematic. We want Web pages to load quickly and video games such as Minecraft or Halo to be snappy and lag free. In years to come, there will be more and more interaction-rich applications, including those enabled by Microsoft’s HoloLens and other mixed reality/virtual reality technologies. So what you really want is for the servers to be close to the people they serve, something that rarely happens today.

It’s perhaps a surprising fact that almost half the world’s population lives within 100 kilometers of the sea. So placing data centers just offshore near coastal cities would put them much closer to customers than is the norm today.

If that isn’t reason enough, consider the savings in cooling costs. Historically, such facilities have used mechanical cooling—think home air-conditioning on steroids. This equipment typically keeps temperatures between 18 and 27 °C, but the amount of electricity consumed for cooling is sometimes almost as much as that used by the computers themselves.

More recently, many data-center operators have moved to free-air cooling, which means that rather than chilling the air mechanically, they simply use outside air. This is far cheaper, with a cooling overhead of just 10 to 30 percent, but it means the computers are subject to outside air temperatures, which can get quite warm in some locations. It also often means putting the centers at high latitudes, far from population centers.

What’s more, these facilities can consume a lot of water. That’s because they often use evaporation to cool the air somewhat before blowing it over the servers. This can be a problem in areas subject to droughts, such as California, or where a growing population depletes the local aquifers, as is happening in many developing countries. Even if water is abundant, adding moisture to the air makes the electronic equipment more prone to corrosion.

Our Natick architecture sidesteps all these problems. The interior of the data-center pod consists of standard computer racks with attached heat exchangers, which transfer the heat from the air to some liquid, likely ordinary water. That liquid is then pumped to heat exchangers on the outside of the pod, which in turn transfer the heat to the surrounding ocean. The cooled transfer liquid then returns to the internal heat exchangers to repeat the cycle.

Of course, the colder the surrounding ocean, the better this scheme will work. To get access to chilly seawater even during the summer or in the tropics, you need only put the pods sufficiently deep. For example, at 200 meters’ depth off the east coast of Florida, the water remains below 15 °C all year round.

Our tests with a prototype Natick pod, dubbed the “Leona Philpot” (named for an Xbox game character), began in August 2015. We submerged it at just 11 meters’ depth in the Pacific near San Luis Obispo, Calif., where the water ranged between 14 and 18 °C.

Over the course of this 105-day experiment, we showed that we could keep the submerged computers at temperatures that were at least as cold as mechanical cooling can achieve and with even lower energy overhead than the free-air approach—just 3 percent. That energy-overhead value is lower than any production or experimental data center of which we are aware.
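To put those overhead figures on a common footing, the sketch below converts each one into a power usage effectiveness (PUE) number: total facility power divided by the power delivered to the IT equipment. The 1-megawatt IT load and the mechanical-cooling fraction are illustrative assumptions; the 10-to-30-percent and 3-percent overheads are the figures cited above.

```python
IT_LOAD_KW = 1000.0  # hypothetical 1 MW of servers

scenarios = [
    ("mechanical cooling (illustrative)", 0.90),  # "almost as much as the computers"
    ("free-air cooling", 0.20),                   # midpoint of the 10-30% range
    ("Natick prototype", 0.03),                   # the 3% overhead reported above
]

for label, overhead in scenarios:
    total_kw = IT_LOAD_KW * (1.0 + overhead)
    pue = total_kw / IT_LOAD_KW
    print(f"{label}: total draw {total_kw:.0f} kW, PUE = {pue:.2f}")
```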

Because there was no need to provide an on-site staff with lights to see, air to breathe, parking spaces to fight over, or big red buttons to press in case of emergency, we made the atmosphere in the data-center pod oxygen free. (Our employees managed the prototype Natick pod from the comfort of their Microsoft offices.) We also removed all water vapor and dust. That made for a very benign environment for the electronics, minimizing problems with heat dissipation and connector corrosion.

Illustration: MCKIBILLO. The Long View: Future data-center pods will be considerably longer than the prototype and would each contain a large number of server racks. The electronics will be cooled by transferring waste heat to the surrounding seawater using internal and external heat exchangers.

Microsoft is committed to protecting the environment. In satisfying its electricity needs, for example, the company uses renewable sources as much as possible. To the extent that it can’t do that, it purchases carbon offsets. Consistent with that philosophy, we are looking to deploy our future underwater data centers near offshore sources of renewable energy—be it an offshore wind farm or some marine-based form of power generation that exploits the force of tides, waves, or currents.

These sources of energy are typically plentiful offshore, which means we should be able to match where people are with where we can place our energy-efficient underwater equipment and where we would have access to lots of green energy. Much as data centers today sometimes act as anchor tenants for new land-based renewable-energy farms, the same may hold true for marine energy farms in the future.

Another factor to consider is that conventionally generated electricity is not always easily available, particularly in the developing world. For example, 70 percent of the population of sub-Saharan Africa has no access to an electric grid. So if you want to build a data center to bring cloud services closer to such a population, you’d probably need to provide electricity for it, too.

Typically, electricity is carried long distances at 100,000 volts or higher, but ultimately servers use the same kinds of low voltages as your PC does. To drop the grid power to a voltage that the servers can consume generally requires three separate pieces of equipment. You also need backup generators and banks of batteries in case grid power fails.

Locating underwater data centers alongside offshore sources of power would allow engineers to simplify things. First, by generating power at voltages closer to what the servers require, we could eliminate some of the voltage conversions. Second, by powering the computers with a collection of independent wind or marine turbines, we could automatically build in redundancy. This would reduce both electrical losses and the capital cost (and complexity) associated with the usual data-center architecture, which is designed to protect against failure of the local power grid.

An added benefit of this approach is that the only real impact on the land is a fiber-optic cable or two for carrying data.

The first question everyone asks when we tell them about this idea is: How will you keep the electronics dry? The truth is that keeping things dry isn’t hard. The marine industry has been keeping equipment dry in the ocean since long before computers even existed, often in far more challenging contexts than anything we have done or plan to do.

The second question—one we asked ourselves early on—is how to cool the computers most efficiently. We explored a range of exotic approaches, including the use of special dielectric liquids and phase-change materials as well as unusual heat-transfer media such as high-pressure helium gas and supercritical carbon dioxide. While such approaches have their benefits, they raise thorny problems as well.

While we continue to investigate the use of exotic materials for cooling, for the near term we see no real need. Natick’s freshwater plumbing and radiator-like heat exchangers provide a very economical and efficient cooling mechanism, one that works just fine with standard servers.

A more pertinent issue, as we see it, is that an underwater data center will attract sea life, in effect forming an artificial reef. This process of colonization by marine organisms, called biofouling, starts with single-celled creatures, which are followed by somewhat larger organisms that feed on those cells, and so on up the food chain.

When we deployed our Natick prototype, crabs and fish began to gather around the vessel within 24 hours. We were delighted to have created a home for those creatures, so one of our main design considerations was how to maintain that habitat while not impeding the pod’s ability to keep its computers cool.

In particular, we knew that biofouling on external heat exchangers would disrupt the flow of heat from those surfaces. So we explored the use of various antifouling materials and coatings—even active deterrents involving sound or ultraviolet light—in hopes of making it difficult for life to take hold. Although it’s possible to physically clean the heat exchangers, relying on such interventions would be unwise, given our goal to keep operations as simple as possible.

Thankfully, the heat exchangers on our Natick pod remained clean during its first deployment, despite it being in a very challenging setting (shallow and close to shore, where ocean life is most abundant). But biofouling remains an area of active research, one that we continue to study with a focus on approaches that won’t harm the marine environment.

The biggest concern we had by far during our test deployment was that equipment would break. After all, we couldn’t send a tech to some server rack to swap out a bad hard drive or network card. Responses to hardware failures had to be made remotely or autonomously. Even in Microsoft’s data centers today, we and others have been working to increase our ability to detect and address failures without human intervention. Those same techniques and expertise will be applied to Natick pods of the future.

How about security? Is your data safe from cyber or physical theft if it’s underwater? Absolutely. A Natick site would provide the same encryption and other security guarantees of a land-based Microsoft data center. While no people would be physically present, sensors would give a Natick pod an excellent awareness of its surroundings, including the presence of any unexpected visitors.

You might wonder whether the heat from a submerged data center would be harmful to the local marine environment. Not likely. Any heat generated by a Natick pod would rapidly be mixed with cool water and carried away by the currents. The water just meters downstream of a Natick vessel would get a few thousandths of a degree warmer at most.

So the environmental impact would be very modest. That’s important, because the future is bound to see a lot more data centers get built. If we have our way, though, people won’t actually see many of them, because they’ll be doing their jobs deep underwater.

This article appears in the March 2017 print issue as “Dunking the Data Center.”

About the Authors

Ben Cutler, Spencer Fowers, Jeffrey Kramer, and Eric Peterson all work at Microsoft Research in Redmond, Wash.

Original Link

New Record: Paralyzed Man Uses Brain Implant to Type Eight Words Per Minute

“What did you enjoy the most about your trip to the Grand Canyon?” the Stanford researchers asked. 

In response, a cursor floated across a computer screen displaying a keyboard and confidently picked out one letter at a time. The woman controlling the cursor didn’t have a mouse under her hand, though. She’s paralyzed due to amyotrophic lateral sclerosis (also called Lou Gehrig’s disease) and can’t move her hands. Instead, she steered the cursor using a chip implanted in her brain.

“I enjoyed the beauty,” she typed. 

The woman was one of three participants in a study, published today in the journal eLife, that broke new ground in the use of brain-computer interfaces (BCIs) by people with paralysis. The woman who took the Grand Canyon trip demonstrated remarkable facility with a “free typing” task in which she answered questions however she chose. Another participant, a 64-year-old man paralyzed by a spinal cord injury, set a new record for speed in a “copy typing” task. Copying sentences like “The quick brown fox jumped over the lazy dog,” he typed at a relatively blistering rate of eight words per minute. 

That’s four times as fast as the previous world’s best, says Stanford neurosurgeon Jaimie Henderson, a senior member of the research team. Further improvements to the user interface—including the kind of auto-complete software that’s standard on smartphones—should boost performance dramatically.  

This experimental gear is far from being ready for clinical use: To send data from their implanted brain chips, the participants wear head-mounted components with wires that connect to the computer. But Henderson’s team, part of the multiuniversity BrainGate consortium, is contributing to the development of devices that can be used by people in their everyday lives, not just in the lab. “All our research is based on helping people with disabilities,” Henderson tells IEEE Spectrum.

Although getting brain surgery and having an implant installed is a drastic move, Henderson’s team recently surveyed people with paralysis regarding their willingness to use a variety of different BCI technologies. About 30 percent of respondents with injuries high on their spinal cords said they’d get a brain implant if it could be made wireless, and if it allowed them to type just two words per minute. 

Here’s how the system works: The tiny implant, about the size of a baby aspirin, is inserted into the motor cortex, the part of the brain responsible for voluntary movement. The implant’s array of electrodes records electrical signals from neurons that “fire” as the person thinks of making a motion like moving their right hand—even if they’re paralyzed and can’t actually move it. The BrainGate decoding software interprets the signal and converts it into a command for the computer cursor.
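The decoding step can be pictured with a toy model. The sketch below is not the BrainGate decoder, which is far more sophisticated and is recalibrated for each user; it simply fits a linear map from binned firing rates to cursor velocity on synthetic calibration data, which is the basic shape of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 96 electrode channels, 500 time bins.
n_channels, n_bins = 96, 500
intended = rng.standard_normal((n_bins, 2))      # intended (vx, vy) per bin
tuning = rng.standard_normal((2, n_channels))    # each channel's velocity tuning
rates = intended @ tuning + 0.5 * rng.standard_normal((n_bins, n_channels))

# Fit decoder weights W so that rates @ W approximates the intended velocity.
W, *_ = np.linalg.lstsq(rates, intended, rcond=None)

# Decode a new bin of activity into a cursor-velocity command.
test_rates = np.array([1.0, -0.5]) @ tuning      # activity for a known intent
print("decoded velocity:", test_rates @ W)       # close to [1.0, -0.5]
```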

Interestingly, the system worked best when the researchers customized it for each participant. To train the decoder, each person would imagine a series of different movements (like moving their whole right arm or wiggling their left thumb) while the researchers looked at the data coming from the electrodes and tried to find the most obvious and reliable signal. 

Each participant ended up imagining a different movement to control the cursor. The woman with ALS imagined moving her index finger and thumb to control the cursor’s left-right and up-down motions. Henderson says that after a while, she didn’t have to think about moving the two digits independently. “When she became facile with this, she said it wasn’t anything conscious; she felt like she was controlling a joystick,” he says. The man with the spinal cord injury imagined moving his whole arm as if he were sliding a puck across a table. “Each participant settled on a control modality that worked best,” Henderson says.

This experiment built on prior work by Henderson’s team involving both humans and monkeys. In a human study in 2015, the researchers tried out a new decoding algorithm that was better able to determine the user’s intended cursor direction. They demonstrated its effectiveness with a typing task that used a type of word-prediction software called Dasher, which enabled an ALS patient to type six words per minute. 

Next came a primate study in 2016, which tested a further improvement to the algorithm: an ability to determine not just cursor direction but also understand when the user wanted to click on something. With this point-and-click system, the monkeys proved adept at moving the cursor to highlighted targets on the screen. By putting the highlighted targets on a display of letters, the researchers got the monkeys to mimic a typing task. Constructing sentences like Hamlet’s “To be or not to be—that is the question,” the monkeys achieved an impressive rate of 12 words per minute. But of course, they didn’t know what they were doing.

This latest study used the improved algorithms in the more naturalistic setting of a question-and-answer session. Study coauthor Paul Nuyujukian, director of Stanford’s new Brain Interfacing Laboratory, says they didn’t know what to expect from the free typing challenge. 

“There was a piece of this that absolutely could not be answered until we got to a clinical study,” says Nuyujukian. “What happens when someone is synthesizing what they’re trying to communicate, and then communicating it in real time? That’s something we could not determine in the monkey lab.” It seemed possible that the mental act of deciding on an answer could interfere with the communication process, he says.   

But the decoder proved up to the task, marking an important step toward a practical communication technology that people could use in their own homes.

Still, the challenges that remain are significant: BrainGate researchers want to make a system that’s fully implantable, wireless, and doesn’t require frequent recalibration to keep the decoder working properly. Henderson says investigators throughout the BrainGate group are now working on all those fronts.

Once the BCI gear is refined, it could be used to control other things besides a typing system. People with paralysis could use their brain signals to control wheelchairs, robotic arms, or even electrodes that stimulate their dormant muscles. 

The Stanford researchers describe their work with the study’s three participants as a partnership, and say the three not only tested theories but also contributed to the technology’s development. In every testing session, say the researchers, the participants would give useful feedback about the system’s operation.

Nuyujukian recalls one occasion when the researchers were working with the woman who has ALS to test an altered algorithm for cursor control. “She said, ‘Something’s different, it doesn’t move right,’” Nuyujukian remembers. “After the second day of testing, we figured out that she was right; we had introduced a bug in the code.” 

Original Link

New Paradigm in Microscopy: Atomic Force Microscope on a Chip

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom and later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as US $50,000.

Now, researchers at the University of Texas (UT) Dallas have turned this paradigm on its ear by developing an AFM that uses microelectromechanical (MEMS) technology. The upshot: The entire AFM fits onto a computer chip about one square centimeter in size.

In research described in the IEEE Journal of Microelectromechanical Systems, the scientists connected the MEMS-based AFM to a small printed circuit board containing all the circuitry, sensors, and other miniaturized components that control the device’s movements.

While the reduced size, and the lower cost likely to follow from the economies of scale of MEMS manufacturing, look to be the clearest benefits, it is how these AFMs take their measurements that truly sets them apart from previous AFM technology. AFMs have traditionally mapped the surface of a material by recording the vertical displacement needed to maintain a constant force on the cantilevered probe tip as it scans a sample’s surface.

Instead of depending on a constant force or distance, the tip of the device developed by the UT scientists oscillates up and down perpendicular to the sample, touching and then lifting off the surface—a mode known in AFM circles as “tapping mode.” This technique stands in stark contrast to the widely used contact-mode AFM. The UT researchers have developed a novel approach to implementing the tapping mode by using a single piezoelectric transducer simultaneously for actuation and sensing, which removes the need for an optical sensor.

Because the oscillation amplitude tends to change as the tip interacts with the sample, this atomic force microscope builds up an image by adjusting the tip’s height so as to hold the amplitude constant.
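A toy version of that feedback loop is sketched below, under simplifying assumptions: an invented amplitude-versus-gap curve and a basic PI controller, whereas real AFM control must contend with the cantilever’s dynamics and demodulation of its oscillation.

```python
def scan_line(profile, setpoint=0.8, kp=0.4, ki=0.1):
    """Trace a surface by holding the oscillation amplitude at a setpoint.

    Toy model: the free oscillation amplitude is 1.0, and the amplitude
    drops in proportion to the tip-sample gap once the tip gets close.
    The controller raises or lowers the tip so the amplitude stays at
    `setpoint`; the recorded z-trajectory is the topography image.
    """
    z, integral, image = 1.5, 0.0, []
    for h in profile:
        gap = max(z - h, 0.0)
        amplitude = min(1.0, gap)        # amplitude saturates at its free value
        error = setpoint - amplitude     # positive error -> tip too close
        integral += error
        z += kp * error + ki * integral  # retract or approach accordingly
        image.append(z)
    return image

# Hypothetical surface: flat, then a 0.3-high step, then flat again.
profile = [0.0] * 20 + [0.3] * 20 + [0.0] * 20
image = scan_line(profile)
print([round(v, 2) for v in image[::5]])  # settles to profile + setpoint gap
```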

“Performing the measurement in tapping mode reduces the forces applied to the sample, which is important when scanning fragile samples such as biological specimens,” explained Reza Moheimani, professor of mechanical engineering at UT Dallas and coauthor of the research, in an email interview with IEEE Spectrum.

In tests of the prototype MEMS AFM, the UT Dallas researchers introduced samples into the AFM using the same approach mechanism of a commercial macroscale AFM. The printed circuit board containing the MEMS device was lowered onto the samples using a stepper motor.

“We imagine that a final version of the system would use a similar approach, with a compact motorized mechanism being used to bring the MEMS die in contact with the sample from above,” says Moheimani.

Because the sample would have to be prepared in a similar way as current AFM imaging, the setup would not need to be in a vacuum, Moheimani says. “At this stage, imaging within a liquid environment is not feasible with this device. However, future iterations of the MEMS AFM may be designed with this capability in mind,” Moheimani adds.

The prototype MEMS AFM was designed so that it could be fabricated using a straightforward MEMS process, with all mechanical scanning and sensing components contained in a single silicon-on-insulator MEMS die.

“There were significant challenges in designing the device to comply with the requirements of the fabrication process while obtaining the desired characteristics in terms of mechanical bandwidth, displacement range, and cross coupling,” says Moheimani.

The UT Dallas researchers had to employ an iterative design process in which the testing and characterization of an earlier fabricated version of the device allowed them to better exploit the capabilities of the fabrication process and significantly improve performance in areas such as cross-coupling reduction and reliability of signal routing.

Moheimani and his colleagues already have their sights set on the next engineering challenges for the MEMS AFM. For instance, the sharp probe at the end of the MEMS AFM’s cantilever is fabricated in a postprocessing step using electron beam deposition. The researchers are currently working to integrate the fabrication of the probe into the MEMS fabrication process. This will simplify the fabrication of the device and improve its durability.

Additionally, the researchers are looking to miniaturize the supporting electronics and control systems. They’re trying out prototype hardware in conjunction with an application-specific integrated circuit (ASIC) that interfaces with the MEMS die. This would make it possible to connect with the end user’s PC via a single USB connection.

Moheimani adds: “Our ultimate aim is to develop a system that can perform video-rate AFM imaging using a single MEMS chip. The current device successfully demonstrates many of the capabilities required, and future iterations of the device will be designed to further work towards this goal. Such a device would enable high-speed AFM imaging to be performed using a portable and cost-effective system.”

Original Link

Institute Professor Emerita Mildred Dresselhaus, a pioneer in the electronic properties of materials, dies at 86

Mildred S. Dresselhaus, a celebrated and beloved MIT professor whose research helped unlock the mysteries of carbon, the most fundamental of organic elements — earning her the nickname “queen of carbon science” — died Monday at age 86.

Dresselhaus, a solid-state physicist who was Institute Professor Emerita of Physics and Electrical Engineering and Computer Science, was also nationally known for her work to develop wider opportunities for women in science and engineering. She died at Mount Auburn Hospital in Cambridge, Massachusetts, following a brief period of poor health.

“Yesterday, we lost a giant — an exceptionally creative scientist and engineer who was also a delightful human being,” MIT President L. Rafael Reif wrote in an email today sharing the news of Dresselhaus’s death with the MIT community. “Among her many ‘firsts,’ in 1968, Millie became the first woman at MIT to attain the rank of full, tenured professor. She was the first solo recipient of a Kavli Prize and the first woman to win the National Medal of Science in Engineering.”

“Millie was also, to my great good fortune, the first to reveal to me the wonderful spirit of MIT,” Reif added. “In fact, her down-to-earth demeanor was a major reason I decided to join this community. … Like dozens of young faculty and hundreds of MIT students over the years, I was lucky to count Millie as my mentor.”

A winner of both the Presidential Medal of Freedom (from President Barack Obama, in 2014) and the National Medal of Science (from President George H.W. Bush, in 1990), Dresselhaus was a member of the MIT faculty for 50 years. Beyond campus, she held a variety of posts that placed her at the pinnacle of the nation’s scientific enterprise.

Dresselhaus’s research made fundamental discoveries in the electronic structure of semi-metals. She studied various aspects of graphite and authored a comprehensive book on fullerenes, also known as “buckyballs.” She was particularly well known for her work on nanomaterials and other nanostructured systems based on layered materials like graphene and, more recently, materials beyond graphene, such as transition metal dichalcogenides and phosphorene. Her work on using quantum structures to improve thermoelectric energy conversion reignited that research field.

Institute Professor Emerita Mildred Dresselhaus recounted her career for an MIT oral history project in 2007.

Video: MIT Video Productions

A strong advocate for women in STEM

As notable as her research accomplishments was Dresselhaus’s longstanding commitment to promoting gender equity in science and engineering, and her dedication to mentorship and teaching.

In 1971, Dresselhaus and a colleague organized the first Women’s Forum at MIT as a seminar exploring the roles of women in science and engineering. She received a Carnegie Foundation grant in 1973 to support her efforts to encourage women to enter traditionally male-dominated fields of science and engineering. For a number of years, she led an MIT seminar in engineering for first-year students; designed to build the confidence of female students, it always drew a large audience of both men and women.

Just two weeks ago, General Electric released a 60-second video featuring Dresselhaus that imagined a world where female scientists like her were celebrities, both to celebrate her achievements and to encourage more women to pursue careers in science, technology, engineering, and mathematics.

Dresselhaus co-authored eight books and about 1,700 papers, and supervised more than 60 doctoral students.

“Millie’s dedication to research was unparalleled, and her enthusiasm was infectious,” says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science and head of MIT’s Department of Electrical Engineering and Computer Science (EECS). “For the past half-century, students, faculty and researchers at MIT and around the world have been inspired by her caring advice. I was very fortunate to have had her as a mentor, and as an active member of the EECS faculty. She made such a huge impact on MIT, and her contributions will long be remembered.”

Diverted from teaching to physics

Born on Nov. 11, 1930, in Brooklyn and raised in the Bronx, Mildred Spiewak Dresselhaus attended Hunter College, receiving her bachelor’s degree in 1951 and then winning a Fulbright Fellowship to study at Cambridge University.

While she had planned to become a teacher, Rosalyn Yalow — who would go on to win the 1977 Nobel Prize in physiology or medicine — encouraged Dresselhaus to pursue physics instead. She ultimately earned her MA from Radcliffe College in 1953 and her PhD in 1958 from the University of Chicago, where she studied under Nobel laureate Enrico Fermi. From 1958 to 1960, Dresselhaus was a National Science Foundation Postdoctoral Fellow at Cornell University.

Dresselhaus began her 57-year association with MIT in the Solid State Division of Lincoln Laboratory in 1960. In 1967, she joined what was then called the Department of Electrical Engineering as the Abby Rockefeller Mauze Visiting Professor, a chair reserved for appointments of distinguished female scholars. She became a permanent member of the electrical engineering faculty in 1968, and added an appointment in the Department of Physics in 1983.

In 1985, Dresselhaus became the first female Institute Professor, an honor bestowed by the MIT faculty and administration for distinguished accomplishments in scholarship, education, service, and leadership. There are usually no more than 12 active Institute Professors on the MIT faculty.

Scientific leadership and awards

In addition to her teaching and research, Dresselhaus served in numerous scientific leadership roles, including as the director of the Office of Science at the U.S. Department of Energy; as president of the American Physical Society and of the American Association for the Advancement of Science; as chair of the governing board of the American Institute of Physics; as co-chair of the recent Decadal Study of Condensed Matter and Materials Physics; and as treasurer of the National Academy of Sciences.

Aside from her Medal of Freedom — the highest award bestowed by the U.S. government upon American civilians — and her Medal of Science, given to the nation’s top scientists, Dresselhaus’s extensive honors included the IEEE Medal of Honor for “leadership and contributions across many fields of science and engineering”; the Enrico Fermi Award from the U.S. Department of Energy for her leadership in condensed matter physics, in energy and science policy, in service to the scientific community, and in mentoring women in the sciences; and the prestigious Kavli Prize for her pioneering contributions to the study of phonons, electron-phonon interactions, and thermal transport in nanostructures. She was also an elected member of the National Academy of Sciences and the National Academy of Engineering.

Active on campus

Always an active and vibrant presence at MIT, Dresselhaus remained a notable influence on campus until her death. She continued to publish scientific papers on topics such as the development of 2-D sheets of thin electronic materials, and played a role in shaping MIT.nano, a new 200,000-square-foot center for nanoscience and nanotechnology scheduled to open in 2018.

In 2015, Dresselhaus delivered the keynote address at “Rising Stars in EECS,” a three-day workshop for female graduate students and postdocs who are considering careers in academic research. Her remarks, on the importance of persistence, described her experience studying with Enrico Fermi. Three-quarters of the students in that program, she said, failed to pass rigorous exam requirements.

“It was what you did that counted,” Dresselhaus told the aspiring scientists, “and that followed me through life.”

Dresselhaus is survived by her husband, Gene, and by her four children and their families: Marianne and her husband, Geoffrey, of Palo Alto, California; Carl, of Arlington, Massachusetts; Paul and his wife, Maria, of Louisville, Colorado; and Eliot and his wife, Françoise, of France. She is also survived by her five grandchildren — Elizabeth, Clara, Shoshi, Leora, and Simon — and by her many students, whom she cared for very deeply.

Gifts in her memory may be made to MIT.nano.


Topics: Faculty, Electrical Engineering & Computer Science (eecs), Physics, Obituaries, Carbon, Materials Science and Engineering, Nanoscience and nanotechnology, Women in STEM, School of Science, School of Engineering, MIT.nano

Original Link

Roger Ver – Bitcoin Jesus, Bitcoin Elder & Investor

Roger Ver, an early-stage Bitcoin investor and head of bitcoin.com, discusses the state of Bitcoin in early 2017.

From his investments in Ripple, Blockchain.info, BitPay, and Kraken to this fintech guru’s backstory on how he obtained the “bitcoin.com” domain (and then had to forcibly reclaim it), this is an interesting interview you won’t want to miss.

We discuss the never-ending, politicized block-size debate, scaling, transaction speed, and the Bitcoin Unlimited mining pool. Listen to this fascinating interview below. For more Bitcoin & crypto news, and the latest trends & analysis, subscribe to the Bitcoin Podcast.

Original Link

Adchain with Ken Brook

Online advertising is a system of transactions that involve many different players. The user visits a publisher’s website; the publisher notifies an exchange that the user is on the website; the exchange presents an opportunity to a marketplace, where a buyer can purchase the chance to show that user an ad. And this is a simple example.

The transactions in online advertising are as opaque and rife with fraud as Wall Street–but less regulated. Blockchain technology presents an opportunity to bring more transparency to the advertising ecosystem using a shared ledger.

Ken Brook is the CEO of VidRoll, a video advertising company. Using his experience in the adtech business, Ken is working on Adchain, a shared ledger for adtech. In this episode, we explore adtech, blockchains, and Adchain.

Show Notes

Russian Cyberforgers Steal Millions a Day with Fake Sites

Transcript

Sponsors

GoCD is an on-premise, open source, continuous delivery tool. Get better visibility into and control of your teams’ deployments with GoCD. Say goodbye to deployment panic and hello to consistent, predictable deliveries. Visit gocd.io for a free download. 

Apica System helps companies with their end-user experience, focusing on availability and performance. Test, monitor, and optimize your applications with Apica System. Apica is hosting an upcoming webinar about API basics for big data analytics. You can also find past webinars, such as how to optimize websites for fast load time.

Original Link

Teledyne DALSA Signs DBI Technology Transfer and License Agreement with Invensas Corporation

Posted 2/21/2017

WATERLOO, Canada – February 21, 2017 – Teledyne DALSA, a Teledyne Technologies company and one of the world’s foremost pure-play MEMS foundries, has signed a technology transfer and license agreement for Direct Bond Interconnect (DBI®) technology with Invensas Corporation, a wholly owned subsidiary of Tessera Holding Corporation (Nasdaq: TSRA). This agreement enables Teledyne DALSA to leverage Invensas’ semiconductor wafer bonding and 3D interconnect technologies to deliver next-generation MEMS and image sensor solutions to customers in the automotive, IoT, and consumer electronics markets.

“DBI technology is a key enabler for true 3D-integrated MEMS and image sensor solutions,” said Edwin Roks, president of Teledyne DALSA. “We are excited about the prospect of developing new products and providing new foundry services to our customers that utilize this technology. By working closely with Invensas, we will be able to move more quickly to deploy this capability efficiently and effectively.”

DBI technology is a low temperature hybrid wafer bonding solution that allows wafers to be bonded instantaneously with exceptionally fine pitch 3D electrical interconnect without requiring bond pressure. The technology is applicable to a wide range of semiconductor devices including MEMS, image sensors, RF Front Ends and stacked memory.

“We are pleased that Teledyne DALSA, a recognized leader in digital imaging products and MEMS solutions, has chosen our DBI technology to accelerate the development and commercialization of their next generation MEMS and image sensor products,” said Craig Mitchell, president of Invensas. “As device makers look for increasingly powerful semiconductor solutions in smaller packages, the need for cost-efficient, versatile 3D technologies is greater than ever before. We are confident that the superior performance and manufacturability of DBI technology will help Teledyne DALSA deliver tremendous value to their customers.”

About Tessera Holding Corporation

Tessera Holding Corporation is the parent company of DTS, FotoNation, Invensas and Tessera. We are one of the world’s leading product and technology licensing companies. Our technologies and intellectual property are deployed in areas such as premium audio, computational imaging, computer vision, mobile computing and communications, memory, data storage, 3D semiconductor interconnect and packaging. We invent smart sight and sound technologies that enhance and help to transform the human connected experience.

About Teledyne DALSA

Teledyne DALSA is an international technology leader in sensing, imaging, and specialized semiconductor fabrication. Our image sensing solutions span the spectrum from infrared through visible to X-ray; our MEMS foundry has earned a world-leading reputation. In addition, through our subsidiaries Teledyne Optech and Teledyne Caris, we deliver advanced 3D survey and geospatial information systems. Teledyne DALSA employs approximately 1,400 employees worldwide and is headquartered in Waterloo, Canada. For more information, visit www.teledynedalsa.com.

###

All trademarks are registered by their respective companies.
Teledyne DALSA reserves the right to make changes at any time without notice.

 

Media Relations Contact:

Geralyn Miller

Manager, Media Relations

+1 519-886-6000

geralyn.miller@teledyne.com

Original Link

Teledyne DALSA’s Newest Genie Nano camera features ON Semi’s 18Mpixel Sensor

Posted 2/21/2017

WATERLOO, Canada – February 21, 2017 – Teledyne DALSA, a Teledyne Technologies company and global leader in machine vision technology, today introduced a new model to its high-value Genie™ Nano GigE Vision camera series. Built around the ON Semiconductor® AR1820HS 18 Megapixel, backside illuminated (BSI) image sensor, this new camera (C4900) brings system integrators and OEMs even greater choice from a growing number of Genie Nano models powered by more than 40 industry-leading CMOS imagers.

The Genie Nano C4900 delivers improved network integration, a rolling shutter and superb low-light performance as a result of its 1/2.3-inch format, 1.25 μm active-pixel digital imager with A-PixHS™ BSI technology.

Genie Nano cameras combine standard gigabit Ethernet technology (supporting GigE Vision 1.2) with Teledyne DALSA’s Trigger-to-Image-Reliability framework to dependably capture and transfer images from the camera to the host PC. Developed for an expanding number of industrial imaging applications, including intelligent traffic systems, printed circuit board inspection and metrology, Genie Nano gives customers high picture quality, high resolution, and high-speed imaging without distortion. Genie Nano cameras are available in a number of models implementing different sensors, image resolutions, and feature sets, in monochrome or color versions and take full advantage of the Sapera™ LT Software Development Kit (SDK).

Key Features:

  • Trigger-to-Image-Reliability for easy system control and debugging
  • Active Resolution of 4912 x 3684 pixels
  • Electronic Rolling Shutter (ERS) with Global Reset Release (GRR) function
  • Small footprint / light weight: 44mm x 29mm x 21mm / 46 grams
  • Wide temperature range (-20 to 60°C) for imaging in harsh environments
  • Support for Linux operating platform

Genie Nano cameras feature a robust design backed by a 3-year warranty. Please visit the Genie Nano product page for more information. For sales enquiries, visit our contact page, and for full resolution images, our online media kit.

About Teledyne DALSA’s Machine Vision Products and Services 
Teledyne DALSA is a world leader in the design, manufacture and deployment of digital imaging components for the machine vision market. Teledyne DALSA image sensors, cameras, smart cameras, frame grabbers, software, and vision solutions are used in thousands of automated inspection systems around the world and across multiple industries including semiconductor, solar cell, flat panel display, electronics, automotive, medical, packaging and general manufacturing. For more information, visit www.teledynedalsa.com/imaging.

About Teledyne DALSA 
Teledyne DALSA is an international technology leader in sensing, imaging, and specialized semiconductor fabrication. Our image sensing solutions span the spectrum from infrared through visible to X-ray; our MEMS foundry has earned a world-leading reputation. In addition, through our subsidiaries Teledyne Optech and Teledyne Caris, we deliver advanced 3D survey and geospatial information systems. Teledyne DALSA employs approximately 1,400 employees worldwide and is headquartered in Waterloo, Canada. For more information, visit www.teledynedalsa.com.

###

All trademarks are registered by their respective companies. 
Teledyne DALSA reserves the right to make changes at any time without notice.

 

Media Contact:
Geralyn Miller
Teledyne DALSA
Tel: +1-519-886-6001 x2187
Email: geralyn.miller@teledyne.com          
 

Sales Contacts:
Sales.americas@teledynedalsa.com
Sales.europe@teledynedalsa.com
Sales.asia@teledynedalsa.com

Original Link

Ford Robocar to Ford Engineers: Wake Up!

The Ford Motor Company is skipping Level 3 autonomy—when the driver must be prepared to take the wheel—and going straight to Level 4, when there is no steering wheel at all. The reason? Its own engineers were falling asleep during Level 3 test drives.

“These are trained engineers who are there to observe what’s happening,” Raj Nair, Ford’s product development chief, told Automotive News. “But it’s human nature that you start trusting the vehicle more and more and that you feel you don’t need to be paying attention.”

Apparently, the Ford engineers kept nodding off even when every attempt was made to keep them on their toes. Bells and alarms did no good, nor did putting in a second engineer to ride shotgun. He nodded off, too. It was this spectacle that convinced Ford honchos to double down on the damn-the-stopgap push to full autonomy, which Google’s Waymo pioneered.

Previously, Ford had leaned toward that idea, but hedged its bets by trying to improve driver-assistance systems until they achieved full autonomy. Just six months ago, Randal Visintainer, director for autonomous vehicle development at Ford, told IEEE Spectrum that both approaches—which he termed “top down” and “bottom up”—were still under review. “The question is, how far down can we take that [first approach], and when do the two approaches meet?”

Other automakers still favor using the stopgap of Level 3, defined as a self-driving system that might, at any moment, give the driver just 10 seconds to wake up and take command. Just last month, at CES, Audi announced that it would release a Level 3 car within a year, then aim for Level 4 some three years later.

Ten seconds may seem like plenty of time, but it sure seems short when you’re dreaming.

Original Link

One-Step Optogenetics for Hacking the Nervous System

Engineers have taken one of biotech’s hottest tools—optogenetics—and made it better. The 12-year-old technique, which enables scientists to control brain cells with light, typically requires a multi-step process and several surgeries on animal models. Polina Anikeeva at the Massachusetts Institute of Technology (MIT) and her colleagues came up with an engineering solution that combines those steps into one, and improves the function of the device. The group described their invention today in the journal Nature Neuroscience.

Optogenetics enables researchers to hack into the body’s electrical system with far more precision than traditional electrical stimulation. The technique involves genetically altering specific neurons so that they can be turned on or off with a simple flash of light.

The tool is useful for figuring out the functions of neural circuits—fire up a select few brain cells and see how the body responds. A mouse might run faster or eat more or become aggressive, depending on which neurons were manipulated.

So far, optogenetics research has been limited to animal models. That’s partly because the tool is invasive and the process rather protracted. First the animal’s brain cells must be genetically altered. One way to do that is to incorporate a light-sensitizing gene into a viral vector—the non-infectious kind—and inject it into the brain using a small syringe.

The genetic modifications cause the neurons to produce proteins and other cellular elements that, when exposed to light, allow ions to enter. An influx of sodium ions will activate the neuron, causing it to fire, and starting a chain reaction among the neurons connected to it. An influx of chloride ions, on the other hand, will inhibit the neuron.

Once the genetic modifications have taken hold, researchers implant a device that delivers light—usually with silica optical fibers or light-emitting diodes—to the modified cells. Then researchers can start turning neurons on and off and correlating it with behavioral changes.

In a third step, the resulting electrical activity in the brain is recorded. That information helps scientists be sure of which neurons are firing or not firing during behavioral experiments. But recording requires implantation of electrodes that, in combination with light sources, make the experiment cumbersome enough that scientists often skip it.

Each step—gene delivery, light implant, and recording electrodes—typically requires a separate surgical procedure. And all three have to be directed to exactly the same spot in the brain. “You can be pretty sure” that you got it in the same place, “but not 100 percent sure,” says Anikeeva at MIT, who led the study.

MIT’s probe integrates all three pieces into one device implanted with one surgery. The key was the electrode: a custom-designed, highly conductive, very thin polymer composite, which can record and transmit neural signals. The material turned out to be so conductive that MIT was able to make the electrodes super small—small enough to fit six of them in the probe.

The group made the conductive polymer using layers of polyethylene sprinkled with graphite. “It’s kind of like a layered cake,” says Anikeeva. “We literally sprinkled on the graphite like sugar,” and then melted and pressed the layers together under vacuum at high temperature.

The smallness of the electrodes left room for the other two elements: a polymer waveguide to deliver light and two microfluidic channels to deliver the gene-carrying virus. Altogether, the cylindrical probe was still half the diameter of typical optogenetics implants, and more flexible.

[Photo: A mouse with the one-step optogenetics implant. Credit: MIT]

Anikeeva’s group tested the device in mice in a set of experiments. In one study, they delivered a light-sensitive gene construct into an area of the mouse brain called the medial prefrontal cortex, where activating neurons is known to make mice run faster. Sure enough, with Anikeeva’s probe implanted, the mice darted around their confines faster than the control group.

German and Swiss researchers four years ago developed an all-in-one optogenetics probe, and published a report on it in the journal Lab on a Chip. But that design hasn’t been adopted by many optogenetics research labs. “We loved that paper,” says Anikeeva. “It was a really great pioneering demonstration,” but the design process isn’t conducive to production in large quantities, and the materials aren’t well suited for optogenetics, she says.

By contrast, Anikeeva’s probe is made by a thermal drawing process: the team fabricates a large-scale version of the device, then heats and stretches the structure into a fiber hundreds of meters long. The thread-thin fiber can then be chopped into hundreds of research-sized pieces.

Anikeeva says that since she first presented an early version of the probe at a conference in July 2016, she has received several requests from researchers wanting to use the device for a variety of applications: studying nerve circuitry linked to anxiety and addiction, peripheral nerves, and even motor nerves used to control prosthetics. Anikeeva plans to make the tool freely available. “We will send fibers to anyone who wants them,” she says.

Original Link

New resource for optical chips

The Semiconductor Industry Association has estimated that at current rates of increase, computers’ energy requirements will exceed the world’s total power output by 2040.

Using light rather than electricity to move data would dramatically reduce computer chips’ energy consumption, and the past 20 years have seen remarkable progress in the development of silicon photonics, or optical devices that are made from silicon so they can easily be integrated with electronics on silicon chips.

But existing silicon-photonic devices rely on different physical mechanisms than the high-end optoelectronic components in telecommunications networks do. The telecom devices exploit so-called second-order nonlinearities, which make optical signal processing more efficient and reliable.

In the latest issue of Nature Photonics, MIT researchers present a practical way to introduce second-order nonlinearities into silicon photonics. They also report prototypes of two different silicon devices that exploit those nonlinearities: a modulator, which encodes data onto an optical beam, and a frequency doubler, a component vital to the development of lasers that can be precisely tuned to a range of different frequencies.

In optics, a linear system is one whose outputs are always at the same frequencies as its inputs. So a frequency doubler, for instance, is an inherently nonlinear device.
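In standard nonlinear-optics shorthand (textbook notation, not drawn from the paper itself), the material’s polarization P responds to an optical field E as a power series, and the second-order term is the one a frequency doubler exploits:

P = \varepsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^{2} + \chi^{(3)} E^{3} + \cdots \right)

With an input field E(t) = E_0 \cos(\omega t), the second-order term contains E^{2} = \tfrac{1}{2} E_0^{2} \left( 1 + \cos 2\omega t \right) — that is, a component oscillating at twice the input frequency. Bulk silicon’s centrosymmetric crystal forces \chi^{(2)} = 0, which is why the field-induced symmetry breaking described below matters: a DC field E_{dc} lets the material’s \chi^{(3)} act as an effective \chi^{(2)} proportional to \chi^{(3)} E_{dc}.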

“We now have the ability to have a second-order nonlinearity in silicon, and this is the first real demonstration of that,” says Michael Watts, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper.

“Now you can build a phase modulator that is not dependent on the free-carrier effect in silicon. The benefit there is that the free-carrier effect in silicon always has a phase and amplitude coupling. So whenever you change the carrier concentration, you’re changing both the phase and the amplitude of the wave that’s passing through it. With second-order nonlinearity, you break that coupling, so you can have a pure phase modulator. That’s important for a lot of applications. Certainly in the communications realm that’s important.”

The first author on the new paper is Erman Timurdogan, who completed his PhD at MIT last year and is now at the silicon-photonics company Analog Photonics. He and Watts are joined by Matthew Byrd, an MIT graduate student in electrical engineering and computer science, and Christopher Poulton, who did his master’s in Watts’s group and is also now at Analog Photonics.

Dopey solutions

If an electromagnetic wave can be thought of as a pattern of regular up-and-down squiggles, a digital modulator perturbs that pattern in fixed ways to represent strings of zeroes and ones. In a silicon modulator, the path that the light wave takes is defined by a waveguide, which is rather like a rail that runs along the top of the modulator.

Existing silicon modulators are doped, meaning they have had impurities added to them through a standard process used in transistor manufacturing. Some doping materials yield p-type silicon, where the “p” is for “positive,” and some yield n-type silicon, where the “n” is for “negative.” In the presence of an electric field, free carriers — electrons that are not associated with particular silicon atoms — tend to concentrate in n-type silicon and to dissipate in p-type silicon.

A conventional silicon modulator is half p-type and half n-type silicon; even the waveguide is split right down the middle. On either side of the waveguide are electrodes, and changing the voltage across the modulator alternately concentrates and dissipates free carriers in the waveguide, to modulate an optical signal passing through.

The MIT researchers’ device is similar, except that the center of the modulator — including the waveguide that runs along its top — is undoped. When a voltage is applied, the free carriers don’t collect in the center of the device; instead, they build up at the boundary between the n-type silicon and the undoped silicon. A corresponding positive charge builds up at the boundary with the p-type silicon, producing an electric field, which is what modulates the optical signal.

Because the free carriers at the center of a conventional silicon modulator can absorb light particles — or photons — traveling through the waveguide, they diminish the strength of the optical signal; modulators that exploit second-order nonlinearities don’t face that problem.

Picking up speed

In principle, modulators that exploit second-order nonlinearities can also modulate a signal more rapidly than existing silicon modulators do. That’s because it takes more time to move free carriers into and out of the waveguide than it does to concentrate and release them at the boundaries with the undoped silicon. The current paper simply reports the phenomenon of nonlinear modulation, but Timurdogan says that the team has since tested prototypes of a modulator whose speeds are competitive with those of the nonlinear modulators found in telecom networks.

The frequency doubler that the researchers demonstrated has a similar design, except that the regions of p- and n-doped silicon that flank the central region of undoped silicon are arranged in regularly spaced bands, perpendicular to the waveguide. The distances between the bands are calibrated to a specific wavelength of light, and when a voltage is applied across them, they double the frequency of the optical signal passing through the waveguide, combining pairs of photons into single photons with twice the energy.
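The article doesn’t spell out how the band spacing is chosen, but it matches the standard quasi-phase-matching condition: the pump (at frequency \omega) and the second harmonic (at 2\omega) see different refractive indices in the waveguide, and the grating period \Lambda is set to compensate the resulting phase mismatch \Delta k:

\Delta k = k_{2\omega} - 2k_{\omega} = \frac{2\omega}{c}\left( n_{2\omega} - n_{\omega} \right), \qquad \Lambda = \frac{2\pi}{\Delta k}

Spacing the doped bands this way re-phases the induced nonlinearity once per coherence length, so the second-harmonic light generated along the waveguide keeps adding constructively instead of cancelling out.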

Frequency doublers can be used to build extraordinarily precise on-chip optical clocks, optical amplifiers, and sources of terahertz radiation, which has promising security applications.

“Silicon has had a huge renaissance within the optical communication space for a variety of applications,” says Jason Orcutt, a researcher in the Physical Sciences Department at IBM’s Thomas J. Watson Research Center. “However, there are still remaining application spaces — from microwave photonics to quantum optics — where the lack of second-order nonlinear effects in silicon has prevented progress. This is an important step towards addressing a wider range of applications within the mature silicon-photonics platforms around the world.”

“To date, efforts to achieve second-order nonlinear effects in silicon have focused on hard material-science problems,” Orcutt adds. “The [MIT] team has been extremely clever by reminding the physics community what we shouldn’t have forgotten. Applying a simple electric field creates the same basic crystal polarization vector that other researchers have worked hard to create by far more complicated means.”


Original Link

Industry Schools Government About the Future of Self-Driving Cars

Earlier this week, the U.S. House Subcommittee on Digital Commerce and Consumer Protection held a hearing on Self-Driving Cars: Road to Deployment. I know, it sounds super boring, and most of it was: if you’ve been following the space for a while, nothing in the prepared statements will surprise you all that much, even though the witnesses at the hearing included industry heavy hitters like Gill Pratt from TRI, GM’s Vice President of Global Strategy Mike Ableson, and Anders Karrberg, Vice President of Government Affairs at Volvo Car Group, as well as Lyft’s Vice President of Public Policy Joseph Okpaku and Nidhi Kalra, Co-Director and Senior Information Scientist at the RAND Center for Decision Making Under Uncertainty.

What was interesting, however, was the question-and-answer session. It’s an open look at what the House members think is important, and the witnesses’ answers are given on the fly. Remember, these are the people who are making self-driving car policy talking directly to the people who are making self-driving cars: what was discussed at this hearing could potentially shape the direction of the entire industry. There’s over an hour of questioning, and we’ve watched it all. But we opted to transcribe only the most interesting bits.

If you’d prefer to subject yourself to the entire Q&A session, it starts here. Again, note that we’ve just excerpted what seems most interesting to us, so this is not a complete transcript.

Frank Pallone (D-N.J.): Volvo has said that it will skip Level 3 automation and go from Level 2 to Level 4. Can you explain that decision?

Anders Karrberg, Volvo: At Level 3, the car is doing the driving. The car is doing the monitoring. But, the driver is the fallback. So, you could end up in situations where the driver has to take back control, and that could happen within seconds. We are concerned about the Level 3 stage, and therefore we are targeting Level 4.

Nidhi Kalra, RAND: I agree. There is evidence to suggest that Level 3 may show an increase in traffic crashes, and so it is defensible and plausible for automakers to skip Level 3. I don’t think there’s enough evidence to suggest that it be prohibited at this time, but it does pose safety concerns that a lot of automakers are recognizing and trying to avoid.

Pallone: Volvo has said that it will take complete liability at Level 4. Can you explain that decision?

Karrberg: Car makers should take liability for any system in the car. So, we have declared that if there’s a malfunction in the [autonomous driving] system when operating autonomously, we would take the product liability.

Pallone: How real is the threat of hacking, especially in the autonomous context?

Gill Pratt, TRI: I think it’s important to understand that as serious as this threat is, there are also mitigations that we can employ. First of all, we can make sure that the safety technology on the car does not depend on the wireless network in order to operate. Our philosophy is that all the safety functions have to be self-sufficient on the car itself.

Gregg Harper (R-Miss.): Can you elaborate on the work that [your companies] are doing to provide transportation and access [for people with disabilities] in the future?

Mike Ableson, GM: It’s a very exciting opportunity for these communities. While we recognize the potential benefits, there’s obviously a whole lot more work that needs to be done. However, inside General Motors we have a specifically designated employee resource group composed of people with various physical challenges, and they’re already working with our engineering group on the potential for self-driving vehicles going forward.

Pratt: Our president, Akio Toyoda, decided to change the company policy on autonomous driving as the result of a meeting with a blind person, who asked him, “Can I enjoy the mobility of your cars as well?” I wanted to add one more thing: we can’t forget about the aging society. Right now, in the United States, 13 percent of our population is over age 65. Because of the baby boom, in 15 years, that fraction will grow from 13 percent to 20 percent, and this is an extraordinary thing. My sister and I had the experience of having to take away the car keys from my father, because he was too elderly to drive. That’s something I don’t think anyone should have to go through. Our goal is to make that not have to happen in the future.

Karrberg: We fully recognize the potential for self-driving cars to bring a happier life to disabled people. Every Sunday I meet my father, who just turned 100, and he asks me every time: When can I have this car? For Volvo initially, we are targeting commuting, because that’s where we think the biggest benefit and interest for consumers are.

Joseph Okpaku, Lyft: We’ve already heard from the disabled community about how much ridesharing has increased their quality of life and mobility, same thing for the senior population. In terms of the potential to have that same impact with autonomous vehicles, the role that ridesharing plays is the ability to bring AVs to the market at a scale that would address this issue in a broad and sweeping way.

Debbie Dingell (D-Mich.): Are there specific things that Congress should avoid doing that would stifle the development of autonomous vehicles?

Ableson: We wouldn’t want to see [the] government taking steps to specify a specific technology or specific solution. I think as long as we keep in mind that the goal is to prove that the vehicles are safer than drivers today, the NHTSA guidelines published last year are a very good step in that direction, in that they specify what the expectations are before vehicles are deployed in a driverless fashion.

Pratt: An evidence-based approach is really the best one, where the government sets what the criteria are for performance at the federal level, but does not dictate what the ways are to meet that particular level of performance.

Brett Guthrie (R-Ky.): Do self-driving cars have to be perfect to allow them on [the road], and how do we get to the point where they’re safe enough?

Ableson: I think the point is that there’s no way to prove statistically that something’s perfect. We have to agree on the metrics we’re going to use to show that the vehicle is better than human drivers, and is therefore appropriate to start deploying without drivers. That’s why this real-world testing is so important, because you’ll see those real-life conditions that we all deal with on a daily basis as human drivers, and we’ll make sure that the vehicles can react appropriately.

Pratt: This is a question that we’re thinking about extremely deeply now. We feel that there may need to be a safety factor multiplying human performance. In other words, if an autonomous car is only slightly better than the average human driver, that may not be good enough, because emotionally, we can empathize with a human driver that has an accident because that could have happened to us. On the other hand, when a machine makes a mistake, our empathy is much less. We don’t know what the safety factor has to be, and what we would like is to work collaboratively with government to try to figure out what that answer is. We’re worried that it may not be one.

Doris Matsui (D-Calif.): Can you provide your perspective on where regulation might be needed at both the state and federal levels?

Pratt: It’s the federal government that we believe should take the leading role. As you may know, in California there’s a requirement, if you’re doing autonomous car development, that you report to the government what your disengagement rate is—every time that your car has a failure of a certain kind. That’s not such a bad idea, but that information then becomes publicly available, and it creates a perverse incentive, and the incentive is for companies to try to make that figure look good, because the public is watching. And that perverse incentive then causes the company to not try to test the difficult cases, but to test the easy cases, to make their score look good. We think it’s very important that there be deep thought about this kind of issue before these rules are made, and I think that concentrating that thought in the federal government is the best idea.

David McKinley (R-W.Va.): What is the projected additional cost per vehicle?

Pratt: The costs presently are very high, in the many thousands if not tens of thousands of dollars. Part of the reason that you’re seeing a push to use it in rideshare systems at the beginning is because there you can amortize the cost over a higher utilization of rideshare vehicles. However, we should keep in mind the incredible rate of decreasing costs in the electronics industry, particularly with scale. Think about your cellphone, and the cost of the camera that’s inside your cellphone, which rivals some of the best cameras that you could buy for personal or professional use in the past, and these now cost pennies to put inside of a cellphone. So we don’t know the actual numbers, but we are confident that the costs will decrease very rapidly.

Karrberg: Yes, the systems will be expensive at the start and come down in cost in the following years, but you should also know that you save costs on fender benders, car insurance is likely to go down, and also fuel economy will be improved.

Gus Bilirakis (R-Fla.): Where does the work on V2V communication fit into the overall blueprint of deploying self-driving cars?

Pratt: Vehicle to vehicle as well as vehicle to infrastructure communication is of critical importance to autonomous vehicles. Of course, we drive using our own eyes to see other vehicles, but the potential is there for autonomous vehicles to use not only the sensors on the vehicle itself, but also sensors on neighboring vehicles in order to see the world better. And so, for example if you’re going around a corner, and there’s some trees or a building that’s blocking the view, vehicle to vehicle communication can give you the equivalent of x-ray vision, because you’re seeing not only your view, but also the view from other cars as well.

It’s going to be pretty hard to make a vehicle that’s safe in all conditions. That’s this Level 5 vehicle that we keep talking about. And the standards may be very high, because again, it’s a machine, it’s not a human being, so our ability to empathize and forgive will be low. We have to give ourselves every possible tool in the tool chest in order to try and solve this problem. So I think that vehicle to vehicle and vehicle to infrastructure is extremely important, and that saving spectrum for that use is also very important.

Bilirakis: In Lyft’s view, what are some societal and economic benefits that we could expect to see from the deployment of self-driving cars?

Joseph Okpaku, Lyft: We often talk about the benefit that Lyft in its current form as a ridesharing platform has financially for drivers, but one of the things that I think often gets lost in the conversation is how important transportation is for economic upward mobility on the passenger side, meaning that one of the biggest factors for economic opportunity is access to reliable and quick transportation.

We’ve already seen some of the impacts we’ve had, we believe, on the customer side, just by providing safe and quick and reliable options to jobs, to get to and from work, that previously didn’t exist. So, if you buy that concept, and you apply it across a grand scale that an AV platform can provide, then I think the economic opportunity that it confers is really significant, and it can really help a lot of people who are in economic need get to and from their jobs [without having to] rely on insufficient public transportation options. In addition, [there is] the ability for non-emergency transportation. We’ve seen ridesharing start to partner with organizations on that front already; I think the ability to do that on a greater rate, or more efficient rate, expands once you include autonomous vehicle technology into the mix.

Original Link

Flexibility Gives Nanothread Brain Probes Long-Term Durability

For over two decades, electrodes implanted in the brain have made it possible to electrically measure the activity of individual neurons. While the technology has continued to progress, the implanted probes have long suffered from poor recording ability brought on by biocompatibility issues, limiting their efficacy over the long term.

It turns out that size matters: In this case, the smaller the better. Researchers at the University of Texas at Austin have developed neural probes made from a flexible nanoelectronic thread (NET). These probes are so thin and tiny that, when implanted, they don’t trigger the body to create the scar tissue that limits recording efficacy. Without that hindrance, the threadlike probes can work effectively for months, making it possible to follow the long-term progression of such neurovascular and neurodegenerative conditions as strokes and Parkinson’s and Alzheimer’s diseases.

In research described in the journal Science Advances, UT researchers fabricated the multilayered nanoprobes out of five to seven nanometer-scale functional layers with a total thickness of around 1 micrometer.

“The thickness of all the functional layers, including recording electrodes, interconnects, two to three layers of insulation are all on the nanoscale,” explained Chong Xie, an assistant professor and one of the paper’s co-authors, in an e-mail interview with IEEE Spectrum. “This ultra-small thickness is crucial for the ultra-flexibility that is necessary to completely suppress tissue reactions to the implanted probes and leads to reliable long-term neural recording.”

Xie adds that the device’s flexibility is not the result of choosing the softest materials, but of engineering the dimensions and geometry of the device—in particular, its ultrathin profile.

The probe functions just like currently used neural electrodes. The recording sites (electrodes) are evenly spaced along the probe and individually addressed by microfabricated interconnects that run to bonding pads for external inputs and outputs.

In a demonstration of the device, the electrodes were implanted into a living mouse cortex. The voltage signal detected on the electrodes is recorded at a high sampling rate (greater than 20 kilohertz) and represents the electrical activity of nearby neurons, including action potentials and local field potentials.

Xie and his colleagues faced some difficult technical challenges to make the neural probes extremely thin and to keep them implanted in a living brain with continued function for months. The researchers already knew from other work that larger neural probes made from thicker and stronger materials experienced structural damage that reduced their function after being implanted for just a few weeks. The key, according to Xie, was engineering the structure, materials, and fabrication details so that the interlayer adhesion is greatly enhanced. They also were surprised by how well some of the materials behaved.

“It is quite amazing that insulation layers with just hundreds of nanometers in thickness are sufficient to protect all the electrical components including millimeter-long interconnects in a living mouse brain for months,” explained Xie. “From our previous experience we knew that the photoresists we were using make good insulation with low defect rates, which was the reason for our material choice, but we were still surprised about its supreme performance.”

While the initial results of the probe were positive, there remain some pretty significant challenges ahead. The implantation strategy for the probes was tricky business.

These probes have a cross-sectional area more than 100 times smaller than that of a human hair, and they cannot stand freely in air. As a result, the researchers had to come up with a way to precisely maneuver them to targeted positions using microscale shuttle devices with micrometer accuracy. If this technology is ever to reach beyond the lab, this part of the process will need to be optimized.

“The surgical procedure for implanting these probes in a clinical environment needs to be further optimized,” said Xie. “We need to design new devices and an implantation strategy for clinical application. We have already started collaboration with neurosurgeons to develop the next generation of NETs for human patients.”

The next steps in the research will look to test the neural probes in primates. In the meantime, Xie and his colleagues are looking to improve the underlying technology.

Xie added: “I am continuing to enrich the technologies for novel neural interfaces in rodent models, including high-density recording and chronic recording over years and longer.”

Original Link

“Discrimination affects us all”

Aerospace engineer Aprille Joy Ericsson ’86, Instrument Project Manager at NASA’s Goddard Space Flight Center in Maryland and an alumna of MIT’s Department of Aeronautics and Astronautics, recalled Wednesday how a conversation with Martin Luther King Jr. affected a Hollywood actress’s career decision — and in turn helped to inspire Ericsson and many others of her generation to enter the world of aerospace engineering.

Nichelle Nichols, the actress who played Lieutenant Uhura on the original Star Trek series, was not under contract, Ericsson explained in her keynote talk at MIT’s 43rd annual celebration of King’s life and work. “King shared with her that Star Trek was one of the few TV shows he would let his children watch, primarily because of her role as chief technical officer on the Starship Enterprise,” which was so different from most portrayals of African-American women on television. After her conversation with King, Nichols reconsidered her plans to leave the show. She went on to provide a role model that Ericsson said helped propel her and many others into a career in the space program.

“Space travel has become a routine part of our daily lives,” though it remains a dangerous occupation, Ericsson said. Recalling the daring commitment that President John F. Kennedy made, launching the U.S. toward landing on the moon, “I believe that challenge is before us again,” she said.

Ericsson graduated from MIT just four months after the first space shuttle disaster, the Challenger accident in 1986. She earned her doctorate at Howard University and soon after went to work for NASA. “I followed my dream to explore space,” she said. But that road was not without its obstacles. “Discrimination affects us all,” she said. And yet, “inclusion of women and minorities” in working teams of all kinds, “is imperative. When I work with science and engineering teams, I know that each one on that team is important.”

“We scientists are agents of change,” she said. “Let’s embrace [Star Trek creator] Gene Roddenberry’s vision of diversity in space. We must work together across the differences of skin color, gender, and religion. … We are making this journey together, in a drive to make this world a better place.”

Ericsson suggested that people should think of their lives as if they were governed by an imaginary bank, which each day credits us with 86,400 seconds, or one day’s worth — but wipes out the balance at the end of the day. Make use of that time, and remember that it’s fleeting, she said: “I say, invest it! Please make the most of today and every day.”

“We’re all capable of making an impact,” she said, quoting King’s statement: “The time is always right to do what’s right.”

MIT President L. Rafael Reif, speaking to the MLK celebration audience, described a number of steps the Institute has taken in the last year to “make our community stronger and more inclusive.” These include the creation of a new Academic Council Working Group on Community, the recruitment of new specialists in multicultural mental health care, new sessions on diversity added to undergraduate orientation, and an increase of more than 10 percent in student aid.

 “We live in a moment when some fundamental assumptions seem to be in question — about how we should conduct ourselves as individuals and as a society,” he said. Given that, he said, it’s worth reiterating some “unwritten rules” that govern life in the MIT community.

“At MIT, when our community is at its best, racism, bigotry, and discrimination are out of bounds, period. Diminishing or excluding others because of their identity — whether race, religion, gender, sexual orientation, disability, social class, nationality, or any other aspect — is unthinkable and unacceptable,” he said.

He added that “It’s also out of the question to bully others, period. Such behavior is simply beneath us — because we value each other as members of our community and respect each other as fellow human beings. Intellectually, we are a community where prejudice — prejudging — is anathema. In the MIT community I love, our personal interactions benefit when we behave as we do in our intellectual work: Assume less and ask more, to learn more. Refrain from jumping to conclusions on superficial evidence. And listen as closely and as much as we can.”

Reif said that “when people of many backgrounds work together to address big human challenges — whether it’s climate change or fresh water access or Alzheimer’s — they come to value each other as human beings, united in a struggle larger than themselves.”

Reflecting on the deep divisions facing this country today, Reif added that “The coming months and years may put great pressure on us as a community. Whatever we face together, it is of the utmost importance that MIT remains a place that can endure, and grow from, the challenge of dissenting views — a community that makes room for us all.”

The event also featured reflections on King’s legacy by senior Rasheed Auguste and graduate student Faye-Marie Vassel. Auguste described a three-step process for addressing injustice in the world. Step one is recognizing a particular injustice that troubles you, he said. For example, “2016 was especially difficult for a lot of people.” After the election, “the weight of hateful rhetoric and injustice took its toll. … Regardless of political position, my MIT needed healing and support on Nov. 9. … It hurts to see people you know suffer and not being able to tell them ‘It’ll be okay.’ Because even though it may be reassuring, it might not be true.”

The second step, he said, is to “find a community of change-makers. Chances are, your issue is not as unique as you think. You are not, and have never been alone in your pain.” He went on to describe his meetings with MIT leadership in seeking ways to improve the inclusiveness of the Institute — as a result of which, he said “I felt empowered in the process, like I had a valued contribution to making our ideas, our compromises, our solutions, real.”

The final step, Auguste said, is to carry out the list of actions developed in step two. “Step three is putting in the work to make the justice a reality. … This amazing mentality exists at MIT: If you want something, go chase it. And if it doesn’t exist yet, then make it. People really live by this. So if you want justice, you have to chase it, to fight for it. You cannot settle for ambivalence, indifference, or passivity.”

Vassel, a doctoral student in biology, described some of the challenges of her own background, as the product of an interracial immigrant couple. (Her mother is a Russian from Uzbekistan and her father is Afro-Caribbean from Jamaica.) “Education in this country is still not equal and just for all.” She added, “I hope we all see the dangers of any political message that relies on dividing people.” Quoting Dr. King, she said, “Injustice must be rooted out by strong, persistent and determined action. … Always remember to lift your voice!”

The MLK celebration also included the presentation of this year’s Dr. Martin Luther King Jr. Leadership Awards, which were given to Michael Beautyman, Catherine Gamon, Maryanne Kirkbride, Kristala Prather, Tremaan Robbins, Reginald Van Lee, and the Muslim Student Association.


Original Link

Bertram Kostant, professor emeritus of mathematics, dies at 88

Bertram Kostant, professor emeritus of mathematics at MIT, died at the Hebrew Senior Rehabilitation Center in Roslindale, Massachusetts, on Thursday, Feb. 2, at the age of 88.

Kostant was a professor of mathematics at MIT from 1962 until 1993, when he officially retired, but he continued his active life in research, traveling and lecturing at various universities and conferences around the world.

His legacy spans six decades and 107 published papers, and his ability to connect seemingly diverse ideas led to remarkable results that formed the cornerstone of rich and fruitful theories both in mathematics and theoretical physics. He held a deep passion for truth, for understanding, and for beauty; and an unshakeable faith that these things are woven together.

Bertram Kostant was born on May 24, 1928, in Brooklyn, New York. He graduated from Peter Stuyvesant High School in 1945. After studying chemical engineering for two years at Purdue University, he switched to mathematics, having fallen in love with the subject in the classes of Arthur Rosenthal and Michael Golomb, who were recent immigrants from Germany. In 1950 he earned a bachelor’s degree with distinction in mathematics.

Kostant was awarded an Atomic Energy Commission Fellowship for graduate studies at the University of Chicago. There, he found a stimulating environment. Influences on him included Marshall Stone, Adrian Albert, Shing Shen Chern, Paul Halmos, Irving Kaplansky, and Irving Segal. Through Andre Weil, Kostant was exposed to the ideas of the Bourbaki group in thinking about and writing down mathematics. Edwin Spanier’s course on Lie groups used Chevalley’s text. He often said, “the sheer beauty of it all resonated with me.” This was the beginning of his lifelong passion for Lie groups — the continuous families of symmetries at the core of great parts of geometry, mathematical physics, and even algebra. His work ultimately touched almost every corner of Lie theory: algebraic groups and invariant theory, the geometry of homogeneous spaces, representation theory, geometric quantization and symplectic geometry, Lie algebra cohomology, Hamiltonian mechanics, and much more.

Kostant received an MS in mathematics in 1951, and under Irving Segal, his PhD in 1954, with a thesis titled, “Representations of a Lie algebra and its enveloping algebra on a Hilbert space.”

Between 1953 and 1956 Kostant was a member of the Institute for Advanced Study in Princeton. In 1955-56 he was a Higgins Lecturer at Princeton University, where he investigated the “holonomy groups” arising in differential geometry and worked to deepen our understanding of the structure of the so-called “simple” Lie algebras.

From 1956 to 1962, Kostant was a faculty member at the University of California at Berkeley, becoming a full professor in 1962. He was a member of the Miller Institute for Basic Research from 1958 to 1959.

In 1962 Kostant joined the faculty at MIT, where he remained for the rest of his career. He was devoted to his weekly seminars in Lie theory. Over the years he supervised more than 20 PhD students — among them, the differential geometer James Simons — and served as a mentor to many postdocs and young faculty members. He worked with great energy and success to build MIT’s faculty in Lie theory and representation theory.

In the early 1960s, Kostant began to develop his “method of coadjoint orbits” and “geometric quantization” relating symplectic geometry to infinite-dimensional representation theory. Geometric quantization provides a way to pass between the geometric pictures of Hamiltonian mechanics and the Hilbert spaces of quantum mechanics. His ideas have been at the heart of several very different mathematical disciplines ever since.

Again and again, Kostant was able to make powerful use of the relationships he found between deep and subtle mathematics and much simpler ideas. For example, in the early 1960s he proved a purely algebraic result about “tridiagonal” matrices. In the 1970s, he used that result and the ideas of geometric quantization to study Whittaker models (which are at the heart of the theory of automorphic forms) and the Toda lattice (a widely studied model for one-dimensional crystals).

Kostant received many awards and honors. He was a Guggenheim Fellow in 1959-60 (in Paris), and a Sloan Fellow in 1961-63. In 1962 he was elected to the American Academy of Arts and Sciences, and in 1978 to the National Academy of Sciences. In 1982 he was a fellow of the Sackler Institute for Advanced Studies at Tel Aviv University. In 1990 he was awarded the Steele Prize of the American Mathematical Society, in recognition of his 1975 paper, “On the existence and irreducibility of certain series of representations.”

In 2001, Kostant was a Chern Lecturer and Chern Visiting Professor at Berkeley. He received honorary degrees from the University of Córdoba in Argentina in 1989, the University of Salamanca in Spain in 1992, and Purdue University in 1997. The latter, from his alma mater, was an honorary Doctor of Science degree, citing his fundamental contributions to mathematics and the inspiration he and his work provided to generations of researchers.

In May 2008, the Pacific Institute for Mathematical Sciences hosted a conference: “Lie Theory and Geometry: the Mathematical Legacy of Bertram Kostant,” at the University of British Columbia, celebrating the life and work of Kostant in his 80th year. In 2012, he was elected to the inaugural class of fellows of the American Mathematical Society. Last June, Kostant traveled to Rio de Janeiro for the Colloquium on Group Theoretical Methods in Physics, where he received the prestigious Wigner Medal, “for his fundamental contributions to representation theory that led to new branches of mathematics and physics.”

Kostant is survived by his wife of 49 years, Ann; daughter Abbe Kostant Smerling of Lexington, Massachusetts; son Steven Kostant of Chevy Chase, Maryland; daughter Elizabeth Loew of Stoughton, Massachusetts; son David Amiel of Glendale, California; daughter Shoshanna Kostant of Boston, Massachusetts; nine grandchildren; and two great-grandchildren.

A memorial will be held at MIT in late May. Further information will be posted on the MIT Department of Mathematics website.
 


Original Link

Episode 99: Tim Cook’s HomeKit setup and Echo mania

This week we have sales estimates on the Amazon Echo, a new way to unlock your August locks and a hub that may talk to both HomeKit and legacy Z-Wave and ZigBee connected devices. We also cover several networking stories ahead of Mobile World Congress involving AT&T’s IoT network, a satellite-backed LoRa network and Nokia’s plans to offer an IoT-grid network on a wholesale basis. Finally, I explain what worked and what didn’t about my effort to secure my home by splitting it into two networks. Kevin also discusses the new Google smartwatches and we share Tim Cook’s HomeKit routines.

This week’s guest runs the Techstars IoT accelerator and drives investing for the Techstars Fund in the internet of things. Jenny Fielding explains the trends she’s seeing in startups, what makes a good IoT exit and some of the challenges facing industrial internet startups. She also talks about how to get around them and shares the secret beginnings of Sphero, the maker of the BB-8 toy robot. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Jenny Fielding, managing director of Techstars IoT
Sponsors: Ayla Networks and SpinDance

  • If you have an Echo, buy this one device to start a smart home
  • Satellite was made for the internet of things
  • Dividing networks doesn’t really work
  • Where will the next IoT hub develop?
  • What kind of IoT startup should I build?

Original Link

AI Predicts Autism From Infant Brain Scans

Twenty-two years ago, researchers first reported that adolescents with autism spectrum disorder had increased brain volume. During the intervening years, studies of younger and younger children showed that this brain “overgrowth” occurs in childhood.

Now, a team at the University of North Carolina, Chapel Hill, has detected brain growth changes linked to autism in children as young as 6 months old. And it piqued our interest because a deep-learning algorithm was able to use that data to predict whether a child at high risk of autism would be diagnosed with the disorder at 24 months.

The algorithm correctly predicted the eventual diagnosis in high-risk children with 81 percent accuracy and 88 percent sensitivity. That’s pretty damn good compared with behavioral questionnaires, which yield information that leads to early autism diagnoses (at around 12 months old) that are just 50 percent accurate.
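For context, accuracy and sensitivity are the standard screening metrics, counted over true/false positives and negatives (TP, TN, FP, FN); these definitions are textbook, not specific to the study:

\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{sensitivity} = \frac{TP}{TP + FN}

So 88 percent sensitivity means the model caught 88 percent of the high-risk infants who actually went on to receive an autism diagnosis at 24 months.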

“This is outperforming those kinds of measures, and doing it at a younger age,” says senior author Heather Hazlett, a psychologist and brain development researcher at UNC.

As part of the Infant Brain Imaging Study, a U.S. National Institutes of Health–funded study of early brain development in autism, the research team enrolled 106 infants with an older sibling who had been given an autism diagnosis, and 42 infants with no family history of autism. They scanned each child’s brain—no easy feat with an infant—at 6, 12, and 24 months.

The researchers saw no change in any of the babies’ overall brain growth between the 6- and 12-month marks. But there was a significant increase in the brain surface area of the high-risk children who were later diagnosed with autism. That increase in surface area was linked to brain volume growth that occurred between ages 12 and 24 months. In other words, in autism, the developing brain first appears to expand in surface area by 12 months, then in overall volume by 24 months.

The team also performed behavioral evaluations on the children at 24 months, when they were old enough to begin to exhibit the hallmark behaviors of autism, such as lack of social interest, delayed language, and repetitive body movements. The researchers note that the greater the brain overgrowth, the more severe a child’s autistic symptoms tended to be.

Though the new findings confirmed that brain changes associated with autism occur very early in life, the researchers did not stop there. In collaboration with computer scientists at UNC and the College of Charleston, the team built an algorithm, trained it with the brain scans, and tested whether it could use these early brain changes to predict which children would later be diagnosed with autism.

It worked well. Using just three variables—brain surface area, brain volume, and gender (boys are more likely to have autism than girls)—the algorithm identified eight out of 10 kids with autism. “That’s pretty good, and a lot better than some behavioral tools,” says Hazlett.

To train the algorithm, the team initially used half the data for training and the other half for testing—“the cleanest possible analysis,” according to team member Martin Styner, co-director of the Neuro Image Analysis and Research Lab at UNC. But at the request of reviewers, they subsequently performed a more standard 10-fold analysis, in which data is subdivided into 10 equal parts. Machine learning is then done 10 times, each time with 9 folds used for training and the 10th saved for testing. In the end, the final program gathers together the “testing only” results from all 10 rounds to use in its predictions.
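For readers who want the 10-fold protocol in concrete form, here is a minimal sketch in Python with scikit-learn. The random data and the plain logistic-regression classifier are stand-ins (the team used a deep network on real scan measurements); only the cross-validation structure mirrors the description above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(106, 3))      # stand-ins for surface area, volume, sex
y = rng.integers(0, 2, size=106)   # 1 = later diagnosed with autism, 0 = not

preds = np.empty_like(y)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(X, y):
    # Fit on 9 folds, predict only the held-out 10th fold
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

tp = np.sum((preds == 1) & (y == 1))
fn = np.sum((preds == 0) & (y == 1))
print(f"accuracy = {np.mean(preds == y):.2f}, sensitivity = {tp / (tp + fn):.2f}")

The point of pooling only the held-out predictions is that every subject ends up scored by a model that never saw that subject during training.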

Happily, the two types of analyses—the initial 50/50 and the final 10-fold—showed virtually the same results, says Styner. And the team was pleased with the prediction accuracy. “We do expect roughly the same prediction accuracy when more subjects are added,” said co-author Brent Munsell, an assistant professor at the College of Charleston, in an email to IEEE Spectrum. “In general, over the last several years, deep-learning approaches that have been applied to image data have proved to be very accurate.”

But, like our other recent stories on AI outperforming medical professionals, the results need to be replicated before we’ll see a computer-detected biomarker for autism. That will take some time, because it is difficult and expensive to get brain scans of young children for replication tests, emphasizes Hazlett.

And such an expensive diagnostic test will not necessarily be appropriate for all kids, she adds. “It’s not something I can imagine being clinically useful for every baby being born.” But if a child were found to have some risk for autism through a genetic test or other marker, imaging could help identify brain changes that put them at greater risk, she notes.   

Original Link

The Kentucky Startup That Is Teaching Coal Miners to Code

Coal’s role in American electricity generation is fast diminishing. A few large coal-mining companies declared bankruptcy last year, and several coal power plants have been shuttered. The biggest loss in all this has been felt by the tens of thousands of coal miners who have been laid off. But despite the U.S. president’s campaign pledges, those jobs are going to be hard to bring back. Besides competition from natural gas and cheaper renewables, coal mining, and mining in general, is losing jobs to automation.

But now, a small startup in the middle of Appalachian coal country has a forward-looking plan to put miners back to work. Pikeville, Ky.-based Bit Source has trained displaced coal industry veterans in Web and software development.

The retrained workers now design and develop websites, tools, games, and apps. Bit Source is proving that coal miners can, indeed, learn how to code, and do it well.

“Appalachia has been exporting coal for a long time,” says Justin Hall, the company’s president. “Now we want to export code. We’ve got blue-collar coders. It’s the vision of the future of work: How do you train a workforce and adapt it for technology?”

Bit Source’s co-owners are Lynn Parish, who worked for over 40 years in the coal industry, and Rusty Justice, who calls himself an “unapologetic hillbilly.” The duo, who run an excavation company, bought an old Coca-Cola bottling plant in 2014 and turned it into a software development space. They got over 900 applications in response to a radio call for 10 jobs.

The 10 they chose include a former mechanic, a mine safety inspector, and an underground miner. That first group was given 22 weeks of coding and software development training, paid for through U.S. Department of Labor grants, and its members now work full time at Bit Source.

Some were casually familiar with software, but there was a steep learning curve. They learned quickly enough, though, says Hall. They started with basic HTML Web pages, and moved on to learning CSS, JavaScript, and Drupal. Among other things, the coders have designed Pikeville’s economic development and tourism websites, as well as websites for the eastern Kentucky employment program and some local businesses. They are now getting certified in the Unity game engine. “Once they got in and started working with the tools and products we provided, they were able to problem-solve,” Hall says. “Coal miners are tech-oriented people. They’re engineers that get dirty.”

Others are picking up on this idea. A handful of companies in eastern Kentucky are now “rubber stamping our business model and basically doing what we’ve done,” Hall says. And it’s not just limited to Kentucky. In Waynesburg, heart of Pennsylvania’s coal country, the non-profit Mined Minds is offering free coding classes to laid-off coal workers. Its goal is to “seed the growth of technology hubs within areas in economic need in Pennsylvania and West Virginia, so that the information revolution can be the fuel to drive these areas into the future.”

The current challenge for Bit Source is to scale up and become a sustainable for-profit business. The company has received a wave of publicity, and for good reason. But Hall says that doesn’t necessarily mean more work. The local market has quickly dried up, and the region is economically strained because of the exodus of mining companies, he says. He notes that Bit Source is now seeking to expand by finding projects elsewhere.

Original Link

An Autonomous Passenger Drone Seems Like a Terrible Idea

At the World Government Summit in Dubai on Monday, the head of the city’s Roads and Transport Authority announced that the Chinese-made EHang 184 single-passenger drone will begin regular operations in July as an autonomous taxi service. Supervised over 4G from the ground, the drone would be able to autonomously fly a single slim passenger and one even slimmer piece of luggage across distances of up to 50 kilometers at speeds of around 100 kilometers per hour.

To be clear, this drone exists, it flies, and, strictly speaking, there is no specific technological reason why EHang and Dubai couldn’t do exactly what they’re saying they’re going to do. And that’s what’s so scary.

The reason we’re bothering to write about this thing at all is that it’s easy to get excited about the following promotional video from the Dubai transit agency, showing a real vehicle flying:

At this point, the important thing to remember about any autonomous drone, no matter how big, is that getting it to fly is not the difficult part. Strap enough motors and batteries to just about anything and you can get it off the ground. What’s difficult is controlling it, especially in adverse conditions, challenging environments, or if something goes wrong. 

As far as I know, there is no way of surviving a total (or perhaps even partial) motor or software failure on a drone like this. With an airplane or a helicopter, even if absolutely everything dies on you, you still have a reasonable chance (if you know what you’re doing) of landing the aircraft so that you’ll live, and maybe even walk away. With this drone, there’s simply nothing you could do—even if you were an experienced drone pilot, which most passengers won’t be.
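For a sense of why total power loss is unrecoverable, consider a back-of-the-envelope free-fall estimate (the 300-meter cruise altitude below is our illustrative assumption, not an EHang spec):

```python
import math

g = 9.81           # gravitational acceleration, m/s^2
altitude_m = 300   # assumed cruise altitude (illustrative, not EHang's spec)

# Worst case, ignoring drag: a multirotor with dead motors cannot
# autorotate the way a helicopter can, so it simply falls.
fall_time_s = math.sqrt(2 * altitude_m / g)
impact_speed_ms = g * fall_time_s

print(f"Fall time: {fall_time_s:.1f} s")          # ~7.8 s
print(f"Impact speed: {impact_speed_ms:.0f} m/s "
      f"({impact_speed_ms * 3.6:.0f} km/h)")      # ~77 m/s, ~276 km/h
```

Drag would trim those numbers somewhat, but nowhere near enough to matter.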

At the very least, it seems like a ballistic parachute would be an absolute necessity. But that still wouldn’t guarantee your safety, and your odds would be worse if you fly over urban areas most of the time. (We should mention that some research has looked at the possibility of maintaining control in drones that lose motors. But it’s not clear whether a human would be able to survive those kinds of maneuvers.)

Beyond the safety of the aircraft itself, there’s also the issue of autonomy. Anyone who’s learned to fly knows how much time you spend looking around for other aircraft that might pose a collision risk. Even with GPS, radar, PCAS, and the assistance of ground controllers, flying through crowded airspace or at low altitude is potentially dangerous. You can’t just set GPS waypoints on a map and expect to move mindlessly from one to another. I don’t see any kind of autonomous sense-and-avoid feature on this drone (with the possible exception of a camera gimbal underneath it that may be used for landing pad detection). The “will be monitored from the ground using 4G” assurance is fine in theory, but sketchy in practice. Let’s say you lose connectivity; there’s no good Plan B. Either the drone attempts to fly itself without (as far as we know) adequate sensors, or it hands things over to the passenger, who most likely has no idea what they’re doing.
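To make the lost-link argument concrete, here is a minimal sketch of the decision problem (our illustration of the reasoning above, not EHang’s actual control logic):

```python
# Every branch of the lost-link decision is bad: that is the point.
def lost_link_fallback(has_sense_and_avoid: bool, trained_pilot_aboard: bool) -> str:
    """Pick a fallback when the 4G supervision link drops."""
    if has_sense_and_avoid:
        # Requires onboard sensors the EHang 184 doesn't appear to carry.
        return "continue autonomously and land at the nearest safe site"
    if trained_pilot_aboard:
        # Defeats the point of an autonomous taxi; passengers aren't pilots.
        return "hand control to the human on board"
    # What's left: fly blind along preset waypoints through shared airspace.
    return "proceed on GPS waypoints with no collision avoidance"

print(lost_link_fallback(has_sense_and_avoid=False, trained_pilot_aboard=False))
```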

This is the fundamental problem with drone taxis: I’m not convinced there’s a safe way to fail. With an autonomous car, the failure mode is pulling over, or even just coming to a stop in the middle of the road if absolutely necessary. With an autonomous aircraft, you can’t do that. The only way I see passenger drones being realistic at all is if they include a trained human pilot, a highly distributed propulsion system, and a ballistic parachute. And to be honest, I’m skeptical that drone taxis would be much more than a novelty, anyway. The potential for autonomous cars to increase the speed and efficiency of short- and medium-distance travel is so enormous that, within the next decade or two, intracity drones would likely be more trouble than they’re worth.

I admit, it would be kinda cool to see a drone taxi service operating in Dubai this year. If they manage to pull it off, I’ll be very impressed, but personally, I don’t think I’d ever, ever ride in one.

Original Link

Safe at any speed

Yiou He is ready to get to full speed. On a recent weekend in California, she felt the thrill of victory: She and MIT classmates became the first to successfully shoot a levitating Hyperloop pod down a 1-mile vacuum tube during a SpaceX Hyperloop competition. “We proved our design worked,” she says with satisfaction.

Tesla Motors and SpaceX CEO Elon Musk envisions the Hyperloop as the “fifth mode of transportation.” The concept, dreamed up by Musk, involves delivering people between major cities through a system of tubes maintained in a near-vacuum. With air friction dramatically reduced, the pods travel at close to the speed of sound, using low-energy propulsion systems.
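The physics behind the near-vacuum is straightforward: aerodynamic drag scales linearly with air density, so pumping a tube down from sea-level pressure (about 101 kPa) to, say, 100 pascals cuts drag by roughly a factor of a thousand at any given speed. A rough sketch (the tube pressure, drag coefficient, and frontal area below are illustrative assumptions, not the MIT team’s figures):

```python
def drag_force(rho, v, cd=0.3, area=1.5):
    """Aerodynamic drag F = 0.5 * rho * v^2 * Cd * A.
    cd and area are illustrative pod values, not published specs."""
    return 0.5 * rho * v**2 * cd * area

v = 300.0                        # m/s, close to the speed of sound
rho_sea_level = 1.225            # air density at sea level, kg/m^3
rho_tube = 1.225 * 100 / 101325  # density at ~100 Pa, same temperature

print(f"Drag in open air: {drag_force(rho_sea_level, v) / 1e3:.1f} kN")  # ~24.8 kN
print(f"Drag in the tube: {drag_force(rho_tube, v):.1f} N")              # ~24.5 N
```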

Looking for ways to accelerate the development of a functional prototype, in 2015 Musk created an international competition challenging university students to design and build the best Hyperloop. In January of 2016, a group of MIT students beat out teams from 115 other universities and 20 countries to earn the Best Overall Design Award. Their victory set them on the road to their next task: to build a functional pod capable of safely shooting through a tunnel at hundreds of miles per hour.

The MIT Hyperloop Team pod flies through a mock Hyperloop tube at 90 kph during a recent SpaceX Hyperloop competition. Watch carefully, and you’ll see the wheel stops rotating, demonstrating that the pod achieved stable magnetic levitation.

Video: MIT Hyperloop Team

In Cambridge, He and the rest of the 35-person team — which includes students in aeronautics, mechanical engineering, electrical engineering, and business management — each worked more than 10 hours per week (sometimes much more) on the project while also attending classes and working on PhD theses and research. They designed a small pod for speeds of 250 mph — and last May unveiled the world’s first physical Hyperloop prototype.

Last month, they showed up in California to give it a go. The MIT Hyperloop Team was one of only three of the 27 competing teams that passed a litany of safety and design tests, earning the right to run their pods on the Hyperloop track. Of these, the Delft University of Technology (Netherlands) team earned the highest score overall. Technical University of Munich (Germany) secured the award for the fastest pod. MIT placed third overall and won an award for safety and reliability.

“This is an exciting project. And the competition is not a one-time thing,” says He. She adds that many of the current MIT team members will be moving on due to graduation. He, a graduate student in the Department of Electrical Engineering and Computer Science, is game to keep the pod work alive: “I’m ready to transfer knowledge to the next generation team.”

For Max Opgenoord, team captain and a graduate student in the Department of Aeronautics and Astronautics, the recent SpaceX event was the culmination of an effort that dates back to June 2015. Opgenoord and four other students started the team just after the SpaceX competition was announced. At first, they met in classrooms late at night. Eventually, they attracted other students who felt the same way they did about the project.

“A whole new transportation system is both super exciting and necessary,” says Opgenoord. He says a key goal this weekend was to accomplish magnetic levitation of the pod. “Can we show levitation?” he asked before leaving. “That is what matters.” He is thrilled, upon return, to say, yes, they could.

“If you watch the video, you can see that the wheel on the pod stops rotating at some point, showing that we have stable magnetic levitation,” says Opgenoord. He adds that TU Munich covered the longest distance, but with a wheeled pod. “Using magnetic levitation is much more efficient at higher speeds,” he says.

Speeds of 600 mph are envisioned for commuting between cities. In fact, the Hyperloop Pod Competition II at SpaceX this summer, which will be open to new and existing teams, is focused on a single criterion: maximum speed.

Opgenoord says the current MIT Hyperloop Team is signing off with the knowledge that in 2016, they unveiled the very best pod design — and now, they’ve built a safe and reliable pod that is both capable of magnetic levitation and eminently scalable. “Obviously, we wanted to come in first this weekend — but what we’ve accomplished is in reality worth more than the prizes.”

Meanwhile, He says there is more fun ahead. “We think it technically feasible to build a Hyperloop. You need a lot of political willpower and capital to do it, and that’s not something we’ve investigated — but technically, it is possible to do it. And that is just really cool.”


Original Link

What Role Do Household Incomes Play in the Full Cost of Electricity?

Original Link

Nuclear Experts: High Radiation Estimates at Fukushima No Surprise to Us

With two robot-probe operations apparently encountering increasingly high radiation levels inside the crippled Fukushima Daiichi nuclear plant during the past three weeks, some media reports suggested the radiation count was climbing rapidly. It didn’t help temper that view when plant operator Tokyo Electric Power Company (TEPCO) had to prematurely halt the second operation last Thursday to yank out the robot probe. Radiation had begun to dim the view of the attached cameras, threatening to leave the robot blinded and therefore unable to retrace its steps and escape the rubble.

The first operation, conducted at the end of January, used a remote-controlled robot equipped with a camera attached to a 10.5-meter-long telescopic rod. Captured video and stills showed images of a dark mass of rubble inside the No. 2 reactor’s primary containment vessel near the pedestal that supports the reactor vessel.

Analysis of the images, meant to determine whether the rubble encountered is corium (a mix of melted fuel and other materials), is still ongoing.

A TEPCO official explained that nuclear engineers conducted radiation lab tests prior to the operations taking place. This enabled the engineers to study the images taken in the first probe and estimate the different radiation levels—the highest of which was estimated to be 530 sieverts an hour. An estimate based on images taken during the second probe put the level as high as 650 sieverts an hour. (To put those numbers in context, when you take an abdominal X-ray, you’re exposed to about 4 millisieverts of radiation.)

TEPCO says it is not particularly surprised at these numbers given that its probes were approaching the reactor vessel. “And these are not direct measurements, but are based on the amount of image noise produced,” a company official emphasized. “There is also a plus or minus error margin of 30 percent.” 
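A quick calculation shows what that margin means in practice, and why even the low end of the range is extraordinary (the roughly 5-sievert figure for a typically fatal whole-body dose is a standard radiological benchmark, not a number from TEPCO):

```python
FATAL_DOSE_SV = 5  # approximate whole-body dose that is usually fatal

for estimate_sv_per_h in (530, 650):
    low, high = estimate_sv_per_h * 0.7, estimate_sv_per_h * 1.3
    seconds_to_fatal = FATAL_DOSE_SV / estimate_sv_per_h * 3600
    print(f"{estimate_sv_per_h} Sv/h -> range {low:.0f}-{high:.0f} Sv/h; "
          f"a fatal dose in about {seconds_to_fatal:.0f} seconds")
```

Even at 30 percent below the lower estimate, an unshielded person would absorb a fatal dose in well under a minute.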

Will Davis, a former U.S. Navy reactor operator and a communications consultant to the American Nuclear Society who has followed the Fukushima accident since it began, agrees with TEPCO’s conclusion.

“I don’t think we can realistically make assumptions about rising and lowering radiation levels in these camera-based detection methods yet,” he told IEEE Spectrum. “Not only is the presence of localized [radiation] point-sources possible, but there is also the possibility that streaming of radiation is taking place. In other words, we cannot say that all of the radiation in the containment vessel is coming from one unified lump of damaged fuel in the reactor vessel, and perhaps from a second unified lump sitting under it.”

Davis added that it is only to be expected that the closer the robot probes get to the damaged reactor, the higher the dose rates will be. “This has been expected since the beginning. And the high recent readings—even with the chance of up to 30 percent error—only confirm what experts already knew.”

He pointed out that comparably high radiation levels had been recorded in the aftermath of the Three Mile Island and Chernobyl nuclear accidents. 

TEPCO sent in the two robot probes to pave the way for a third operation planned for later this month. This third probe will use a remotely controlled Scorpion robot equipped with a camera, a dosimeter, and a temperature gauge. 

By contrast, the main purpose of the second probe was to remove sediment. The robot was outfitted with a water cannon and a scraper tool, as well as three cameras. The hope was to blast a path for the Scorpion, which cannot easily maneuver over uneven surfaces.

Despite the operation being halted early due to the impact of radiation, the company official said no further preparatory probes were planned. 

The official added that the information gleaned so far was not regarded as a negative, but rather as an aid in helping the engineers who are conducting these operations. “They are combining and analyzing everything right now, and this will help them determine whether to use the Scorpion or not, and what the next best step is to be.”

The American Nuclear Society’s Davis noted that just getting through the approach and planning stages that will precede the removal of the damaged nuclear fuel inside the reactor vessels and the primary containment vessels “is going to take a very long time, probably many, many years.”

But he also pointed out that while the new estimated radiation levels gleaned from the probes may shock people not following the cleanup closely, “it is important to remember that they are extremely localized and have no impact whatsoever on anyone outside the nuclear plant.”

Original Link

Ian Waitz to step down as dean of engineering

Ian A. Waitz will step down as MIT’s dean of engineering at the end of this academic year, concluding over six years of service.

Provost Martin Schmidt announced the news today in an email to the MIT community, praising Waitz’s collaborative vision that “has both bolstered local departments and encouraged the school and the Institute to reach beyond traditional disciplinary boundaries to expand the ways that engineering can address our most challenging problems.”

MIT President L. Rafael Reif adds, “Under Ian’s leadership, the School of Engineering has never been a stronger magnet for talent. With characteristic energy, optimism, and persistence, he has cultivated a dynamic community that unites the school’s many departments and links engineering to disciplines across MIT. And from the Sandbox Fund to the MIT Institute for Data, Systems, and Society, he spearheaded new initiatives that will have a lasting impact on our ability to develop our students’ ingenuity, tackle important problems for humanity, and deliver our best ideas to the world. We are deeply grateful to Ian for his collaborative leadership and his distinguished service.”

As dean of the School of Engineering (SoE), Waitz developed and implemented the school’s strategic plan, focusing on people, education, and innovation. He made a concerted effort to support faculty while refining the school’s primary academic departments and programs by increasing data-based decision-making, bolstering funding for teaching, addressing research underrecovery, and enabling more local control of resources and strategic direction.

“Without a doubt, the greatest thrill of the position has been the opportunity to live vicariously through the accomplishments of our exceptional students, staff, and faculty members,” Waitz said in a letter sent today to SoE colleagues. “It is a truly humbling experience when one understands the full breadth, depth, and impact of the School of Engineering at MIT. In partnership with our sister schools at MIT we are building a better world.”

Waitz, also the Jerome C. Hunsaker Professor of Aeronautics and Astronautics and former head of the Department of Aeronautics and Astronautics (from 2008 until his appointment as dean in 2011), has no immediate plans other than taking a year-long sabbatical.

“I am not sure what I will do next (the job does not leave a lot of free time for contemplating such things!), but I very much look forward to recharging, redirecting, and exploring new opportunities,” he conveyed in his letter to the SoE community. “Thank you for allowing me to serve you and the greatest engineering school on the planet.”

Of particular note during Waitz’s tenure has been the launch of two new Institute-encompassing endeavors: the Institute for Medical Engineering and Science (IMES) and the Institute for Data, Systems, and Society (IDSS). He has also worked to support and strengthen all of the school’s academic departments, including a renewal of civil and environmental engineering and growth in nuclear science and engineering.

Novel opportunities in residential education were also priorities for Waitz as dean. He co-launched the MIT Beaver Works Center, which supports collaborative efforts between Lincoln Laboratory and the MIT campus, and was a supporter and early participant in MITx and edX. He worked to strengthen several key MIT-wide educational programs, including the Gordon Engineering Leadership Program, the Undergraduate Practice Opportunities Program, and activities within the Office of Engineering Outreach Programs. Waitz worked with department heads to create ways for undergraduate students to pursue more flexible degrees and take courses remotely, and is currently championing a novel school-wide undergraduate degree option. Under his leadership, financial support for teaching in the school grew by over 30 percent.

In parallel, Waitz helped spark new programs and spaces for innovation and entrepreneurship, including the creation of the MIT Sandbox Innovation Fund, which provides all MIT students with an opportunity to move innovative ideas forward. He was a key part of a process that catalyzed the MIT Innovation Initiative, and he successfully articulated the need for expanded makerspaces on campus. Waitz also lobbied on behalf of the School of Engineering for the creation of MIT.nano, a new 200,000-square-foot center for nanoscience and nanotechnology, due to open in 2018.

Waitz established resource development personnel in all departments, which has led to a nearly threefold increase in yearly giving to the SoE during his tenure. He also strengthened partnerships with alumni, industry, and donors by highlighting the benefits of engaging with MIT broadly.

Waitz joined the MIT faculty in 1991, after earning his BS in 1986 from Penn State University, his MS in 1988 from George Washington University, and his PhD in 1991 from Caltech. In addition to scholarly publications, Waitz has contributed to several influential policy documents and scientific assessments, including a report to Congress on aviation and the environment. He holds three patents and has consulted for many organizations. He is a member of the National Academy of Engineering, a fellow of the American Institute of Aeronautics and Astronautics, and a member of the American Society of Mechanical Engineers and the American Society for Engineering Education. A dedicated teacher, he was honored with the 2002 MIT Class of 1960 Innovation in Education Award and an appointment as an MIT MacVicar Faculty Fellow in 2003.

Schmidt plans to appoint a faculty committee to advise him on the selection of the next dean of engineering. Members of the MIT community are welcome to send suggestions and ideas to engineering-search@mit.edu.


Original Link

A new contrast agent for MRI

A new, specially coated iron oxide nanoparticle developed by a team at MIT and elsewhere could provide an alternative to conventional gadolinium-based contrast agents used for magnetic resonance imaging (MRI) procedures. In rare cases, the currently used gadolinium agents have been found to produce adverse effects in patients with impaired kidney function.

The advent of MRI technology, which is used to observe details of specific organs or blood vessels, has been an enormous boon to medical diagnostics over the last few decades. About a third of the 60 million MRI procedures done annually worldwide use contrast-enhancing agents, mostly containing the element gadolinium. While these contrast agents have mostly proven safe over many years of use, some rare but significant side effects have shown up in a very small subset of patients. There may soon be a safer substitute thanks to this new research.

In place of gadolinium-based contrast agents, the researchers have found that they can produce similar MRI contrast with tiny nanoparticles of iron oxide that have been treated with a zwitterion coating. (Zwitterions are molecules that have areas of both positive and negative electrical charges, which cancel out to make them neutral overall.) The findings are being published this week in the Proceedings of the National Academy of Sciences, in a paper by Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT; He Wei, an MIT postdoc; Oliver Bruns, an MIT research scientist; Michael Kaul at the University Medical Center Hamburg-Eppendorf in Germany; and 15 others.

Contrast agents, injected into the patient during an MRI procedure and designed to be quickly cleared from the body by the kidneys afterwards, are needed to make fine details of organ structures, blood vessels, and other specific tissues clearly visible in the images. Some agents produce dark areas in the resulting image, while others produce light areas. The primary agents for producing light areas contain gadolinium.

Iron oxide particles have largely been used as negative (dark) contrast agents, but radiologists vastly prefer positive (light) agents such as those based on gadolinium, since negative contrast can sometimes be difficult to distinguish from certain imaging artifacts and internal bleeding. But while the gadolinium-based agents have become the standard, evidence shows that in some very rare cases they can lead to an untreatable condition called nephrogenic systemic fibrosis, which can be fatal. In addition, evidence now shows that gadolinium can build up in the brain, and although no effects of this buildup have yet been demonstrated, the FDA is investigating it for potential harm.

“Over the last decade, more and more side effects have come to light” from the gadolinium agents, Bruns says, so that led the research team to search for alternatives. “None of these issues exist for iron oxide,” at least none that have yet been detected, he says.

The key new finding by this team was to combine two existing techniques: making very tiny particles of iron oxide, and attaching certain molecules (called surface ligands) to the outsides of these particles to optimize their characteristics. The iron oxide inorganic core is small enough to produce a pronounced positive contrast in MRI, and the zwitterionic surface ligand, which was recently developed by Wei and coworkers in the Bawendi research group, makes the iron oxide particles water-soluble, compact, and biocompatible.

The combination of a very tiny iron oxide core and an ultrathin ligand shell leads to a total hydrodynamic diameter of 4.7 nanometers, below the 5.5-nanometer renal clearance threshold. This means the coated iron oxide should quickly clear through the kidneys and not accumulate. Renal clearance is one important respect in which the particles perform comparably to gadolinium-based contrast agents.
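The arithmetic of that size budget is simple: the hydrodynamic diameter is the inorganic core plus twice the ligand-shell thickness. A sketch (the individual core and shell dimensions are illustrative choices that match the reported 4.7-nanometer total; the paper’s exact breakdown may differ):

```python
core_diameter_nm = 3.0   # illustrative iron oxide core size
ligand_shell_nm = 0.85   # illustrative zwitterionic shell thickness

hydrodynamic_nm = core_diameter_nm + 2 * ligand_shell_nm
renal_threshold_nm = 5.5  # renal clearance cutoff cited above

print(f"Hydrodynamic diameter: {hydrodynamic_nm:.2f} nm")   # 4.70 nm
print("Below renal clearance threshold:", hydrodynamic_nm < renal_threshold_nm)
```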

Now that initial tests have demonstrated the particles’ effectiveness as contrast agents, Wei and Bruns say the next step will be to do further toxicology testing to show the particles’ safety, and to continue to improve the characteristics of the material. “It’s not perfect. We have more work to do,” Bruns says. But because iron oxide has been used for so long and in so many ways, even as an iron supplement, any negative effects could likely be treated by well-established protocols, the researchers say. If all goes well, the team is considering setting up a startup company to bring the material to production.

For some patients who are currently excluded from getting MRIs because of potential side effects of gadolinium, the new agents “could allow those patients to be eligible again” for the procedure, Bruns says. And, if it does turn out that the accumulation of gadolinium in the brain has negative effects, an overall phase-out of gadolinium for such uses could be needed. “If that turned out to be the case, this could potentially be a complete replacement,” he says.

Ralph Weissleder, a physician at Massachusetts General Hospital who was not involved in this work, says, “The work is of high interest, given the limitations of gadolinium-based contrast agents, which typically have short vascular half-lives and may be contraindicated in renally compromised patients.”

The research team included researchers in MIT’s chemistry, biological engineering, nuclear science and engineering, brain and cognitive sciences, and materials science and engineering departments and its program in Health Sciences and Technology; and at the University Medical Center Hamburg-Eppendorf; Brown University; and the Massachusetts General Hospital. It was supported by the MIT-Harvard NIH Center for Cancer Nanotechnology, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the NIH-funded Laser Biomedical Research Center, the MIT Deshpande Center, and the European Union Seventh Framework Program.


Original Link

The heart of a far-off star beats for its planet

For the first time, astronomers from MIT and elsewhere have observed a star pulsing in response to its orbiting planet.

The star, which goes by the name HAT-P-2, is about 400 light years from Earth and is circled by a gas giant measuring eight times the mass of Jupiter — one of the most massive exoplanets known today. The planet, named HAT-P-2b, tracks its star in a highly eccentric orbit, flying extremely close to and around the star, then hurtling far out before eventually circling back around.

The researchers analyzed more than 350 hours of observations of HAT-P-2 taken by NASA’s Spitzer Space Telescope, and found that the star’s brightness appears to oscillate ever so slightly every 87 minutes. In particular, the star seems to vibrate at exact harmonics, or multiples of the planet’s orbital frequency — the rate at which the planet circles its star.
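A quick check shows what sitting at an “exact harmonic” means in practice (the roughly 5.63-day orbital period of HAT-P-2b used below is the published value for this system):

```python
orbital_period_min = 5.63 * 24 * 60  # HAT-P-2b's ~5.63-day orbit, in minutes
pulsation_period_min = 87.0          # the observed oscillation period

harmonic = orbital_period_min / pulsation_period_min
nearest = round(harmonic)

print(f"Harmonic number: {harmonic:.2f} (nearest integer: {nearest})")
print(f"Period of that exact harmonic: {orbital_period_min / nearest:.1f} min")
```

An 87-minute signal lies close to the 93rd multiple of the planet’s orbital frequency.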

The precisely timed pulsations have led the researchers to believe that, contrary to most theoretical model-based predictions of exoplanetary behavior, HAT-P-2b may be massive enough to periodically distort its star, making the star’s molten surface flare, or pulse, in response.

“We thought that planets cannot really excite their stars, but we find that this one does,” says Julien de Wit, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “There is a physical link between the two, but at this stage, we actually can’t explain it. So these are mysterious pulsations induced by the star’s companion.”

De Wit is the lead author of a paper detailing the results, published today in Astrophysical Journal Letters.

Getting a pulse

The team came upon the stellar pulsations by chance. Originally, the researchers sought to generate a precise map of an exoplanet’s temperature distribution as it orbits its star. Such a map would help scientists track how energy is circulated through a planet’s atmosphere, which can give clues to an atmosphere’s wind patterns and composition.

With this goal in mind, the team viewed HAT-P-2 as an ideal system: Because the planet has an eccentric orbit, it seesaws between temperature extremes, turning cold as it moves far away from its star, then rapidly heating as it swings extremely close.

“The star dumps an enormous amount of energy onto the planet’s atmosphere, and our original goal was to see how the planet’s atmosphere redistributes this energy,” de Wit says.

The researchers obtained 350 hours of observations of HAT-P-2, taken intermittently by Spitzer’s infrared telescope between July 2011 and November 2015. The dataset is one of the largest ever taken by Spitzer, giving de Wit and his colleagues plenty of observations with which to detect the incredibly tiny signals required to map an exoplanet’s temperature distribution.

The team processed the data and focused on the window in which the planet made its closest approach, passing first in front of and then behind the star. During these periods, the researchers measured the star’s brightness to determine the amount of energy, in the form of heat, transferred to the planet.

Each time the planet passed behind the star, the researchers saw something unexpected: Instead of a flat line, representing a momentary drop as the planet is masked by its star, they observed tiny spikes — oscillations in the star’s light, with a period of about 90 minutes, that happened to be exact multiples of the planet’s orbital frequency.

“They were very tiny signals,” de Wit says. “It was like picking up the buzzing of a mosquito passing by a jet engine, both miles away.”

Lots of theories, one big mystery

Stellar pulsations can occur constantly as a star’s surface naturally boils and turns over. But the tiny pulsations detected by de Wit and his colleagues seem to be in concert with the planet’s orbit. The signals, they concluded, must not be due to anything in the star itself, but to either the circling planet or an effect in Spitzer’s instruments.

The researchers ruled out the latter after modeling all the possible instrumental effects, such as vibration, that could have affected the measurements, and finding that none of the effects could have produced the pulsations they observed.

“We think these pulsations must be induced by the planet, which is surprising,” de Wit says. “We’ve seen this in systems with two rotating stars that are supermassive, where one can really distort the other, release the distortion, and the other one vibrates. But we did not expect this to happen with a planet — even one as massive as this.”

“This is really exciting because, if our interpretations are correct, it tells us that planets can have a significant impact on physical phenomena operating in their host-stars,” says co-author Victoria Antoci, a postdoc at Aarhus University in Denmark. “In other words, the star ‘knows’ about its planet and reacts to its presence.”

The team has some theories as to how the planet might be causing its star to pulse. For example, perhaps the planet’s transient gravitational pull is disturbing the star just enough to tip it toward a self-pulsating phase. There are stars that naturally pulse, and perhaps HAT-P-2b is pushing its star toward that state, the way adding salt to a simmering pot of water can trigger it to boil over. De Wit says this is just one of several possibilities, but getting to the root of the stellar pulsations will require much more work.

“It’s a mystery, but it’s great, because it demonstrates our understanding of how a planet affects its star is not complete,” de Wit says. “So we’ll have to move forward and figure out what’s going on there.”

This research was supported, in part, by NASA’s Jet Propulsion Laboratory and Caltech.



Original Link

Self-made stars

The Phoenix cluster is an enormous accumulation of about 1,000 galaxies, located 5.7 billion light years from Earth. At its center lies a massive galaxy, which appears to be spitting out stars at a rate of about 1,000 per year. Most other galaxies in the universe are far less productive, squeaking out just a few stars each year, and scientists have wondered what has fueled the Phoenix cluster’s extreme stellar output.

Now scientists from MIT, the University of Cambridge, and elsewhere may have an answer. In a paper published today in the Astrophysical Journal, the team reports observing jets of hot, 10-million-degree gas blasting out from the central galaxy’s black hole and blowing large bubbles out into the surrounding plasma.

These jets normally act to quench star formation by blowing away cold gas — the main fuel that a galaxy consumes to generate stars. However, the researchers found that the hot jets and bubbles emanating from the center of the Phoenix cluster may also have the opposite effect, producing cold gas that in turn rains back onto the galaxy, fueling further starbursts. This suggests that the black hole has found a way to recycle some of its hot gas as cold, star-making fuel.

“We have thought the role of black hole jets and bubbles was to regulate star formation and to keep cooling from happening,” says Michael McDonald, assistant professor of physics in MIT’s Kavli Institute for Astrophysics and Space Research. “We kind of thought they were one-trick ponies, but now we see they can actually help cooling, and it’s not such a cut-and-dried picture.”

The new findings help to explain the Phoenix cluster’s exceptional star-producing power. They may also provide new insight into how supermassive black holes and their host galaxies mutually grow and evolve.

McDonald’s co-authors include lead author Helen Russell, an astronomer at Cambridge University; and others from the University of Waterloo, the Harvard-Smithsonian Center for Astrophysics, the University of Illinois, and elsewhere.

Hot jets, cold filaments

The team analyzed observations of the Phoenix cluster gathered by the Atacama Large Millimeter Array (ALMA), a collection of 66 large radio telescopes spread over the desert of northern Chile. In 2015, the group obtained permission to direct the telescopes at the Phoenix cluster to measure its radio emissions and to detect and map signs of cold gas.

The researchers looked through the data for signals of carbon monoxide, a gas that is present wherever there is cold hydrogen gas. They then converted the carbon monoxide emissions to hydrogen gas, to generate a map of cold gas near the center of the Phoenix cluster. The resulting picture was a puzzling surprise.
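The conversion step rests on an empirical scaling: a velocity-integrated carbon monoxide line intensity maps to a hydrogen column density through the so-called X-factor. A minimal sketch (the factor below is the commonly used Milky Way value and the example intensity is illustrative; the factor appropriate for the Phoenix cluster may differ):

```python
X_CO = 2.0e20   # H2 molecules per cm^2 per (K km/s); standard Galactic value
W_CO = 10.0     # example velocity-integrated CO intensity, K km/s (illustrative)

N_H2 = X_CO * W_CO  # inferred H2 column density, cm^-2
print(f"Inferred H2 column density: {N_H2:.1e} cm^-2")
```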

“You would expect to see a knot of cold gas at the center, where star formation happens,” McDonald says. “But we saw these giant filaments of cold gas that extend 20,000 light years from the central black hole, beyond the central galaxy itself. It’s kind of beautiful to see.”

The team had previously used NASA’s Chandra X-Ray Observatory to map the cluster’s hot gas. These observations produced a picture in which powerful jets flew out from the black hole at close to the speed of light. Further out, the researchers saw that the jets inflated giant bubbles in the hot gas.

When the team superimposed its picture of the Phoenix cluster’s cold gas onto the map of hot gas, they found a “perfect spatial correspondence”: The long filaments of frigid, 10-kelvin gas appeared to be draped over the bubbles of hot gas.

“This may be the best picture we have of black holes influencing the cold gas,” McDonald says.

Feeding the black hole

What the researchers believe is happening is that, as the jets inflate bubbles of hot, 10-million-degree gas near the black hole, they drag behind them a wake of slightly cooler, 1-million-degree gas. The bubbles eventually detach from the jets and float further out into the galaxy cluster, where each bubble’s trail of gas cools, forming long filaments of extremely cold gas that condense and rain back onto the black hole as fuel for star formation.

“It’s a very new idea that the bubbles and jets can actually influence the distribution of cold gas in any way,” McDonald says.

Scientists have estimated that there is enough cold gas near the center of the Phoenix cluster to keep producing stars at a high rate for another 30 to 40 million years. Now that the researchers have identified a new feedback mechanism that may supply the black hole with even more cold gas, the cluster’s stellar output may continue for much longer.

“As long as there’s cold gas feeding it, the black hole will keep burping out these jets,” McDonald says. “But now we’ve found that these jets are making more food, or cold gas. So you’re in this cycle that, in theory, could go on for a very long time.”

He suspects the reason the black hole is able to generate fuel for itself might have something to do with its size. If the black hole is relatively small, it may produce jets that are too weak to completely blast cold gas away from the cluster.

“Right now [the black hole] may be pretty small, and it’d be like putting a civilian in the ring with Mike Tyson,” McDonald says. “It’s just not up to the task of blowing this cold gas far enough away that it would never come back.”

The team is hoping to determine the mass of the black hole, as well as identify other, similarly extreme starmakers in the universe.


Original Link

Congress to Curtail Methane Monitoring

Innovation in methane detection is booming amid tightened state and federal standards for oil and gas drillers and targeted research funding. Technology developers, however, may see their market diminished by a regulation-averse Republican Congress and president. Senate Republicans are expected to attempt to complete a first strike against federal methane detection and emissions rules as soon as this week.

Methane is a potent greenhouse gas responsible for an estimated one-fifth to one-quarter of the global warming caused by humans since the Industrial Revolution, and oil and gas production puts more methane into the atmosphere than any other activity in the United States. Global warming, however, is not a moving issue for Republican leaders or President Donald Trump, who reject the scientific consensus on anthropogenic climate change.

What moves them are complaints from industries that “burdensome” regulations unnecessarily hinder job growth and—in the case of methane rules—domestic oil and gas output. The House of Representatives got the methane deregulation ball rolling on 3 February, voting along party lines to quash U.S. Bureau of Land Management rules designed to prevent more than a third of methane releases from nearly 100,000 oil and gas wells and associated equipment operating on federal and tribal lands.

The House vote is one of the first applications of the hitherto obscure Congressional Review Act of 1996, which gives Congress 60 legislative days to overturn new regulations. If the Senate concurs and President Trump signs, the resulting act will scrap the bureau’s ban on methane venting and flaring and its leak-monitoring requirements. It will also restrict the bureau from ever revisiting those mandates.

Next up on the Republican agenda for methane: Environmental Protection Agency rules that govern methane venting and leak monitoring at all new oil and gas operations across the United States.

Experts say the proposed rollbacks are heavy-handed and could have long-lasting effects. “A huge mistake” is what Mark Boling calls them. Boling is executive vice president for the Houston-based natural gas producer Southwestern Energy and says he prefers stronger voluntary action by industry over regulation. But Boling says Congress is at risk of overreaching.

Boling predicts that tying regulators’ hands on methane monitoring will have “unintended consequences” for oil and gas technology. “If we don’t have a requirement that industry…do something to improve the way it conducts its operations, then a market will not be created to drive innovation,” he says. 

Aileen Nowlan agrees. She manages the Methane Detectors Challenge launched by the Environmental Defense Fund, a New York City–based advocacy group. Eliminating monitoring mandates would penalize the minority of oil and gas producers that have really stepped up leak detection and repairs, says Nowlan, as well as the technology developers striving to give them better tools.

“The sign from the government that methane is less of a priority could dissuade entrepreneurs from putting their focus here,” says Nowlan. Technology development could well shift to other countries as a result, she predicts. 

That would be a shame given that U.S.-based sensor developers have made strides recently, thanks to spreading mandates and the financial support from both Nowlan’s challenge and the comparable MONITOR program funded by the Advanced Research Projects Agency-Energy (the U.S. Department of Energy’s tech incubator). 

Consider the continuous leak detection system from Longmont, Colo.–based Quanta3, which entered pilot testing in January at a Texas drill pad owned by Norwegian oil and gas giant Statoil. Quanta3 uses relatively inexpensive near-infrared tunable laser diodes developed for fiber-optic telecommunications, according to company founder Dirk Richter, a laser spectroscopy expert at the University of Colorado at Boulder. 
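Laser-based methane sensing of this kind boils down to the Beer-Lambert law: light at a methane absorption line is attenuated in proportion to gas concentration and optical path length. A minimal sketch of the inversion (the absorption coefficient and path length are placeholders, not Quanta3’s numbers):

```python
import math

def methane_ppm(i_over_i0, alpha_per_ppm_m, path_m):
    """Invert Beer-Lambert, I/I0 = exp(-alpha * C * L), for concentration C."""
    return -math.log(i_over_i0) / (alpha_per_ppm_m * path_m)

alpha = 4e-6   # placeholder absorption coefficient, per ppm per meter
path = 100.0   # placeholder optical path length, meters

# A 0.1 percent dip in transmitted intensity implies ~2.5 ppm of methane
print(f"{methane_ppm(0.999, alpha, path):.2f} ppm")
```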

Richter says methane monitoring via Quanta3’s systems will be “a couple of orders of magnitude” cheaper than via today’s commercial spectrometers, which can run US $75,000, or via handheld infrared video cameras that are labor intensive and relatively insensitive. (Under windy or cold conditions, IR cameras may detect no more than 10 percent of methane releases, according to Stanford University research.)

A few states such as Colorado and California have their own methane-emissions mandates that will continue to provide regulatory pressure, even in the event of federal rollbacks. Both Richter and Boling say they hope states improve those regulations, which generally specify regular inspection via infrared cameras, by speeding up the process for approving newer technology such as Quanta3’s. 

Richter, meanwhile, says his firm is preparing to survive under deregulation. Quanta3’s impressive 20-parts-per-billion sensitivity can reveal very small leaks, catching deteriorating equipment before a larger failure that could interrupt production or squander economically significant quantities of natural gas. “We want to provide something that provides value even in the absence of regulations,” says Richter. 

Original Link

Nanoelectrode Array Sees Signals From Inside a Network of Cells

Researchers at Harvard University have developed a nanoelectrode array capable of imaging the electrical signals within living cells. While other technologies have been able to measure these signals, this new complementary metal-oxide-semiconductor nanoelectrode array (called a CNEA) can measure these signals across an entire network of cells.

“It’s similar to an imager combining the light signal from each detector in a pixel array to form a picture; the CNEA combines the electrical signals from within each cell to map the network level electrical activities of the entire cell culture,” explained Donhee Ham, a professor at Harvard involved in the research, in an e-mail interview with IEEE Spectrum.

This network-level intracellular recording capability can be used, for example, to examine the effect of pharmaceuticals on a network of heart muscle tissue, enabling tissue-based screening of drug candidates. It could also help better understand how cells communicate with each other across a network.

An array of nine spiky nanoelectrodes waiting for a network of cells to grow atop them. Image: Harvard University/Nature Nanotechnology

In research described in the journal Nature Nanotechnology, the CNEA device fabricated by the Harvard team looks like a normal CMOS integrated circuit, except that there are nanoscale electrodes fabricated on the surface of the chip and a cell culture chamber on top. 

The CNEA device contains an array of 1,024 (32×32) pixels, with a center-to-center spacing (the pitch) of 126 micrometers. Besides the nanoelectrode, each pixel includes an amplifier to record electrophysiological events, a stimulator to manipulate the cell membrane voltage, and a memory to switch the pixel between recording and stimulation.
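As a rough mental model of that layout (our illustration; the chip’s real programming interface isn’t described in the paper), each pixel sits on a 126-micrometer grid and carries its own operating state:

```python
from dataclasses import dataclass

PITCH_UM = 126  # center-to-center pixel spacing reported for the CNEA

@dataclass
class Pixel:
    row: int
    col: int
    mode: str = "record"  # per-pixel memory selects recording or stimulation

    def position_um(self):
        """Physical center of this pixel on the 32x32 grid, in micrometers."""
        return (self.col * PITCH_UM, self.row * PITCH_UM)

array = [Pixel(r, c) for r in range(32) for c in range(32)]
array[0].mode = "stimulate"                 # flip one pixel to stimulation
print(len(array), array[33].position_um())  # 1024 pixels; (126, 126)
```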

In operation, the cells are cultured right on top of the chip and are in direct contact with the nanoelectrodes. “The electrical signals from the cells are collected by the nanoelectrodes, amplified and multiplexed on the chip, transferred through a customized printed circuit board into which the chip is plugged, and are finally read by a data-acquisition card and PC,” said Ham.

The look and operation of the CNEA device are very similar to those of microelectrode arrays (MEAs) and planar patch-clamp chips, which have been leading the way in all-electrical electrophysiological recording. However, the CNEA departs from these two technologies by combining their greatest strengths in one device while eliminating their weaknesses.

MEAs, for example, can only record signals from outside the cell. Those signals “do not have the high sensitivity characteristic of intracellular recording,” said Ham. Microfluidic patch-clamp arrays can record high-precision intracellular signals, but they can’t do so at the level of a network of cells.

A wave of electrical impulses sweeps across a network of cells. Video: Harvard University

There were two big challenges in fabricating the CNEA devices, according to Ham. The primary one was constructing the nanoelectrodes on top of the CMOS chip, which was fabricated in a foundry. The second issue was developing a front-end circuit—the amplifiers etc.—that could interface with such small—thus high-impedance—electrodes.

While Ham and his colleagues were able to overcome these challenges, some key engineering work still remains if the device is to be commercialized. In particular, the fabrication process must be standardized and the electrode/cell interface must be better understood and optimized.

The next version of the system is already in the works: “The upgraded CMOS circuit will have more electrodes and a finer pitch, with better circuit performance and more functionality at the pixel level,” explained Ham. “The optimized nanoelectrodes will improve the signals acquired from cells and will also allow the platform to be used for different types of cells in both the in vitro (cell culture) and ex vivo (tissue slices) systems.”

Original Link

Scientists make huge dataset of nearby stars available to public

The search for planets beyond our solar system is about to gain some new recruits.

Today, a team that includes MIT and is led by the Carnegie Institution for Science has released the largest collection of observations made with a technique called radial velocity, to be used for hunting exoplanets. The huge dataset, taken over two decades by the W.M. Keck Observatory in Hawaii, is now available to the public, along with an open-source software package to process the data and an online tutorial.

By making the data public and user-friendly, the scientists hope to draw fresh eyes to the observations, which encompass almost 61,000 measurements of more than 1,600 nearby stars.

“This is an amazing catalog, and we realized there just aren’t enough of us on the team to be doing as much science as could come out of this dataset,” says Jennifer Burt, a Torres Postdoctoral Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “We’re trying to shift toward a more community-oriented idea of how we should do science, so that others can access the data and see something interesting.”

Burt and her colleagues have outlined some details of the newly available dataset in a paper to appear in The Astronomical Journal. After taking a look through the data themselves, the researchers have detected over 100 potential exoplanets, including one orbiting GJ 411, the fourth-closest star to our solar system.

“There seems to be no shortage of exoplanets,” Burt says. “There are a ton of them out there, and there is a ton of science to be done.”

Splitting starlight

The newly available observations were taken by the High Resolution Echelle Spectrometer (HIRES), an instrument mounted on the Keck Observatory’s 10-meter telescope at Mauna Kea in Hawaii. HIRES is designed to split a star’s incoming light into a rainbow of color components. Scientists can then measure the precise intensity of thousands of color channels, or wavelengths, to determine characteristics of the starlight.

Early on, scientists found they could use HIRES’ output to estimate a star’s radial velocity — the very tiny movements a star makes either as a result of its own internal processes or in response to some other, external force. In particular, scientists have found that when a star moves toward and away from Earth in a regular pattern, it can signal the presence of an exoplanet orbiting the star. The planet’s gravity tugs on the star, changing the star’s velocity as the planet moves through its orbit.
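The wobble is tiny, which is why radial-velocity work demands meters-per-second precision. A standard back-of-the-envelope formula gives the semi-amplitude for a circular orbit; the planet below is a hypothetical Jupiter analog, not a star from the catalog:

```python
def rv_semi_amplitude(m_planet_mjup, period_years, m_star_msun, sin_i=1.0):
    """Approximate radial-velocity semi-amplitude in m/s for a circular orbit."""
    return 28.4329 * m_planet_mjup * sin_i * period_years**(-1/3) * m_star_msun**(-2/3)

# A Jupiter-mass planet on an 11.9-year orbit around a Sun-like star
print(f"{rv_semi_amplitude(1.0, 11.9, 1.0):.1f} m/s")  # ~12.5 m/s
```

A Jupiter analog tugs its star back and forth at only about 12 meters per second, roughly a sprinter’s pace.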

“[HIRES] wasn’t specifically optimized to look for exoplanets,” Burt says. “It was designed to look at faint galaxies and quasars. However, even before HIRES was installed, our team worked out a technique for making HIRES an effective exoplanet hunter.”

For two decades, these scientists have pointed HIRES at more than 1,600 “neighborhood” stars, all within a relatively close 100 parsecs, or 325 light years, from Earth. The instrument has recorded almost 61,000 observations, each lasting anywhere from 30 seconds to 20 minutes, depending on how precise the measurements needed to be. With all these data compiled, any given star in the dataset can have several days’, years’, or even more than a decade’s worth of observations.

“We recently discovered a six-planet system orbiting a star, which is a big number,” Burt says. “We don’t often detect systems with more than three to four planets, but we could successfully map out all six in this system because we had over 18 years of data on the host star.”

More eyes on the skies

Within the newly available dataset, the team has highlighted over 100 stars that are likely to host exoplanets but require closer inspection, either with additional measurements or further analysis of the existing data.

The researchers have, however, confirmed the presence of an exoplanet around GJ 411, which is the fourth-closest star to our solar system and has a mass that is roughly 40 percent that of our sun. The planet has an extremely tight orbit, circling the star in less than 10 days. Burt says that there is a good chance that others, looking through the dataset and combining it with their own observations, may find similarly intriguing candidates.

“We’ve gone from the early days of thinking maybe there are five or 10 other planets out there, to realizing almost every star next to us might have a planet,” Burt says.

HIRES will continue to record observations of nearby stars in the coming years, and the team plans to periodically update the public dataset with those observations. 

“This dataset will slowly grow, and you’ll be able to go on and search for whatever star you’re interested in and download all the data we’ve ever taken on it. The dataset includes the date, the velocity we measured, the error on that velocity, and measurements of the star’s activity during that observation,” Burt says. “Nowadays, with access to public analysis software like Systemic, it’s easy to load the data in and start playing with it.”
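For anyone who wants to try, a few lines are enough to load and plot a per-star velocity file of the kind Burt describes (the filename and column order here are assumptions for illustration; the release’s documentation specifies the actual format):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical filename; the public release documents the real naming scheme.
rv = pd.read_csv("GJ411_velocities.txt", sep=r"\s+",
                 names=["jd", "velocity_ms", "error_ms", "activity"])

plt.errorbar(rv["jd"], rv["velocity_ms"], yerr=rv["error_ms"], fmt=".")
plt.xlabel("Julian date")
plt.ylabel("Radial velocity (m/s)")
plt.show()
```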

Then, Burt says, the hunt for exoplanets can really take off.

“I think this opens up possibilities for anyone who wants to do this kind of work, whether you’re an academic or someone in the general public who’s excited about exoplanets,” Burt says. “Because really, who doesn’t want to discover a planet?”

This research was supported, in part, by the National Science Foundation.



Original Link

Astrobee: NASA’s Newest Robot for the International Space Station

Small, versatile, and autonomous, Astrobee will be getting to work on the ISS


The International Space Station will soon be getting some new robot occupants. Astrobee is a robotic cube packed with sensors, cameras, computers, and a propulsion system. It’s designed to help astronauts around the ISS with a variety of tasks.

While the robot is designed to fly freely aboard the ISS, for ground testing Astrobee is mounted on a sled that uses a jet of CO2 to create a low-friction air bearing above a perfectly flat (and enormous) block of granite. This lets the researchers simulate microgravity in two dimensions and test the robot’s propulsion and navigation systems. Once it’s up in space, the robot will consist of just the cube defined by the blue bumpers, without all of the hardware underneath it.

Last fall, IEEE Spectrum visited NASA Ames Research Center in Mountain View, Calif., to have a look at the latest Astrobee prototype and meet the team behind the robot.

NASA expects to have Astrobee in orbit at some point between July 2017 and June 2018. The agency will send three of them to the ISS, although it expects only two robots to be active at once: the third will be packed away in a space closet somewhere.

Read More: How NASA’s Astrobee Robot Is Bringing Useful Autonomy to the ISS

Original Link

Eight MIT faculty elected to the National Academy of Engineering

Eight MIT faculty members are among the 84 new members and 22 foreign associates elected to the National Academy of Engineering. This year’s newly elected members also include an impressive 21 MIT-affiliated alumni.

Election to the National Academy of Engineering (NAE) is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made outstanding contributions to “engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

The eight MIT faculty members elected this year are:

Paula Hammond, the David H. Koch Professor, head of the Department of Chemical Engineering, and member of the Koch Institute for Cancer Research, for contributions to self-assembly of polyelectrolytes, colloids, and block copolymers at surfaces and interfaces for energy and health care applications.

Daniel Hastings, the Cecil and Ida Green Education Professor in the Department of Aeronautics and Astronautics and chief executive officer and director of the Singapore-MIT Alliance for Research and Technology, for contributions in spacecraft and space system-environment interactions, space system architecture, and leadership in aerospace research and education.

Dara Entekhabi, the Bacardi and Stockholm Water Foundations Professor in the departments of Civil and Environmental Engineering and Earth, Atmospheric and Planetary Sciences, for leadership in the hydrologic sciences including the scientific underpinnings for satellite observation of the Earth’s water cycle.

Dina Katabi, the Andrew (1956) and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), for contributions to network congestion control and to wireless communications.

Alexander H. Slocum, the Pappalardo Professor of Mechanical Engineering in the Department of Mechanical Engineering, for contributions to precision machine design and manufacturing across multiple industries and leadership in engineering education.

Michael S. Strano, the Carbon P. Dubbs Professor of Chemical Engineering in the Department of Chemical Engineering, for contributions to nanotechnology, including fluorescent sensors for human health and solar and thermal energy devices.

Mehmet Toner, professor of health sciences at the Harvard-MIT Division of Health Sciences and Technology and the Helen Andrus Benedict Professor of Surgery at Massachusetts General Hospital, for engineering novel microelectromechanical and microfluidic point-of-care devices that improve detection of cancer, prenatal genetic defects, and infectious disease.

Ioannis Yannas, professor of polymer science and engineering in the Department of Mechanical Engineering, for co-developing the first commercially reproducible artificial skin that facilitates new growth, saving the lives of thousands of burn victims.

“This is a great class of new NAE members who are affiliated with MIT,” says Ian A. Waitz, dean of the School of Engineering and the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics. “It is wonderful to see our faculty and alumni being honored by their peers for contributions of the highest level.”

Including this year’s inductees, 142 current MIT faculty and staff are members of the National Academy of Engineering. This week’s announcement brings the NAE’s total U.S. membership to 2,281 and its number of foreign members to 249.

Twenty-one MIT alumni, including some of the newly elected members listed above, were also named to the NAE this year. They include: Ellen Arruda PhD ’92; Aziz Asphahani PhD ’75; David Boger ’83; Mark Daskin ’74; Bailey Diffie ’65; Eric Ducharme SM ’85, ScD ’87; Dara Entekhabi PhD ’90; Paula Hammond ’84; Daniel Hastings PhD ’80; Dina Katabi SM ’99, PhD ’03; Jennifer A. Lewis ScD ’91; Steven B. Lipner ’65 SM ’66; Robert McCabe SM ’81; E. Sarah Slaughter ’82, SM ’87, PhD ’91; Alexander Slocum, Jr. ’82; Megan Smith ’86; Darlene Solomon PhD ’85; Mehmet Toner SM ’85, PhD ’89; George Varghese PhD ’93; Ioannis Yannas SM ’59; and Katherine Yelick ’82.


Topics: Awards, honors and fellowships, Faculty, Alumni/ae, National Academy of Engineering (NAE), School of Engineering, Electrical Engineering & Computer Science (eecs), Aeronautical and astronautical engineering, Civil and environmental engineering, Chemical engineering, EAPS, Mechanical engineering, Harvard-MIT Health Sciences and Technology, Singapore-MIT Alliance for Research and Technology (SMART), Computer Science and Artificial Intelligence Laboratory (CSAIL), Koch Institute

Original Link