ALU

Energy

Eskom ‘should consider selling’ Medupi, Kusile

Eskom should consider selling two coal-fired plants that rank among the world’s biggest to repair the state-owned utility’s finances, according to the head of South Africa’s biggest bank by market value. Original Link

Gordhan cracks the whip at Eskom: ‘We have to serve SA better’

Public enterprises minister Pravin Gordhan on Thursday announced immediate plans to put an end to the load shedding that has been a daily occurrence countrywide since 29 November. Original Link

Bloated Eskom facing looming ‘death spiral’

Bloated by debt, bled by corruption and battered by structurally declining sales, Eskom is facing what’s known in the industry as a “death spiral”. Original Link

Eskom rescue plan to be launched only next year

Eskom expects to launch its turnaround strategy in 2019 after at least two delays of its much-anticipated recovery plan, as power cuts blanket the nation. Original Link

Back to blackouts: SA in the dark as Eskom stumbles

Eskom’s warning that the country was threatened by months of rotating blackouts became a reality in less than 24 hours. Original Link

Brace for impact: Eskom cuts 2GW from the grid

Eskom will cut 2GW of supply from the national grid because it lost additional generating units overnight. Original Link

Eskom is broke, and you’re going to pay the price

Eskom is locked into a permanent loss situation and revenue is structurally limited. Expenses have ballooned due to inefficiencies, and electricity tariffs are not cost-reflective. Original Link

Eskom once again imposes load shedding

Eskom on Thursday introduced stage-one rotational load shedding, saying it would be in effect from midday to 10pm. Original Link

Eskom in ‘severe difficulty’ as interim profit plunges 89%

Eskom’s first-half profit plunged 89% and the situation at the South African state-owned power utility is likely to worsen in the next six months, chairman Jabu Mabuza said on Wednesday. Original Link

Eskom a threat to SA’s finances, AG says

The finances of Eskom, Transnet and other companies owned by South Africa’s government have deteriorated to such an extent that they now pose a significant risk to the nation’s finances, the auditor-general said. Original Link

Load shedding regime now goes all the way to stage 8

Struggling power utility Eskom has extended its load shedding regime from four stages that allow up to 4GW of demand to be shed, or cut, to eight stages providing for up to 8GW to be shed. Original Link

Anatomy of a crisis: why SA is on the brink of rolling blackouts

Eskom has shut down 11 power station units due for major maintenance because it lacks the funds to fix them. Original Link

Report urges Eskom, Transnet criminal probes

A forensic report into Eskom and Transnet found some executives and board members failed to act in the companies’ best interests and recommended criminal investigations against them. Original Link

Load shedding could be back by Christmas

State-owned power utility Eskom may be forced to implement controlled blackouts before the end of the year because of pressure on the national system, CEO Phakamani Hadebe said on Friday. Original Link

Battery boom to attract $620-billion in investment by 2040

The battery boom is coming to China, California and basically everywhere else – and it will be even bigger than previously thought. Original Link

Real risk of load shedding as Eskom faces coal crisis

Eskom’s coal stockpile situation has deteriorated further, with five power stations having less than 10 days’ stock. Original Link

Your Power System Failed its Conducted EMI Test – Now What?

You just got your test results back from the conducted emissions test lab, and your product failed. Now What?

Power supply switching noise creates conducted emissions, as do other circuits with regular switching. This webinar suggests and reviews concepts that can be used as starting points to help divide and conquer your challenge. Attendees will narrow the field of focus of a complex system to a few components by learning the signatures that can help guide you during troubleshooting. 

The interpretation techniques can be used to help identify the source of the noise that is getting past the conducted emissions filter, and measured at the test lab. 

We will discuss current wave shapes vs. frequency, common mode vs. differential mode currents, and the corresponding components to control these currents in a conducted emissions filter. We will conclude with a special topic for regulated power supplies: when filters can interfere with power supply stability and cause the failing test result.
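
The distinction between common-mode and differential-mode currents is central to the troubleshooting approach described above. As a minimal numerical illustration (not material from the webinar; the signal parameters are invented), the sketch below decomposes sampled line and neutral currents into the two modes and reports where each mode has its dominant spectral content:

```python
# Minimal sketch with assumed signal parameters (not from the webinar):
# separate common-mode (CM) and differential-mode (DM) components from
# sampled line and neutral currents, then inspect their spectra.
import numpy as np

fs = 10e6                        # sample rate, 10 MHz (assumed)
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms of data

# Hypothetical currents: 150 kHz switching ripple flows differentially,
# while a 1 MHz disturbance rides on both conductors (common mode).
i_dm_true = 0.10 * np.sin(2 * np.pi * 150e3 * t)
i_cm_true = 0.02 * np.sin(2 * np.pi * 1e6 * t)
i_line = i_dm_true + i_cm_true
i_neutral = -i_dm_true + i_cm_true

# Standard decomposition: CM is the average of the two conductor
# currents, DM is half their difference.
i_cm = (i_line + i_neutral) / 2
i_dm = (i_line - i_neutral) / 2

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in (("CM", i_cm), ("DM", i_dm)):
    spectrum = np.abs(np.fft.rfft(sig)) / t.size
    print(f"{name}: dominant component near {freqs[np.argmax(spectrum)] / 1e3:.0f} kHz")
```

Knowing which mode dominates at a given frequency points to different filter components: series inductance and X capacitors for differential-mode currents, common-mode chokes and Y capacitors for common-mode currents.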

PRESENTERS:

Harry Vig, Application Engineer, Vicor Corporation

Mr. Harry Vig graduated from the University of Waterloo in Canada with a B.Sc. in Electrical Engineering in 1988. He has worked as a test engineer and design engineer in the fields of power electronics, opto-electronics, high speed networking, thermal controls and home theatre audio and video products.

 He is currently an Application Engineer at Vicor Corporation, helping customers use Vicor power electronics products successfully in their own designs.


Original Link

As Economics Improve, Solar Shines in Rural America

Sheep provided by a local 4-H club help with vegetation management at a solar array owned by the Eau Claire Electric Co-op in Wisconsin. Photo: NRECA

A five-year effort by electric cooperatives to expand the use of solar energy in rural parts of the United States is coming to a successful conclusion.

Under the Solar Utility Network Deployment Acceleration (SUNDA) program, which was run by the National Rural Electric Cooperative Association (NRECA) under a cost share arrangement with the U.S. Energy Department, rural electric co-ops are on track to own or buy 1 gigawatt of solar power generation capacity by 2019.

As of April, more than 120 co-ops had at least one solar project on line. Of those, half said they have plans to add more solar generating capacity.

The accomplishment is no small feat. The consumer-owned structure of co-ops means that they can’t make direct use of federal tax credits, which have helped to spur solar adoption among investor-owned utilities. Co-ops often have had to come up with innovative financing arrangements to make the numbers work. In particular, solar adoption has benefited from big drops in the cost of solar PV cells in recent years.

“As the cost went down, solar became more economically feasible,” says Tracy Warren, an NRECA spokesperson.

The trade group says that the first system deployed by a co-op under the SUNDA program in 2014 came in at a cost of US $4.50 per peak watt. The most recent SUNDA deployment came in at $1.30 per peak watt.

NRECA released a report in mid-July [PDF] that evaluates its SUNDA initiative and offers lessons learned.  It says that when the SUNDA project began, fewer than 1 percent of electric cooperatives had solar PV systems larger than 250 kilowatts and the average solar project in 2013 was about 25 kilowatts in size. Today, the average project exceeds 1 megawatt. 
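
As a back-of-the-envelope illustration of that price decline, the sketch below (my arithmetic; the 1-MW array size is an assumed example, roughly today's average project) converts the quoted dollars-per-peak-watt figures into total array cost:

```python
# Rough sketch converting the quoted SUNDA $/W figures into total cost
# for an assumed 1 MW (peak) array, roughly today's average project size.
costs_per_watt = {
    "first SUNDA system (2014)": 4.50,
    "most recent SUNDA system": 1.30,
}
array_size_wp = 1_000_000  # 1 MW peak (assumed example)

for label, usd_per_wp in costs_per_watt.items():
    total = usd_per_wp * array_size_wp / 1e6
    print(f"{label}: ${total:.2f} million for a 1-MW array")
```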

Liquid Battery Based on Methuselah Molecule

Harvard's new liquid battery that uses a so-called Methuselah molecule.
Photo: Harvard School of Engineering and Applied Sciences

A new liquid battery that uses a so-called Methuselah molecule could lead to long-lasting and affordable storage of renewable energy for power grids, scientists at Harvard say.

The sun and wind are clean sources of energy, but they provide power intermittently. As such, utility companies want massive rechargeable battery farms that can store the surplus energy reaped from these renewable power sources for use when the sun is not shining and the wind is not blowing.

One possible solution involves flow batteries that use liquids to store and release energy. These devices replace a conventional battery’s two solid electrodes with a pair of liquid electrolytes. These liquids, contained in separate tanks, flow into the battery’s stack, and chemical reactions between the liquids—which occur across a porous membrane—discharge or recharge the battery.
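
The appeal of that architecture is that energy capacity scales with tank volume while power is set separately by the stack. The sketch below illustrates the idea with generic, assumed numbers (a 1 molar electrolyte, 1,000-liter tanks, a single-electron redox couple, and a 1.2-volt cell), not figures for the Harvard chemistry:

```python
# Minimal sketch with assumed, generic parameters (not the Harvard
# chemistry): ideal energy stored in a flow battery's electrolyte tank.
FARADAY = 96485  # coulombs per mole of electrons

def tank_energy_kwh(conc_mol_per_l, tank_liters, electrons, cell_volts):
    """Ideal (100% utilization) energy content of one electrolyte tank."""
    charge = conc_mol_per_l * tank_liters * electrons * FARADAY  # coulombs
    return charge * cell_volts / 3.6e6  # joules -> kWh

# 1 mol/L active species, 1,000 L tank, 1-electron redox, 1.2 V cell
print(f"{tank_energy_kwh(1.0, 1000, 1, 1.2):.0f} kWh per 1,000 L of electrolyte")
# Doubling the tanks doubles the stored energy without touching the stack.
```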

Energy Storage Projects to Replace Three Natural Gas Power Plants in California

The Tesla Powerpack system. Illustration: Tesla

Energy storage could get a big boost if California officials green-light plans by utility Pacific Gas and Electric Co. to move forward with some 567 megawatts of capacity.

Included in the mix is more than 180 MW of lithium-ion battery storage from Elon Musk’s company Tesla. The Tesla-supplied battery array would be owned by PG&E and would offer a 4-hour discharge duration. The other projects would be owned by third parties and operated on behalf of the utility under long-term contracts. All of the projects would be in and around Silicon Valley in the South Bay area.

Once deployed, the storage would sideline three gas-fired power plants—the 605-MW Metcalf Energy Center, the 47-MW Feather River Energy Center, and the 47-MW Yuba City Energy Center—that lack long-term energy supply contracts with utilities. Even without the contracts, the state’s grid operator identified the units as needed for local grid reliability. It, and independent power producer Calpine, which owns the plants, asked federal regulators to label the plants as “must run.” That would let them generate electricity and be paid for it even without firm utility contracts.

Both PG&E and California’s utility regulators object to that idea. They argue that the must-run designation without firm contracts would distort the state’s power market and lead to unfair prices. Backing up their objection, regulators earlier this year directed the utility to seek offers to replace the gas-fired power plants with energy storage.

The utility says that its search prompted more than two dozen storage proposals with 100 variations. PG&E narrowed the list to four, which it presented to state regulators in late June.

One of the projects, Vistra Energy Moss Landing storage project, would be owned by Dynegy Marketing and Trade, a unit of Vistra Energy Corp. The holding company manages more than 40 gigawatts of generating capacity across 12 states. The project would be a transmission-connected, stand-alone lithium-ion battery energy storage resource in Monterey County. The facility, which would feature a 300-MW, 4-hour duration battery array, could enter service in December 2020 under a 20-year contract.

A second project, Hummingbird Energy Storage, would be owned by a unit of esVolta, a new company that is partnering with Oregon-based Powin Energy Corp. and Australia-based Blue Sky Alternative Investments. The Santa Clara County–sited resource would include a 75-MW, 4-hour-duration Li-ion battery array. It also could enter service in December 2020 and would operate under a 15-year contract.

One so-called behind-the-meter proposal also was accepted by PG&E. It came from Micronoc and would total 10 MW of 4-hour-duration storage. In practice, the project would bundle the discharge capacity of lithium-ion batteries located at multiple customer sites. That’s in step with Micronoc’s business model of developing distributed energy storage projects, most of them so far in South Korea. A 10-year service contract with PG&E could start in October 2019.

Powerpack lithium-ion batteries from Tesla would form the backbone of a 182.5-MW array with a 4-hour discharge duration. The batteries would be located at a PG&E substation in Monterey County. The array could enter service by the end of 2020 and include a 20-year performance guarantee from Tesla.
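
Because all four projects are described as 4-hour resources, their energy capacities follow directly from the quoted power ratings. A quick sketch of that arithmetic, using only the figures given above:

```python
# Convert the quoted power ratings and 4-hour durations into energy
# capacity. Project list and ratings are taken from the text above.
projects_mw = {
    "Vistra Moss Landing": 300.0,
    "Hummingbird (esVolta)": 75.0,
    "Micronoc behind-the-meter": 10.0,
    "Tesla array at PG&E substation": 182.5,
}
duration_h = 4  # all four are described as 4-hour resources

for name, mw in projects_mw.items():
    print(f"{name}: {mw:g} MW x {duration_h} h = {mw * duration_h:g} MWh")

total_mw = sum(projects_mw.values())
print(f"Total: {total_mw:g} MW ({total_mw * duration_h:g} MWh)")
```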

The regulatory mandate directing PG&E to seek energy storage proposals was not the first time California regulators acted to boost storage.

In February 2013, regulators told utility Southern California Edison to secure energy storage and other resources to meet an expected shortfall stemming from the closure of the San Onofre nuclear power plant. In that instance, the utility’s energy storage target was 50 MW. It ultimately procured more than 260 MW of storage capacity.

Then, in May 2016, regulators again directed Southern California Edison to procure storage to ease electric supply shortages that were feared as a result of a leak at a natural gas storage facility. As a result, more than 100 MW of grid-level energy storage was placed into service.

In announcing the Silicon Valley projects, PG&E sought to play up storage’s role in helping to integrate increasing amounts of renewable energy onto California’s grid.  It also cited recent decreases in battery prices as enabling energy storage to compete with “traditional solutions” such as fossil-fueled power plants.  

No cost details were provided by the utility in making its announcement. And a supporting document [PDF] justifying the four projects was scrubbed of cost details before being released to the public.

Evidence is growing, however, that battery energy storage can beat natural gas on price when it comes to a specific type of power generation known as “peaking capacity.” Known as peakers, the fast-start power plants typically are called on to generate power on days when consumer demand for electricity is highest. For most places, that’s a hot summer day when air conditioners are cranked up.

One milestone came earlier this year when Arizona Public Service signed a 15-year deal for peaking power from a solar-powered battery. The 50 MW of storage beat out other forms of peaking generation, including natural gas. A field of solar PV panels from First Solar will energize the storage array when the sun is high in the sky, allowing electricity to be delivered to customers during times of peak demand.

Over the next 15 years, Arizona Public Service says it plans to put more than 500 MW of additional battery storage capacity in place. In January 2018, Arizona utility regulator Andy Tobin proposed that the state’s utilities deploy 3,000 MW of energy storage by 2030.

Efforts to increase the amount of energy storage in places like Arizona and California received a boost in February when the U.S. Federal Energy Regulatory Commission (FERC) voted 5-0 to remove what it said were nationwide barriers that kept storage sources from taking part in various markets that are run by regional transmission organizations and independent system operators.

In a November 2016 proposal, the Obama-era FERC said that market rules designed for traditional generation resources created barriers to entry for emerging technologies such as electric storage resources. The February action was the next step in that effort to reduce or remove barriers to entry.

Original Link

Researchers Fish Yellowcake Uranium From the Sea With a Piece of Yarn

This gram of yellowcake was produced from uranium captured from seawater with modified yarn. Photo: LCW Supercritical Technologies

Researchers at the U.S. Energy Department’s Pacific Northwest National Laboratory (PNNL) and LCW Supercritical Technologies made use of readily available acrylic fibers to pull five grams of yellowcake—a powdered form of uranium used to produce fuel for nuclear power reactors—from seawater.

The milestone, announced in mid-June, follows seven years of work and a roughly US $25 million investment by the federal energy agency. Another $1.15 million is being channeled to LCW as it attempts to scale up the technique for commercial use. The effort builds on work by Japanese researchers in the late 1990s and was prompted by interest in finding alternative sources of uranium for a future time when terrestrial sources are depleted.

Nuclear power plant operators increasingly want their facilities to run for up to a century, says Gary Gill, a researcher at PNNL who led the seawater extraction effort. But within decades, he says, terrestrial sources of uranium could be depleted or prove to be too expensive for use in commercial reactors.

A 2017 assessment by the World Nuclear Association says that roughly 445 reactors worldwide, with a combined 390 gigawatts of generating capacity, require around 75,000 tons of uranium oxide concentrate each year to operate.

In the U.S., an expected surge of demand for uranium to fuel a fleet of new reactors largely dried up after the 2011 accident at Japan’s Fukushima nuclear power plant. More recently, low natural gas prices have hurt the economic case for nuclear power plants, leading some developers to scrap plans for new units and a number of utility operators to consider closing existing ones.

Before it stopped reporting on U.S. uranium reserves a decade ago, the DOE’s Energy Information Administration said terrestrial uranium reserves could meet anywhere from 10 to 23 years’ worth of demand, depending on market prices. EIA pointed out at the time that domestic U.S. uranium mining supplied around 10 percent of U.S. requirements for nuclear fuel.

Unlike terrestrial sources that can be mined at specific locations, uranium in seawater shows up in concentrations of around 3.3 parts per billion. With a total volume estimated at more than 4 billion tons, there is around 500 times more uranium in seawater than in land-based sources. As a result, the widely dispersed sea-based resource could last for thousands of years, Gill says.
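
Those numbers are easy to sanity-check. Using an outside estimate of roughly 1.4 × 10^21 kilograms for the total mass of the world's oceans (a figure not given in the article), the quoted 3.3 parts per billion implies:

```python
# Sanity check of the seawater uranium inventory. The ocean mass is an
# outside assumption; the 3.3 ppb concentration is from the article.
ocean_mass_kg = 1.4e21   # approximate mass of the world's oceans (assumed)
uranium_ppb = 3.3        # mass concentration quoted in the article

uranium_tonnes = ocean_mass_kg * uranium_ppb * 1e-9 / 1000
print(f"Dissolved uranium: about {uranium_tonnes / 1e9:.1f} billion metric tons")
# ~4.6 billion tonnes, consistent with the "more than 4 billion tons" figure.
```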

The Japan Atomic Energy Agency focused its work starting in 1999 on adsorbent fabric materials that could be suspended on a floating frame hung around 20 meters below the ocean’s surface. Using the technique, they collected about 1 kilogram (kg) of uranium as yellowcake.

In the PNNL-LCW collaboration, researchers developed an acrylic fiber to attract and then hold on to uranium that was dissolved in seawater. The uranium binding agent (called a “ligand” or, more generally, a functional group) is called amidoxime.

The adsorbent has a second functional group called a carboxylate. This group does not bind uranium, but instead makes the adsorbent hydrophilic, meaning it’s attractive to water. Fiber can be readily obtained from a local craft store, helping with the technique’s overall economics, says Chien Wai of LCW, which developed the extraction fabric and brought it to the lab for testing.  

Yellowcake production at PNNL's Marine Sciences Laboratory. Photos: PNNL

The process involves putting the fiber into a container with reagents such as hydroxylamine, then cooking it for up to 24 hours. Once cooked, the treated fiber is ready for use.

PNNL researchers have conducted three separate tests of the adsorbent’s performance, exposing it to large volumes of seawater from Sequim Bay next to its Marine Sciences Laboratory.

For each test, the research team put two pounds of fiber into a tank about the size of a hot tub, pumped seawater through it to mimic ocean conditions, and waited a month. Wai’s team at LCW then extracted the uranium from the adsorbent. From these first three tests, they produced about five grams of yellowcake, which is roughly how much a nickel weighs.

Based on its work so far, LCW has received $1.15 million in DOE grant money to help it scale up the technology for commercial use.

For that to happen, the technology needs to be able to compete with conventional mining operations. Over the past 12 years, the market price of 1 kg of uranium has ranged from more than $330 to around $40, Gill says. Recent prices have been closer to the low end. The LCW-developed extraction method will need a price of $180-$280/kg to be economical.

The economics could improve, Wai says, if the technique is used to extract other marketable metals and even to filter water to remove toxic metals such as lead. The researchers next plan to test the extraction technique in the Gulf of Mexico or off the coast of Florida. Warm water in either location will help the extraction process, compared with the colder waters of the Pacific Northwest where initial tests were carried out.

“The ability of the material to adsorb uranium depends on the water temperature,” Gill says. A water temperature increase of 20 to 30 degrees Fahrenheit should double the amount of uranium that can be extracted.

The extraction is unlikely to have any adverse environmental impacts, Gill says. No known biological need exists for uranium in seawater, so no animal or plant life is likely to be harmed through its removal.

“What we’re proposing,” Gill says, “is far greener than land-based mining operations.”

Original Link

First Semisolid Lithium Batteries to Debut This Year, in Drones

Ready-Made: SolidEnergy says its new batteries can be manufactured with existing equipment. Photo: SolidEnergy Systems

Lithium-ion batteries boast a powerful blend of energy capacity and long cycle life. But they have a dangerous tendency to burst into flames, leading to injuries, product recalls, and flight bans.

Researchers have touted solid-state lithium batteries as a safer alternative. These devices swap out flammable liquid electrolytes for an inert solid such as plastic or ceramic. But researchers have pursued solid-state battery technology for decades without coming up with any products.

Now, SolidEnergy Systems, in Massachusetts, plans to become the first company to sell such batteries. The startup says it can pack twice as much energy into its battery as a conventional lithium-ion battery of the same weight can store.

That means devices could work twice as long. For example, right now “advanced drones have sensors, cameras, and processors on board, so the battery lasts only 20 minutes, and it’s heavy,” says founder Qichao Hu. With SolidEnergy’s new battery, those drones could fly for 40 minutes or more.

The company is currently testing its batteries for drones and expects to begin selling them later this year, followed by batteries for wearables in 2019 and for electric vehicles after 2021.

In today’s batteries, a dilute solution of lithium salts serves as the electrolyte. Its job is to shuttle ions between the carbon anode and the lithium transition metal oxide cathode. Some ceramics, polymers, and glassy materials can also do that well. In addition to being safer than their liquid counterparts, these alternatives could also support a pure lithium anode, which would boost energy density.

Lithium-ion battery pioneers originally chose lithium metal for the anode in the 1980s. But lithium metal anodes quickly grow mossy whiskers called dendrites, which can reach the cathode and short the battery. So battery researchers switched to carbon for the anode.

SolidEnergy’s workaround is to coat its ultrathin anode, made of a pure lithium foil, with a mixed polymer-ceramic electrolyte, which smothers dendrite growth. Another electrolyte, a paste of lithium salts, goes on the cathode.

Weed Control: These ultrathin anodes, made of pure lithium foil, are covered in a polymer-ceramic coating to prevent harmful dendrites from sprouting. Photos: SolidEnergy Systems

The electrolyte on the cathode contains just enough solvent to make the lithium salts conduct ions at room temperature. The device is technically a semisolid battery but safer than conventional lithium-ion cells, Hu says. The battery’s energy density is about 500 watt-hours per kilogram, twice that of a conventional lithium-ion battery’s 250 Wh/kg. The downside is that it can be recharged only about 200 times, as opposed to more than 1,000 times for conventional batteries.
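
One way to see the trade-off Hu describes is to compare the two chemistries both per charge and over their full cycle life. A rough sketch, ignoring capacity fade, which is a simplification:

```python
# Compare energy per charge and approximate lifetime energy throughput
# per kilogram, using the figures quoted above and ignoring capacity fade.
cells = {
    "SolidEnergy semisolid": {"wh_per_kg": 500, "cycles": 200},
    "conventional Li-ion":   {"wh_per_kg": 250, "cycles": 1000},
}

for name, c in cells.items():
    lifetime_kwh = c["wh_per_kg"] * c["cycles"] / 1000
    print(f"{name}: {c['wh_per_kg']} Wh/kg per charge, "
          f"~{lifetime_kwh:.0f} kWh/kg over {c['cycles']} cycles")
# The semisolid cell doubles runtime per charge but delivers less total
# energy per kilogram over its shorter cycle life.
```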

“There are lots of people trying to find the 100 percent perfect solid-state approach,” Hu says. “But we think our semisolid approach is good enough.”

Other labs remain focused on that vision of the ultimate solid-state battery. Last year, John Goodenough at the University of Texas at Austin unveiled a solid glass battery. He and colleague Maria Helena Braga use a lithium-doped glassy material as the electrolyte. In their latest design, which they reported in April in the Journal of the American Chemical Society, they coat the flexible cathode with a special plasticizer solution.

One problem with solid-state batteries is that as various materials expand and contract at different rates, the batteries’ interfaces crack. The plasticizer acts as a cushion to prevent cracking, Braga says. The new battery design has twice the energy density of conventional ­lithium-ion batteries and can be recharged 23,000 times.

Recently, industry giants have also begun to invest in solid-state batteries. Honda, Nissan, and Toyota have teamed up with Panasonic Corp. to develop them for electric vehicles. But some high-profile buyouts of solid-state technology startups have sputtered. In 2015, Dyson bought University of Michigan spin-off Sakti3 with plans to develop an EV battery, while German giant Bosch bought Seeo, a solid-state polymer battery startup from Lawrence Berkeley National Laboratory (LBNL). Both companies have since deserted those technologies.

Lithium metal batteries are not easy to work with, says LBNL scientist and chemical engineer Nitash Balsara, who cofounded Seeo with other LBNL alumni in 2007. “By and large, the battery industry is really interested in safety, as long as it’s free,” he says. “I think [that’s] a mistake. There is room to develop intrinsically safe lithium batteries and give [them] to consumers.”

Balsara now has a new startup, Blue Current, which is perfecting a hybrid polymer-ceramic electrolyte. Polymers don’t conduct ions as well as ceramics, but ceramics are brittle. The hybrid “mixes the best of both worlds to stuff more energy into a battery, and it doesn’t crack when a car hits a bump,” Balsara says.

Solid-state batteries might work eventually, but they still face engineering challenges, says lithium-ion pioneer M. Stanley Whittingham, a professor of chemistry at Binghamton University, in New York. “Nothing’s going to replace lithium-ion batteries in the near future,” he says, predicting that solid and semisolid batteries will be relegated to niche markets for the next 5 to 10 years. “In the end, the challenge is how expensive they’ll be,” he says.

At $500 per kilowatt-hour, SolidEnergy’s battery is currently much pricier than conventional lithium-ion batteries, which now sell for about $200 per kilowatt-hour. But Hu expects costs to come down with large-scale manufacturing and is talking with major battery makers.

“We’re not ready for the ultimate goal of EVs yet,” Hu admits. “But we’ve met the key performance requirements for drones and are making great progress toward EV batteries.”

This article appears in the July 2018 print issue as “New Battery Tech Launches in Drones.”

Original Link

Preventing and Eliminating Lightning Damage with Surge Arrestors

Original Link

This Innovative Technology Harvests Water from Cooling Towers

By David C. Wagman

To a researcher like Kripa Varanasi, an associate professor of mechanical engineering at MIT, a big nuclear power plant similar to the one that generates electricity near Cape Town, South Africa, is a fountain of water just waiting to be tapped.

The 1,800-megawatt Koeberg Station drinks in water from the nearby Atlantic Ocean and uses it as part of its thermal cooling cycle. Just as in other industrial settings, cooling towers are an inherent part of the power plant’s steam cycle. As the reactor heats ultra-pure water to create steam to spin a turbine and generate electricity, a cloud forms and rises from the cooling towers, akin to the plume of “steam” that wafts from the brim of a hot cup of coffee.

And with Cape Town—a city of more than 4 million people—facing a nearly existential crisis due to a drought that began in 2015 and that could see the city run out of potable water as soon as next year, Varanasi sees an opportunity for his remarkably simple technology.

He and colleagues from a new startup plan to demonstrate later this year at MIT’s main power plant that much of the vapor plume can be captured and turned into drinking water. The technology is expected to work at a low cost both in terms of capital equipment and energy.

Around and Not Through

The idea of capturing water droplets from the plume of fog is nothing new. Existing systems tend to consist of little more than a screen door-type mesh structure stretched across the path of a fairly reliable fog bank. But these passive systems capture only a frustratingly small amount of water, as little as 1 to 3 percent of the plume, Varanasi says. That’s because moisture-laden air currents tend to travel around and not through the mesh screen material, carrying precious water droplets with them.

Water captured in the lab Gif: Melanie Gonick/MIT

The innovation from Varanasi’s MIT lab applies a small electrical current to ionize the air and cause the droplets to be attracted to the mesh. Once captured, the water drains into a beaker in the lab or a cistern at much larger scales. Varanasi claims that once at full scale, the approach can be efficient, productive, and able to pay for itself in as little as two years.

“We want to become a water company,” Varanasi says of his startup, Infinite Cooling. In that sense, he and his co-founders want to use industrial cooling towers as something akin to a farm where they will harvest otherwise lost water. Once captured, the water could be sold back to a host power plant and reused. Or, in a water-starved place like Cape Town, the distilled water could be delivered to a municipal water system for drinking and other domestic use.

Varanasi’s work at MIT focuses on nano-engineered surface, interface, and coating technologies. He has already co-founded two startups: LiquiGlide, to commercialize super-slippery coatings, and DropWise, to commercialize an advanced coating material that increases efficiency in power plant desalination and refrigeration systems.

His latest startup, Infinite Cooling, is working to raise around $2.5 million to scale up the water capture technology from the laboratory to commercial deployment.

Maher Damak (left) and Kripa Varanasi at MIT’s power plant. Image: Melanie Gonick/MIT

The capture system is described in a paper published 8 June in the journal Science Advances. It was co-authored by Varanasi and Maher Damak, one of his PhD students. Both are among Infinite Cooling’s co-founders.

Major Market

Varanasi and his partners point out that the electric power sector is second only to agriculture in terms of how much water it draws from lakes, rivers, and streams. For instance, he says, a 600 megawatt (MW) combined cycle gas-fired power plant with a 55-percent capacity factor drinks in the same amount of water each year as a city of 100,000 people.

To be sure, most of that water is used for “once-through cooling” and then is dumped back into the source. In practice, water flows in at one end of the plant, cools condensing equipment and other systems critical to the steam cycle, and exits at the plant’s other end.

Cooling towers are an inherent part of the process. In the tower, water that was heated as it passed through the plant is sprayed out and cools through evaporation. A fraction of the water enters the air as a visible plume from the top of the tower. Condensed fog in the plume has droplets whose average diameter is 10 micrometers, Varanasi says. These make up around 20 to 30 percent of the cooling tower exhaust and are similar to the “steam” that comes off the rim of a cup of coffee.

Close-up of water capture Gif: Melanie Gonick/MIT

The technology developed in Varanasi’s lab captures droplets at the point that they drift off of the rim. It does this by ionizing the droplet-rich fog with a beam of electrically charged particles. The charged water droplets are drawn toward the wire mesh capture screen, collect on the mesh, then drain off for collection and reuse.

“We can achieve on the order of 99 percent efficiency” in capturing the water droplets, Varanasi says. That’s a big improvement from the 1 to 3 percent collection efficiency from a screen simply stretched across a fog plume.

And the capture technique could come at a relatively trivial energy cost. For the 600-MW combined cycle reference plant, Varanasi says the annual cost of electricity to capture as much as 150 million gallons of water is around $10,000. He declined to give detailed full cost estimates, but says that a full-scale system could achieve a two- to three-year payback and still generate a nice profit for his startup.
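
That electricity figure works out to a very small energy cost per unit of recovered water, which is why the payback hinges on capital cost rather than power consumption. A quick sketch of the arithmetic, using only the numbers quoted above:

```python
# Electricity cost per unit of recovered water, from the quoted figures.
annual_electricity_usd = 10_000
annual_water_gal = 150e6

usd_per_kgal = annual_electricity_usd / (annual_water_gal / 1000)
usd_per_m3 = annual_electricity_usd / (annual_water_gal * 0.003785)  # gal -> m3
print(f"Electricity cost: ~${usd_per_kgal:.2f} per 1,000 gallons "
      f"(~${usd_per_m3:.3f} per cubic meter)")
```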

Varanasi and colleagues will test a full-scale version of their system on the cooling tower of MIT’s Central Utility Plant, a natural gas-fired cogeneration plant that provides most of the campus’s electricity, heating, and cooling. The system will start running this fall and is expected to test different variations of mesh and supporting structures.

Meanwhile, fundraising efforts for Infinite Cooling are under way as the company looks toward big power plants like Cape Town’s Koeberg Station and sees fertile ground for harvesting thirst-quenching water.

Original Link

Making Fuels with Carbon Dioxide Pulled From Air Could be Affordable

Carbon Engineering’s proposed air contactor design to capture 1 million tons of CO2 per year. Photo-illustration: Carbon Engineering

In the fight to slow down climate change, few ideas generate more hype, scorn, and desperate hope than capturing carbon dioxide from the air. Critics point to its massive expense, as much as US $1000 to extract a metric ton of carbon dioxide.

Now Harvard geoengineer David Keith has shown that direct air capture could be done on an industrial scale for as little as $100 a ton.

Carbon Engineering, the company Keith founded in 2009 and that has Bill Gates’ support, has been capturing a ton of carbon dioxide every day since 2015 at its pilot plant north of Vancouver. Keith and his team have now come up with the first-ever detailed engineering design and cost analysis of a plant that would capture 1 million tons of carbon dioxide a year. They describe their results today in the journal Joule.

Carbon Engineering’s pilot air contactor. Photo: Carbon Engineering

“There has been a lot of talk about direct air capture technology,” Keith says. “We help to bring it down to reality. This is not some kind of narrow scientific innovation. It’s not something we did in the lab. It’s the product of a tested engineering process.”

Carbon Engineering uses a bank of giant fans to draw in ambient air and push it through an aqueous solution that reacts with carbon dioxide. Heat and known chemical reactions then separate out the CO2 molecules. The company combines the captured CO2 with hydrogen produced from water electrolysis to create liquid fuels that can be used in today’s trucks and aircraft engines. The company started making these fuels at the plant in December.

Keith acknowledges that making carbon dioxide an ingredient in fuel doesn’t help to reduce the overall amount of carbon dioxide in the atmosphere. But because the carbon-neutral fuel replaces fossil fuels that would add more planet-warming gas to the atmosphere, it would make a large dent in transportation-related CO2 emissions. 

While the technology to capture carbon dioxide from air isn’t new—it was first developed in the 1950s—its use on a large scale to prevent global warming is so far unproven and unaffordable. Estimates of the technology’s cost have ranged from $50 to $1,000 per ton of carbon dioxide. But Keith and his colleagues’ number-crunching fills a vital void. “Those others were estimates made from scaling,” he says. “Our number comes from a deep effort to do the full engineering.”

In their paper, the researchers describe every step of the direct air capture process, with an energy and mass balance. For each process unit, they also identify a commercial vendor whose existing hardware can perform, or be adapted to perform, the step. Finally, they calculate a range of costs by taking into account energy use (the design relies on natural gas and electricity) and capital expenses for systems that use different chemicals, materials, and equipment designs. The result is a levelized cost of $94 to $232 per metric ton of carbon dioxide captured.

Carbon Engineering’s clean fuel, synthesized from carbon dioxide captured from the air and hydrogen split from water. Photo: Carbon Engineering

The team is now trying to finance its first commercial plant, Keith says. It will use the cheapest low-carbon energy available, be that wind or solar. This should help cut energy use and bring down costs. The fuel the plant produces will likely still cost more per gallon than fuel you can buy at your local refueling station. But even if they can bring it down to $1.05 per liter ($4/gallon), it could be economically feasible when used in vehicles that meet California’s low-carbon fuel standards. 
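
To get a feel for what the capture cost alone implies for fuel price, consider that making a liter of hydrocarbon fuel requires roughly 2.3 to 2.7 kilograms of CO2 as feedstock (an outside estimate based on the carbon content of gasoline and diesel, not a figure from the paper). A rough sketch:

```python
# Rough sketch: CO2 feedstock cost per liter of synthetic fuel.
# The kg-CO2-per-liter range is an outside estimate; the capture-cost
# range is the levelized figure reported in the Joule paper.
co2_per_liter_kg = (2.3, 2.7)
capture_cost_usd_per_ton = (94, 232)

low = co2_per_liter_kg[0] * capture_cost_usd_per_ton[0] / 1000
high = co2_per_liter_kg[1] * capture_cost_usd_per_ton[1] / 1000
print(f"CO2 feedstock cost: ${low:.2f} to ${high:.2f} per liter of fuel")
# Even before electrolysis and synthesis, capture alone would consume a
# sizeable share of the $1.05-per-liter target mentioned above.
```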

Swiss company Climeworks has already built a commercial carbon dioxide–capture plant. The facility near Zurich, which began operation last year, uses fans to draw ambient air through filters that trap carbon dioxide. The 900 metric tons of carbon dioxide that it can trap every year are sent to greenhouses to grow plants.

“The difference is really of scale,” Keith says. Climeworks’ units are smaller and their process is simpler. But they would need a large number of units for industrial scale. “So if you want to capture on small scale, theirs beats us,” he says. “But if you want industrial scale capture where you’re making 2000 barrels of fuel a day, we can get cost number down to $100/ton.”

Critics say that direct air capture is a distraction. The focus should be on keeping carbon dioxide out of the air by capturing it at power plants or using clean energy sources. But the reality is that the world will continue to burn fossil fuels for decades. Many scientists agree that we need to use all the tools we have to fight climate change. If Carbon Engineering can get carbon dioxide out of the air in a relatively cheap way, more power to them.

Original Link

Interactive: Spurring Change

Many jurisdictions have created economic incentives to reduce greenhouse gas emissions

In an effort to reduce greenhouse gas emissions, governments at the national, subnational, and regional levels have established carbon taxes or put into place emissions trading schemes. Using this interactive map, you can get a quick overview of such regulations around the world. Note that this interactive is best viewed on a larger screen.

This interactive map was built with Cesium, a JavaScript library for 3D globes and maps.

Original Link

Can Technology Reverse Climate Change?

Illustration: Francesco Muzzi/StoryTK

Do you believe that climate change is a vast left-wing conspiracy that does little more than create jobs for scientists while crippling businesses with pointless regulation? Or, quite the contrary, are you convinced that climate change is the biggest crisis confronting the planet, uniquely capable of wreaking havoc on a scale not seen in recorded history?

Many of you are probably in one camp or the other. No doubt some of you will tell us how disappointed/angry/outraged you are that we (a) gave credence to this nonsense or (b) failed to convey the true urgency of the situation. We welcome your thoughts.

In crafting this issue, we steered clear of attempting to change hearts and minds. Your views on climate change aren’t likely to be altered by a magazine article, or even two dozen magazine articles. Rather, this issue grew out of a few simple observations. One is that massive R&D programs are now under way all over the world to develop and deploy the technologies and infrastructures that will help reduce emissions of greenhouse gases. Governments, corporations, philanthropies, and universities are spending billions of dollars on these efforts. Is this money being spent wisely?

That question brings us to the next observation: The  magnitude of the challenge is eye-poppingly huge. In 2009, representatives of industrialized nations met in Copenhagen and agreed on the advisability of preventing global average temperatures from rising more than 2 °C above their preindustrial levels. In 2014, the Intergovernmental Panel on Climate Change declared that doing so would require cutting greenhouse gas emissions 40 to 70 percent from 2010 levels by midcentury. These targets then guided the Paris Agreement, in 2015.

Even before Paris, Bill Gates had declared his belief that only a series of “energy miracles” could make meaningful progress in reducing greenhouse gases.

That got us thinking: What might those “miracles” be? If they were going to enable substantial cuts within a couple of decades, they would have to be in laboratories now.

So we started looking around for these miracles. We focused on three of the largest greenhouse-gas-emitting categories: electricity, transportation, and food and agriculture. We considered dozens of promising projects and programs. Eventually we settled on the 10 projects described in this issue (and two others covered on our website).

We picked most of these projects because they seemed to hold unusual promise relative to the attention they were getting. And we threw in a couple for, well, the opposite reason. Our reporters went to see these activities firsthand, fanning out to sites in Japan; Iceland; Hungary; Germany; the Netherlands; Columbus, N.M.; Schenectady, N.Y.; LaPorte, Texas; Cambridge, Mass.; and Bellevue, Wash. They trooped up and down vertical farms. They flew in electric airplanes. They viewed entirely new microorganisms—genetically engineered with the help of robots—growing in shiny steel fermentation chambers. An algae-growing tank burbled quietly in our mid-Manhattan offices, sprouting the makings for a green-breakfast taste test.

After six months, we had soaked up some of the best thinking on the use of tech to cut carbon emissions. But what did it all suggest collectively? Could these projects, and others like them, make a real difference? We put these questions to our columnist Vaclav Smil, a renowned energy economist, who responded with an essay. Without stealing Smil’s thunder, let’s just say that they don’t call them “miracles” for nothing.

Original Link

A Critical Look at Claims for Green Technologies

Illustration: Stuart Bradford

When a bright new idea comes along, it’s easy to imagine a fantastic future for it. Perhaps the best example of this is Ray Kurzweil’s Singularity, scheduled to arrive in 2045, which will supposedly bring “immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.” Not to be left behind, a former Google X senior executive says that “everything you see in sci-fi movies is going to happen.” Not just something, mind you, but everything.

Compared with such utterly ahistorical visions, unmoored from reality, the articles gathered in this issue are actually quite tame. They promise only a long-lasting supply of affordable and clean energy—either through nuclear fission or through electricity derived from burning (yes, burning) CO2—and a surfeit of food from a variety of sources: vertical farms based in cities, crops that will need almost no fertilizer, and environmentally friendly meat substitutes.

Of course, these claims of impending innovation may be seen (although they are not labeled as such) as being largely aspirational—but the benefits would be great if even just a fraction of their goals were realized during the next generation.

At the same time, these claims should be appraised with unflinching realism. I would not presume to offer specific, in-depth critiques of proposed innovations even if I had 300 instead of three pages to work with. Instead, I will just point out some nontrivial complications pertaining to specific proposals, and above all, I will stress some fundamental systemic considerations that are too often ignored. These are not arguments against the need for some form of the techniques that are promoted here but rather cautionary reminders that many of today’s ambitions will not become tomorrow’s realities. It’s better to be pleasantly surprised than to be repeatedly disappointed.

Human beings have always sought innovation. The more recent phenomenon is this willingness to suspend disbelief. Credit this change to the effect that the electronics revolution has had on our perceptions of what is possible. Since the 1960s, there has been an extraordinarily rapid growth in the number of electronic components that we can fit onto a microchip. That growth, known as Moore’s Law, has led us to expect exponential improvements in other fields.

However, our civilization continues to depend on activities that require large flows of energy and materials, and alternatives to these requirements can’t be commercialized at rates that double every couple of years. Our modern societies are underpinned by countless industrial processes that have not changed fundamentally in two or even three generations. These include the way we generate most of our electricity, the way we smelt primary iron and aluminum, the way we grow staple foods and feed crops, the way we raise and slaughter animals, the way we excavate sand and make cement, the way we fly, and the way we transport cargo.

Some of these processes may well see some relatively fast changes in decades ahead, but they will not follow microchip-like exponential rates of improvement. Our world of nearly 8 billion people produces an economic output surpassing US $100 trillion. To keep that mighty engine running takes some 18 terawatts of primary energy and, per year, some 60 billion metric tons of materials, 2.6 billion metric tons of grain, and about 300 million metric tons of meat.

Any alternatives that could be deployed at such scales would require decades to diffuse through the world economy even if they were already perfectly proved, affordable, and ready for mass adoption. In fact, those three critical prerequisites are notably absent from nearly all of the innovations presented in this issue.

Most of the articles do acknowledge that difficulties lie ahead, but the overall impression is one of an accelerating advance toward an ever more remarkable future. That needs some tempering. Today, we can fly for up to an hour in a two-seat, battery-powered trainer plane; in a decade, perhaps we’ll fly in a battery-assisted regional hybrid plane. The savings in energy use and in carbon emissions will be modest—and we are a very long way from all-electric intercontinental airliners.

The traveling-wave nuclear-fission reactor has many obvious advantages over the dominant pressurized water reactor, including remarkably safe operation and the ability to use spent nuclear fuel. But our experience with developing fast-breeder reactors, which are cooled with molten sodium, indicates how extraordinarily challenging it can be to translate an appealing concept into a commercially viable design. Experimental breeder prototypes in the United States, France, and Japan were all shut down many years ago, after decades of development and billions of dollars spent.

Vertical farms in cities can produce—profitably—hydroponically grown leafy greens, tomatoes, peppers, cucumbers, and herbs, all with far less water than conventional agriculture requires. But the produce contains merely a trace of carbohydrates and hardly any protein or fat. So they cannot feed cities, especially not megacities of more than 10 million people. For that we need vast areas of cropland planted with grains, legumes, and root, sugar, and oil crops, the produce of which is to be eaten directly or fed to animals that produce meat, milk, and eggs. The world now plants such crops in 16 million square kilometers—nearly the size of South America—and more than half of the human population now lives in cities. The article in this issue acknowledges that vertical farms can’t substitute for much farmland, and that the claims made for it have been exaggerated.

Crops that get their nitrogen by fixing it from the air would largely eliminate the need for synthesizing and applying the most important plant macronutrient. Today, only legumes (and some cultivars of sugar cane) coexist with symbiotic nitrogen-fixing bacteria; imparting this symbiotic ability to staple grains would be a feat rivaling the outcome of a long evolutionary process. But symbiosis does not come free, and bacterial nitrogen fixation is not as reliable as fertilizer application. Legumes pay a considerable price for sharing their photosynthetic products with bacteria. The average yield of U.S. corn is now about 11 metric tons per hectare, and it needs about 160 kilograms of nitrogen per hectare; U.S. soybeans yield 3.5 metric tons per hectare while receiving only a small, supplemental application of about 20 kg of nitrogen per hectare. When at last we make grain crops symbiotic with nitrogen-fixing bacteria, will they maintain their high yields? And how uniformly will future engineered microbes perform in different soils and climates, and with different crops?

Meat substitutes and cultured meat are meant to reduce the environmental burdens associated with meat production. But a better, less burdensome solution would simply be to moderate our eating of meat. Good nutrition does not require consuming roughly your own body weight in meat every year—about 100 kg per capita in some developed countries, such as the United States. Producing just 30 kg per year for 8 billion people could be done with well-managed grazing and by feeding herds the residues from crop and food processing, together with some of the enormous quantity of food that's now wasted.
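
The scale of that 30-kilogram alternative can be checked against the production figures given earlier in this essay; the sketch below assumes nothing beyond those numbers.

```python
# Global meat demand if everyone ate 30 kg per year, versus today's output.
population = 8e9                      # people
per_capita_kg = 30                    # kg of meat per person per year
total_mt = population * per_capita_kg / 1_000 / 1e6   # million metric tons per year
print(f"~{total_mt:.0f} million metric tons per year, versus ~300 million produced today")
```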

Using emitted carbon dioxide in fuel cells and burning fuel in supercritical CO2 to run turbines constitute the latest in an increasing array of techniques aimed at reducing emissions of the leading greenhouse gas. These efforts at carbon capture and storage began decades ago and have increased since 2000, but all operating projects and those under construction have an annual capacity equal to just 0.3 percent of annual emissions from stationary sources (less than 40 million metric tons compared with some 13 billion metric tons). This is another perfect illustration of the scale of the challenge. All of the carbon-capture projects now scheduled to start operating at various dates during the 2020s would not even double today's minuscule rate of carbon capture.
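
The 0.3 percent figure follows directly from the two quantities in parentheses; a minimal check:

```python
# Share of stationary-source CO2 emissions covered by current carbon-capture capacity.
captured_t_per_yr = 40e6        # metric tons CO2 per year (operating plus under construction)
stationary_t_per_yr = 13e9      # metric tons CO2 per year from stationary sources
print(f"captured share: {captured_t_per_yr / stationary_t_per_yr:.1%}")   # ~0.3%
```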

Electric vehicles are the latest darling of the media, but they run into two fundamental constraints. EVs are meant to do away with automotive carbon emissions, but they must get their electricity somehow, and two-thirds of electricity worldwide still comes from fossil fuels. In 2016, electricity produced by wind and photovoltaic solar still accounted for less than 6 percent of world generation, which means that for a long time to come the average electric vehicle will remain a largely fossil-fueled machine. And by the end of 2017, worldwide cumulative EV sales just topped 3 million, which is less than 0.3 percent of the global stock of passenger cars. Even if EV sales were to grow at an impressive rate, the technology will not eliminate automotive internal combustion engines in the next 25 years. Not even close.

Battery- or fuel-cell-powered designs for small ferries and river barges (see “The Struggle to Make Diesel-Guzzling Cargo Ships Greener”) offer a transport capability orders of magnitude below what’s required to propel the container ships that maritime trade depends on. Compare these little boats with the behemoths that move containers from the manufacturing centers of East Asia to Europe and North America. The little electric vessels travel tens or hundreds of kilometers and need the propulsion power of hundreds of kilowatts to a few megawatts; the container ships travel more than 10,000 kilometers, and their diesel engines crank out 80 megawatts.

Battery-powered jetliners fall into the same category: The big plane makers have futuristic programs, but hybrid-electric designs cannot quickly replace conventional propulsion, and even if they did, they wouldn’t save vast amounts of carbon emissions. If you compare a small, battery-powered trainer with a Boeing 787 and multiply capacity (2 versus 335 people), speed (200 vs. 900 kilometers per hour) and endurance (3 vs. 17 hours), you’ll see that you need batteries capable of storing three orders of magnitude more energy for their weight to allow for all-electric intercontinental flight. Since 1950, the energy density of our best batteries has improved by less than one order of magnitude.
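
That order-of-magnitude estimate is simply the product of the three ratios listed; a minimal sketch of the arithmetic:

```python
# Rough ratio of onboard energy needed: Boeing 787 mission versus two-seat electric trainer.
capacity_ratio  = 335 / 2      # people carried
speed_ratio     = 900 / 200    # cruise speed, km/h
endurance_ratio = 17 / 3       # hours aloft
energy_ratio = capacity_ratio * speed_ratio * endurance_ratio
print(f"~{energy_ratio:,.0f} times more onboard energy")   # roughly 4,000x, i.e. 3+ orders of magnitude
```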

The human craving for novelty is insatiable, and in small matters it can be satisfied in no time at all, particularly when Moore's Law can help. It took a single decade to come up with entirely new mobile phones. But you just can't replicate that pace of adoption with the techniques that form the structure of modern civilization—growing food, extracting energy, producing bulk materials, or providing transport on mass scales. While it is easy to extol—and to exaggerate—the seductive promise of the new, its coming will be a complicated, gradual, and lengthy process constrained by many realities.

This article appears in the June 2018 print issue as “It’ll Be Harder Than We Thought to Get the Carbon Out.”


Original Link

The Green Promise of Vertical Farms


Photo: Plenty Unlimited Wall of Plenty: A wall of basil is bathed in light from LED tubes, which are optimized for this particular crop by Plenty Unlimited’s proprietary machine-learning algorithms.


I emerge from the Tokyo Monorail station on Shōwajima, a small island in Tokyo Bay that’s nestled between downtown Tokyo and Haneda Airport. Disoriented and dodging cargo trucks exiting a busy overpass, I duck under a bridge and consult the map on my phone, which leads me deeper into a warren of warehouses. I eventually find Espec Mic Corp.’s VegetaFarm, in a dilapidated 1960s office building tucked between a printing plant and a beer distributor. Stepping inside the glass-walled lobby on the second floor, I see racks upon racks of leafy green lettuce and kale growing in hydroponic solutions of water and a precisely calibrated mix of nutrients. Energy-efficient LEDs emit a pinkish light within a spectral range of 400 to 700 nanometers, the sweet spot for photosynthesis.

I’m here to find out how plant factories, called vertical or indoor farms in Western countries, can help reduce the greenhouse gas emissions associated with conventional field agriculture. According to the World Bank, 48.6 million square kilometers of land were farmed worldwide in 2015. Collectively, agriculture, forestry, and other land uses contributed 21 percent of global greenhouse gas emissions, per a 2017 report from the Food and Agriculture Organization of the United Nations, mostly through releases of carbon dioxide, methane, and nitrous oxide.

Vertical farms avoid many of these emissions, despite the fact that they rely on artificial light and have to be carefully climate-controlled. Indeed, according to vertical farming evangelist Dickson Despommier, who's widely credited with taking the fledgling industry mainstream, these kinds of farms could significantly reduce the amount of land devoted to farming and thereby make a serious dent in our climate change problem.

“What if every city can grow 10 percent of its food indoors?” he asks, and then answers himself: That shift could free up 881,000 km2 worth of farmland, which could then revert to hardwood forest. That’s enough, Despommier claims, “to take 25 years’ worth of carbon out of the atmosphere.” He adds that Japan, which began experimenting with plant factories in the 1980s, is now the world’s leader, and most of those farms lie near or within cities. As he noted in his 2010 book The Vertical Farm: Feeding the World in the 21st Century (Thomas Dunne Books), the vertical farm solution to anthropogenic climate change is both “straightforward” and “simple.”

But how realistic is it?

Determining what contributes to agriculture’s share of overall greenhouse gas emissions is fairly straightforward. Paul West, codirector and lead scientist of the Global Landscapes Initiative at the University of Minnesota’s Institute on the Environment, says that half of agriculture’s share of greenhouse gas emissions comes in the form of carbon dioxide from clearing forests for cattle and soy in South America and for oil palms in Southeast Asia. Another huge chunk comes from livestock and rice paddies, which release staggering quantities of methane. Nitrous oxide from fertilizer accounts for a good portion of the rest.


Photo: Harry Goldstein Working the (Indoor) Farm: A bunny-suited farmer wrangles trays of frill lettuce toward harvest in Espec Mic’s VegetaFarm.

I ask West whether vertical farms could help. He notes that the vast majority of calories produced on cropland come from grains like wheat, rice, corn, and soy, none of which are particularly good candidates for indoor farming.

And it’s true—I don’t see any rice, wheat, corn, or soy growing at the VegetaFarm in Tokyo. Instead, the 160-square-meter space is filled with a fecund profusion of leafy greens. The farm produces 1,000 heads of lettuce per day, according to Shun Kawasaki, production manager and plant scientist.

To enter the farm, Kawasaki and I don clean-room bunny suits, face masks, and rubber boots, step on a sticky mat, walk through an air shower, and exit into the plant room. Each of the six racks holds five tiers of water-filled canals on which float rafts of plants. Two bunny-suited workers tend this indoor garden, occasionally pushing a new tray of seedlings onto one end of a shelf and another ready-to-be-harvested tray off the other.

Through his mask, Kawasaki tells me that besides controlling temperature, the HVAC ducts and fans snaking under the shelving also pump in carbon dioxide to keep levels at about 1,000 parts per million, about two and a half times the typical outdoor level. Trays of lettuce and kale soak up light from LED tubes, which stay on for 16 to 17 hours per day and use up to 70 percent of the 600 to 700 kilowatt-hours consumed per day.
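
Those figures imply a rough electricity budget per head of lettuce. The sketch below uses only the numbers quoted here—1,000 heads per day, 600 to 700 kWh per day, LEDs taking up to 70 percent—so treat it as an order-of-magnitude estimate.

```python
# Electricity per head of lettuce at the VegetaFarm, from the figures in the text.
heads_per_day = 1_000
for kwh_per_day in (600, 700):
    per_head = kwh_per_day / heads_per_day
    print(f"{per_head:.2f} kWh per head, of which up to {per_head * 0.7:.2f} kWh is for the LEDs")
```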

If you have US $1 million, you can buy a medium-size VegetaFarm like this one from Espec Mic, including the racks, control systems, HVAC, and lighting. Besides the lettuce and kale, the Tokyo farm grows bok choy, mint, mizuna, and shiso, and is experimenting with basil and radishes. Lettuce grown in the field takes about 60 days from seed to harvest. In the VegetaFarm, it takes 40 days. Other plant factories claim faster rates, in the 30-day range. So instead of one to three harvests per year on a conventional farm in the middle latitudes, a plant factory can produce one harvest every month or so. And unlike field-grown lettuce, which is harvested all at once, the indoor harvest is continual and the yields extremely high, with no loss from pests or inclement weather.

Photo: Harry Goldstein Succulent Greens: Grown indoors under LEDs with seawater pumped up from 800 m deep, Mineraleaf lettuce is delicious and loaded with nutrients.

After the tour, Kawasaki gives me a sample of Espec Mic’s Mineraleaf green lettuce, freshly harvested and packaged on-site that day. One of the major benefits of plant factories is that you can tune the plant’s chemical composition to engineer its nutrient content and flavor profile. The company grows its Mineraleaf lettuces in seawater pumped up from 800 meters, which makes for a tender, delicious leaf—maybe the tastiest lettuce I’ve ever had—that’s also dense in calcium, potassium, and magnesium. A 100-gram package sells for about 200 yen (about $2). On the package, where you might expect to see an image of a lush field or the Jolly Green Giant, there are photos of plants basking in pink light.

Proponents of urban indoor agriculture tout a number of benefits—such as increasing city dwellers’ access to fresh produce and revitalizing rundown warehouse districts. But the most audacious claims center on indoor farms’ environmental benefits over those of conventional field agriculture, including the elimination of pesticides and much more efficient use of water. According to Toyoki Kozai, professor emeritus at Chiba University and president of the Japan Plant Factory Association, plant factories use water 30 to 50 times as efficiently as a traditional greenhouse does. Many plant factories don’t even wash their produce. Instead, as at VegetaFarm, harvested plants go straight into packaging, and they’re clean enough to eat.

Kozai says that a vertical farm is most economically viable when its output is consumed fresh within a few kilometers of the farm itself. That cuts down on fuel for transportation and processing as well as the loss of produce en route to the consumer.

Reduction of the fuel used to transport food—known as food miles—is an obvious benefit of urban vertical farms. And yet, the carbon savings are relatively minor, says West. “Eighty percent or more of the emissions for agriculture happens on the farm—not in the processing, not in the transportation,” he says. Real reductions in greenhouse gases will come from “how we are managing our soils, how we are managing the crops on the land, the types of mechanization that’s used, the types of fertilizer.” West says he’s all for “urban gardening and vertical systems, but I don’t see it being at the scale that’s needed to meet food demand or have environmental impact on a massive scale.”

Certainly, Japan’s vertical farming industry is still tiny, despite being around for several decades. Eri Hayashi, director of international relations and consulting at the Japan Plant Factory Association, says there are 182 plant factories in Japan. One of the largest is Spread Co.’s Kameoka plant near Kyoto. Its two 900-m2 towers have a total cultivation area of 25,200 m2 and produce 21,000 heads of lettuce per day. In September, Spread will open the Techno Farm in Tokyo, which the company says will exploit advanced automation to more than double productivity, to 648 heads per square meter per year.
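
A quick check of the Kameoka figures shows the implied baseline productivity and why the 648-heads figure reads as an annual one; the numbers below come only from this paragraph.

```python
# Implied annual productivity of Spread's Kameoka plant.
heads_per_day = 21_000
cultivation_m2 = 25_200
per_m2_per_year = heads_per_day * 365 / cultivation_m2
print(f"~{per_m2_per_year:.0f} heads per square meter per year")   # ~304; doubling lands near 648
```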

Photo: Harry Goldstein LEDs Are Key to Growth: The Japan Plant Factory Association runs a prototype plant factory on the Chiba University campus. Here it researches different crops as well as lighting systems from the likes of Advanced Agri, Future Green, Kyocera, Philips, Showa Denko, and Toshiba.

According to a 2014 market study [PDF] by the Yano Research Institute, total revenue for the Japanese vertical farm industry was 3.4 billion yen ($31.2 million) in 2013. Japan’s domestic market for vegetables that year was 2,253 billion yen, according to the Statistics Bureau of the Japan Ministry of Internal Affairs and Communications, which means that vertical farms accounted for a scant 0.15 percent of the country’s vegetable market.

Still, interest in vertical farms has never been higher. Outside of Japan, the most active markets are China, Taiwan, and the United States, Hayashi says. As the concept has spread, new hybrids have sprung up. These include indoor aquaponics farms, where fish poop fertilizes the plants, and the “aeroponics”-based AeroFarms in Newark, N.J., which employs proprietary spray nozzles to mist plant roots with water and nutrients. A recent white paper by investment firm Newbean Capital counted 56 commercial warehouse, aquaponics, and rooftop greenhouse farms in the United States in 2017, up from 15 in 2015, and notes that at least three 6,500-m2 farms are under construction.

One of the biggest farms slated to open this year is Plenty Unlimited’s 9,300-m2 facility located just south of Seattle. Unlike most indoor farms, which grow trays of plants on multilevel racks, Plenty will grow its plants “on the vertical plane,” says Nate Storey, Plenty’s cofounder and chief science officer. “Imagine rows of towers with product growing on either side of them,” he explains. “That orientation allows us to put about three times more product into a given space than we could if we stacked it.”


Photo: Plenty Unlimited Over the Rainbow Chard: A worker tends to double-sided walls of rainbow chard at one of Plenty Unlimited’s facilities. Tubes of LEDs tuned for these specific plants light the leaves from multiple angles, something conventional vertical farms are now starting to experiment with by lighting the leaves from beneath as well as above.

Plenty has attracted $200 million in investment from SoftBank chief Masayoshi Son’s Vision Fund as well as from funds that invest for Amazon’s Jeff Bezos (who also owns Whole Foods), Bloomberg.com reported. With Bezos involved, Plenty could be positioned to do what no other indoor farming company has been able to do so far: grow produce indoors on a global scale.

Plenty’s betting big on a suite of technologies that Storey believes can usher in a new era of farming, one powered by renewable energy and lit by LEDs, with sensor networks collecting tens of thousands of data points that feed into machine-learning algorithms to optimize growing conditions for particular plants at specific stages of their life cycle.

“We came to realize that the future of these farms really rests in the hands of artificial intelligence,” Storey tells me. “We’re trying to improve both the amount that we can produce for a given cost or unit of energy as well as the quality of that product.”

Plenty will start with greens and herbs, but in the next 12 to 18 months the company intends to branch out into fruits that until now have been grown indoors only experimentally. “I think that the industry as a whole will be surprised at the speed with which we begin to introduce crops that have historically only been viable in the field,” Storey says.

He is also concerned with extending the shelf life of produce, along with saving food miles. “Half of what people are buying they’re just chucking in the trash, right? That is a huge carbon cost,” says Storey. “By delivering something that’s superfresh, that has two weeks more of shelf life in your fridge than something you bought that was transported a very long way, we basically chop the carbon cost in half…. If we can get consumers to eat everything that they buy, we’ve done a lot better.”

But can superfresh lettuce save a forest? Despommier’s thesis relies on converting farmland back to hardwood forest. To free up 881,000 km2 of land, you’d need an area equivalent to Spread’s 25,200-m2 Kameoka plant multiplied by 35 million. Despommier’s vision for skyscraper-scale vertical farms coupled with much shorter growing seasons could certainly cut that number from millions to tens of thousands, but even an optimist can’t imagine such a building boom within this century. And in the unlikely event such a boom were to ensue, you’d be getting only a small percentage of the vegetables and fruits grown on traditional farms and none of the wheat, corn, soy, or rice, at least not in the foreseeable future. Nor will vertical farms raise livestock or grow oil palms, which are mainly what people are clearing hardwood forests to make room for.
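
The 35 million figure is a straight floor-area swap—before any credit for stacking floors or for faster harvests—and takes one line to verify:

```python
# Number of Kameoka-size farms whose floor area equals 881,000 km2 of freed cropland.
freed_km2 = 881_000
plant_footprint_m2 = 25_200
plants_needed = freed_km2 * 1e6 / plant_footprint_m2
print(f"~{plants_needed / 1e6:.0f} million plants")   # ~35 million
```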

As West puts it, “We have heard that people can’t live on bread alone. Well, if they can’t do that, they’re not going to live on kale either as their main source of calories.”

If not kale, then what? Neil Mattson, an associate professor of plant science at Cornell University, in New York, has been looking into which crops make the most sense to grow indoors. He’s the principal investigator on a $2.4 million grant from the National Science Foundation, and he and his team are analyzing how plant factories stack up against field agriculture “in terms of energy, carbon, and water footprints, profitability, workforce development, and scalability.” It is probably the best-funded, most comprehensive study on indoor farms to date, one that will help quantify how much they can mitigate climate change.

Mattson notes, for example, that it makes no sense to grow wheat indoors. His Cornell colleague Lou Albright looked at the lighting costs of vertical farms for a 2015 presentation [PDF], and he calculated that if you grew wheat indoors, just the electricity cost per loaf of bread made from that wheat would be $11.

“Lou Albright would say indoor production like that doesn’t really make sense until you get completely renewable energy,” Mattson says.

For its part, Plenty is committed to integrating renewable energy sources into its power mix. The Seattle facility will source hydroelectric power. But to make a dent in, say, methane emissions by moving rice cultivation indoors, the amount of renewable energy you’d need would be truly massive: Of the 48.6 million km2 of land being farmed, 1.61 million km2 are devoted to rice cultivation.

Plenty’s Storey isn’t daunted. “We can grow things like rice and wheat and sorghum. We can grow commodities,” he says. “It doesn’t work for us right now, but I wouldn’t rule out a future in which it starts to make sense.”

It may well be that before that future arrives, we’ll be growing more of our food in plant factories. Will it be the 10 percent that Despommier hopes for? Even if plant factories and vertical farms wind up being only a small part of the overall solution to reducing greenhouse gas emissions in the near term, they might be our insurance policy. As climate change starts to erode the viability of croplands, we may be forced to grow indoors, where the climate is still under our control.


Original Link

TerraPower’s Nuclear Reactor Could Power the 21st Century


Photo: TerraPower Pipe Dream: Sodium-cooled nuclear reactors have a history of lackluster performance, but TerraPower believes it can build one that will work. Testing the flow of molten sodium through the reactor assembly is crucial. Water shares many of the same flow characteristics as the toxic metal and is a viable substitute for tests.


Table tennis isn’t meant to be played at Mach 2. At twice the speed of sound, the ping-pong ball punches a hole straight through the paddle. The engineers at TerraPower, a startup that has designed an advanced nuclear power reactor, use a pressurized-air cannon to demonstrate that very point to visitors. The stunt vividly illustrates a key concept in nuclear fission: Small objects traveling at high speed can have a big impact when they hit something seemingly immovable.

And perhaps there is a larger point being made here, too—one about a small and fast-moving startup having a big impact on the electric-power industry, which for many years also seemed immovable.

In a world defined by climate change, many experts hope that the electricity grid of the future will be powered entirely by solar, wind, and hydropower. Yet few expect that clean energy grid to manifest soon enough to bring about significant cuts in greenhouse gases within the next few decades. Solar- and wind-generated electricity are growing faster than any other category; nevertheless, together they accounted for less than 2 percent of the world’s primary energy consumption in 2015, according to the Renewable Energy Policy Network for the 21st Century.

To build a bridge to that clean green grid of the future, many experts say we must depend on fission power. Among carbon-free power sources, only nuclear fission reactors have a track record of providing high levels of power, consistently and reliably, independent of weather and regardless of location.

Yet commercial nuclear reactors have barely changed since the first plants were commissioned halfway through the 20th century. Now, a significant fraction of the world’s 447 operable power reactors are showing their age and shortcomings, and after the Fukushima Daiichi disaster in Japan seven years ago, nuclear energy is in a precarious position. Between 2005 and 2015, the world share of nuclear in energy consumption fell from 5.73 to 4.44 percent. The abandonment of two giant reactor projects in South Carolina in the United States and the spiraling costs of completing the Hinkley Point C reactor in the United Kingdom, now projected to cost an eye-watering £20.3 billion (US $27.4 billion), have added to the malaise.

Elsewhere, there is some nuclear enthusiasm: China’s 38 reactors have a total of 33 gigawatts of nuclear capacity, and the country has plans to add an additional 58 GW by 2024. At the moment, some 50 power reactors are under construction worldwide. These reactors, plus an additional 110 that are planned, would contribute some 160 GW to the world’s grids, and avoid the emission of some 500 million metric tons of carbon dioxide every year. To get that kind of cut in greenhouse gases in the transportation sector, you’d have to junk more than 100 million cars, or roughly all the passenger cars in France, Germany, and the United Kingdom.
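
The car comparison can be sanity-checked with one assumed number: a typical passenger car's annual CO2 output, taken here as roughly 4.5 metric tons. That per-car figure is an assumption for illustration, not a number from the article.

```python
# How many cars' worth of CO2 is 500 million metric tons per year?
avoided_t_per_yr = 500e6      # t CO2/yr avoided by the planned reactors (from the text)
per_car_t_per_yr = 4.5        # assumed t CO2 per passenger car per year
print(f"~{avoided_t_per_yr / per_car_t_per_yr / 1e6:.0f} million cars")   # ~110 million
```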

Against this backdrop, several U.S. startups are pushing new reactor designs they say will address nuclear’s major shortcomings. In Cambridge, Mass., a startup called Transatomic Power is developing a reactor that runs on a liquid uranium fluoride–lithium fluoride mixture. In Denver, Gen4 Energy is designing a smaller, modular reactor that could be deployed quickly in remote sites.


Photo: Michael Koziol Hardcore Testing: The full-scale reactor-core test assembly is more than three stories tall.   

In this cluster of nuclear startups, TerraPower, based in Bellevue, Wash., stands out because it has deep pockets and a connection to nuclear-hungry China. Development of the reactor is being funded in part by Bill Gates, who serves as the company’s chairman. And to prove that its design is viable, TerraPower is poised to break ground on a test reactor next year in cooperation with the China National Nuclear Corp.

To reduce its coal dependence, China is racing to add over 250 GW of capacity by 2020 from renewables and nuclear. TerraPower’s president, Chris Levesque, sees an opening there for a nuclear reactor that is safer and more fuel efficient. He says the reactor’s fuel can’t easily be used for weapons, and the company claims that its reactor will generate very little waste. What’s more, TerraPower says that even if the reactor were left unattended, it wouldn’t suffer a calamitous mishap. For Levesque, it’s the perfect reactor to address the world’s woes. “We can’t seriously mitigate carbon and bring 1 billion people out of energy poverty without nuclear,” he says.

The TerraPower reactor is a new variation on a design that was conceived some 60 years ago by a now-forgotten Russian physicist, Saveli Feinberg. Following World War II, as the United States and the Soviet Union stockpiled nuclear weapons, some thinkers were wondering if atomic energy could be something other than a weapon of war. In 1958, during the Second International Conference on Peaceful Uses of Atomic Energy, held in Geneva, Feinberg suggested that it would be possible to construct a reactor that produced its own fuel.

Feinberg imagined what we now call a breed-and-burn reactor. Early proposals featured a slowly advancing wave of nuclear fission through a fuel source, like a cigar that takes decades to burn, creating and consuming its fuel as the reaction travels through the core. But Feinberg’s design couldn’t compete during the bustling heyday of atomic energy. Uranium was plentiful, other reactors were cheaper and easier to build, and the difficult task of radioactive-waste disposal was still decades away.

The breed-and-burn concept languished until Edward Teller, the driving force behind the hydrogen bomb, and astrophysicist Lowell Wood revived it in the 1990s. In 2006, Wood became an adviser to Intellectual Ventures, the intellectual property and investment firm that is TerraPower’s parent company. At the time, Intellectual Ventures was exploring everything—fission, fusion, renewables—as potential solutions to cutting carbon. So Wood suggested the traveling-wave reactor (TWR), a subtype of the breed-and-burn reactor design. “I expected to find something wrong with it in a few months and then focus on renewables,” says John Gilleland, the chief technical officer of TerraPower. “But I couldn’t find anything wrong with it.”

That’s not to say the reactor that Wood and Teller designed was perfect. “The one they came up with in the ’90s was very elegant, but not practical,” says Gilleland. But it gave TerraPower engineers somewhere to start, and the hope that if they could get the reactor design to work, it might address all of fission’s current shortcomings.

Others have been less optimistic. “There are multiple levels of problems with the traveling-wave reactor,” says Arjun Makhijani, the president of the Institute for Energy and Environmental Research. “Maybe a magical new technology could come along for it, but hopefully we don’t have to rely on magic.” Makhijani says it’s hard enough to sustain a steady nuclear reaction without the additional difficulty of creating fuel inside the core, and notes that the techniques TerraPower will use to cool the core have largely failed in the past.

The TerraPower team, led by Wood and Gilleland, first tackled these challenges using computer models. In 2009, they began building the Advanced Reactor Modeling Interface (ARMI), a digital toolbox for simulating deeply customizable reactors. With ARMI, the team could specify the size, shape, and material of every reactor component, and then run extensive tests. In the end, they came away with what they believe is a practical model of a breed-and-burn TWR first proposed by Feinberg six decades ago. As Levesque recalls, he joined TerraPower when the team approached him with remarkable news: “Hey, we think we can do the TWR now.”


Photo: Michael Koziol Fuel for Thought: Mock fuel pins (not made of radioactive uranium!) sit ready for validation tests.

To understand why the TWR stymied physicists for decades, first consider that today’s reactors rely on enriched uranium, which has a much higher ratio of the fissile isotope of uranium (U-235) to its more stable counterpart (U-238) than does a natural sample of uranium.

When a passing neutron strikes a U-235 atom, it’s enough to split the atom into barium and krypton isotopes with three neutrons left over (like that high-speed ping-pong ball punching through a sturdy paddle). Criticality occurs when enough neutrons hit enough other fissile uranium atoms to create a self-sustaining nuclear reaction. In today’s reactors, the only way to achieve criticality is to have a healthy abundance of U-235 atoms in the fuel.

In contrast, the TWR will be able to use depleted uranium, which has far less U-235 and cannot reach criticality unassisted. TerraPower’s solution is to arrange 169 solid uranium fuel pins into a hexagon. When the reaction begins, the U-238 atoms absorb spare neutrons to become U-239, which decays in a matter of minutes to neptunium-239, and then decays again to plutonium-239. When struck by a neutron, Pu-239 releases two or three more neutrons, enough to sustain a chain reaction.
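
To put timescales on that breeding chain, here is a minimal decay sketch. The half-lives are standard textbook values—about 23.5 minutes for U-239 and about 2.36 days for Np-239—not TerraPower data, and the calculation ignores ongoing neutron capture.

```python
# Fraction of each intermediate isotope that has decayed after a given time.
HALF_LIFE_HOURS = {"U-239": 23.5 / 60, "Np-239": 2.36 * 24}

def fraction_decayed(isotope, hours):
    return 1 - 0.5 ** (hours / HALF_LIFE_HOURS[isotope])

for hours in (1, 24, 24 * 7):
    print(f"after {hours:>3} h: U-239 {fraction_decayed('U-239', hours):.0%} decayed, "
          f"Np-239 {fraction_decayed('Np-239', hours):.0%} decayed")
```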

It also releases plenty of energy; after all, Pu-239 is the primary isotope used in modern nuclear weapons. But Levesque says the creation of Pu-239 doesn’t make the reactor a nuclear-proliferation danger—just the opposite. Pu-239 won’t accumulate in the TWR; instead, stray neutrons will split the Pu-239 into a cascade of fission products almost immediately.

In other words, the reactor breeds the highly fissile plutonium fuel it needs right before it burns it, just as Feinberg imagined so many decades ago. Yet the “traveling wave” label refers to something slightly different from the slowly burning, cigar-style reactor. In the TWR, an overhead crane system will maintain a reaction within a ringed portion of the core by moving pins into and out of that zone from elsewhere in the core, like a very large, precise arcade claw machine.

To generate electricity, the TWR uses a more complicated system than today’s reactors, which use the core’s immense heat to boil water and drive a steam turbine to generate usable electricity. In the TWR, the heat will be absorbed by a looping stream of liquid sodium, which leaves the reactor core and then boils water to drive the steam turbine.

But therein lies a major problem, says Makhijani. Molten sodium can move more heat out of the core than water, and it’s actually less corrosive to metal pipes than hot water is. But it’s a highly toxic metal, and it’s violently flammable when it encounters oxygen. “The problem around the sodium cooling, it’s proved the Achilles’ heel,” he says.

Makhijani points to two sodium-cooled reactors as classic examples of the scheme’s inherent difficulties. In France, Superphénix struggled to exceed 7 percent capacity during most of its 10 years of operation because sodium regularly leaked into the fuel storage tanks. More alarmingly, Monju in Japan shut down less than a year after it achieved criticality when vibrations in the liquid sodium loop ruptured a pipe, causing an intense fire to erupt as soon as the sodium made contact with the oxygen in the air. “Some have worked okay,” says Makhijani. “Some have worked badly, and others have been economic disasters.”


Photo: TerraPower Foundational Underpinnings: An engineer readies a bundle of full-size mock fuel pins to test how they’ll perform during their operational lifetime.

Today, TerraPower’s lab is filled with bits of fuel pins and reactor components. Among other things, the team has been testing how molten sodium will flow through the reactor’s pipes, how it will corrode those pipes, even the inevitable expansion of all of the core’s components as they are subjected to decades of heat—all problems that have plagued sodium-cooled reactors in the past. TerraPower’s engineers will use what they learn from the results when building their test reactor—and they’ll find out if their design really works.

The safety of the TerraPower reactor stems in part from inherent design factors. Of course, all power reactors are designed with safety systems. Each one has a coping time, which indicates how long a stricken reactor can go on without human intervention before catastrophe occurs. Ideas for so-called inherently safe reactors have been touted since the 1980s, but the goal for TerraPower is a reactor that relies on fundamental physics to provide unlimited coping time.

The TWR’s design features some of the same safety systems standard to nuclear reactors. In the case of an accident in any reactor, control rods crafted from neutron-absorbing materials like cadmium plummet into the core and halt a runaway chain reaction that could otherwise lead to a core meltdown. Such a shutdown is called a scram.

Scramming a reactor cuts its fission rate to almost zero in a very short time, though residual heat can still cause a disaster. At Chernobyl, some of the fuel rods fractured during the scram, allowing the reactor to continue to a meltdown. At Fukushima Daiichi, a broken coolant system failed to transfer heat away from the core quickly enough. That’s why the TerraPower team wanted to find a reactor that could naturally wind down, even if its safety systems failed.

TerraPower’s reactor stays cool because its pure uranium fuel pins move heat out of the core much more effectively than the fuel rods in today’s typical reactors. If even that isn’t enough to prevent a meltdown, the company has an ace up its sleeve. As Gilleland explains, the fuel pins will expand when they get too hot—just enough so that neutrons can slip past the fuel pins without hitting more Pu-239, thereby slowing the reaction and cooling the core automatically.

Because the TWR burns its fuel more efficiently, the TerraPower team also claims it will produce less waste. The company says a 1,200-MW reactor will generate only 5 metric tons of waste per gigawatt-year, whereas a typical reactor today produces 21 metric tons per gigawatt-year. If that number is right, the reactor could address the ongoing storage problem by drastically reducing the amount of generated waste, which remains highly radioactive for thousands of years. More than 60 years into the nuclear age, only Finland and Sweden have made serious progress in building deep, permanent repositories, and even those won’t be ready until the 2020s.

TerraPower plans to break ground on its test reactor next year in China. If all goes well, this reactor will be operational by the mid-2020s. But even if TerraPower’s reactor succeeds wildly, it will take 20 years or more for the company to deploy large numbers of TWRs. Thus for the next couple of decades, the world’s utilities will have no choice but to rely on fossil fuels and conventional nuclear reactors for reliable, round-the-clock electricity.

Fission will probably not be the final answer. After decades of always being 30 years away, nuclear fusion may finally come into its own. Societies will be able to depend on renewables more heavily as storage and other technologies make them more reliable. But for the coming decades, some analysts insist, nuclear fission’s reliability and zero emissions are the best choice to shoulder the burden of the world’s rapidly electrifying economies.

“I don’t think we should think about the solution for midcentury being the solution for all time,” says Jane Long, a former associate director at Lawrence Livermore National Laboratory, in California. “If I were in charge of everything, I would say, have a long-term plan to get [all of our electricity] from sunlight—there’s enough of it. For the near term, we shouldn’t be taking things with big impact off the table, like nuclear.”

As the globe warms and the climate becomes increasingly unstable, the argument for nuclear will become more obvious, Long says. “It’s got to come to the point where people realize how much we need this.”

This article appears in the June 2018 print issue as “What Will the Electricity Miracle Be?”


Original Link

Blueprints for a Miracle

Can technology slow emissions of greenhouse gases enough to halt climate change? IEEE Spectrum reporters fanned out across the globe to find out.


Original Link

Bioengineers Aim to Break Big Ag’s Addiction to Fertilizers


Photo: Ginkgo Bioworks Mutant Microbes: Joyn Bio is trying to reprogram bacteria to give them a very particular superpower: the ability to capture nitrogen from the air and give that essential nutrient to the roots of cereal plants such as corn, wheat, and rice.


Big Ag is addicted to nitrogen fertilizers. It’s a massive problem for the global climate, yet it may yield to a microscopic solution: microbes rewired to “fix” nitrogen from the air and turn it into a natural type of fertilizer that corn, wheat, and other cereal crops can use.

Until the Green Revolution changed agriculture in the mid-20th century, farmers fed cereal crops either by spreading nitrogen-rich manure on their fields or by planting a legume crop (such as beans or peas) whose root systems contain microbes that naturally nab atmospheric nitrogen, and then plowing that crop under to fertilize the cereal crop they actually wanted to grow. But these inefficient methods couldn’t begin to support the 7 billion people alive today. The Green Revolution ushered in a new era of chemical fertilizers, enabling farmers to feed the booming global population—but also creating a dangerous addiction.

And so, every year, the world’s farmers lavish on their crops some 120 million metric tons of nitrogen fertilizer made via the century-old Haber-Bosch process. This industrial operation requires high pressure and temperature, so fertilizer factories burn a lot of fossil fuel, releasing carbon dioxide right away; later, the unused fertilizer in the soil returns to the air as nitrous oxide (N2O), a gas that has 300 times as much heat-trapping power as carbon dioxide. The combined emissions from fertilizer production and use are equivalent in their effect to as much as 1.3 billion metric tons of CO2 a year.
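
A rough plausibility check on that combined figure: assume, in the style of IPCC defaults, that about 1 percent of applied nitrogen escapes as N2O. Only the 120-million-ton and 300-times figures come from the text; the loss rate and the split are illustrative assumptions.

```python
# Back-of-the-envelope N2O contribution from fertilized fields.
applied_n_t = 120e6                 # t of N applied per year (from the text)
n2o_n_loss_fraction = 0.01          # assumed share of applied N emitted as N2O-N
n2o_t = applied_n_t * n2o_n_loss_fraction * (44 / 28)   # convert mass of N to mass of N2O
co2e_t = n2o_t * 300                # warming potential ~300x CO2 (from the text)
print(f"~{co2e_t / 1e9:.2f} billion t CO2e from field N2O alone")   # ~0.6 billion t
# Fossil fuel burned in Haber-Bosch synthesis accounts for much of the rest of the ~1.3 billion t.
```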

Biologists have long sought a better, cheaper, and more environmentally friendly way to fix nitrogen. They’ve tried to make cereal crops form symbiotic relationships with nitrogen-fixing bacteria, as legumes do. They’ve tried to convince bacteria such as Azospirillum and Klebsiella to set up shop in the roots of wheat and rice plants. Yet, despite more than half a century of effort, no one has yet managed to endow any of the world’s major grains with a viable nitrogen-fixing bacterial partner.

Enter Joyn Bio, a Boston-based spinoff launched last September from two companies: Bayer CropScience, which boasts a vast library of agricultural microbes, and Ginkgo Bioworks, a pioneering biotech firm that creates custom-made bacteria for industrial applications.

Ginkgo’s Boston research hub is home to an assembly line of robots that are programmed to manufacture, read, or edit strands of DNA. Joyn’s idea is to synthesize variations on some of the genes that are believed to play a role in nitrogen fixation, including those involved in the cooperation between legumes and the bacteria specific to their root systems. Snippets of this synthetic DNA are then slotted by machines into microbes growing in rows of tiny fermentation chambers, before another set of automated tools characterizes the genetically altered microbes’ performance every which way. Synthesize, build, test, repeat.

Joyn is hoping this engineering approach to the fertilizer-replacement challenge—combined with computational power to integrate terabytes of data into predictive metabolic models—will make the company succeed where others have failed. “Our goal is to use all the tools of synthetic biology to take naturally occurring microbes that have evolved in plants, and see what we can do in the lab to create organisms that replace significant amounts of fertilizer for crop plants,” says Johan Kers, head of nitrogen-fixation research at the company. “If we can do that, we’ll have a big party.”

Photo: Ginkgo Bioworks Assembly-Line Genetics: At the Ginkgo Bioworks lab, automated equipment inserts new DNA into microbes. The “organism engineers” who oversee the robots can therefore test many genetic variants at once.

They may have a lot of guests to invite. Joyn is structured like a scrappy startup, with fewer than 20 full-time employees split between research sites in Boston and West Sacramento, Calif. But as a joint venture between Bayer—which will become the world’s largest supplier of seeds and crop chemicals after its US $62.5 billion buyout of Monsanto—and Ginkgo, one of only a handful of private biotech companies valued at more than $1 billion, the spinoff has ample resources.

The lab space and microbial production capacity comes from Ginkgo, the bacterial strains and greenhouses from Bayer. The parent companies, together with a hedge fund, also put in $100 million to bankroll Joyn’s R&D operations over the next five years. “These things all came in on day one, so we’re really hitting the ground running,” says Joyn CEO Mike Miille.

Joyn has already sequenced the genomes of around 20 different bacterial species, all of which naturally take gaseous nitrogen from the air and use enzymes to convert it into ammonia, which plants use to make DNA, proteins, and other essential building blocks of life. Some of these critters use the same well-characterized enzymes found in the nitrogen-fixing bacteria that live in the root nodules of legume plants, differing only in that they don’t naturally share their biochemical bounty with crops. Others may use weird new enzymes that have yet to be discovered because no one has ever embarked on this kind of screen. “We’re sampling the solution space for this engineering problem,” Kers says.

Once Kers and his team close in on a few enzyme-encoding genes of interest, the next steps will largely be outsourced to Ginkgo’s foundries, so named to evoke metallurgic factories that manufacture metal parts to exacting specifications. At Ginkgo HQ, these foundries form a glass-encased core that expanded late last year and now spans the length of two football fields. There, software and robotics automate much of the drudge work of organism design.

That kind of process engineering and rapid prototyping is unparalleled in the biotech industry, says Paul Miller, chief scientific officer of Synlogic, a Boston-area company that’s partnering with Ginkgo to develop microbial therapeutics for diseases. “Ginkgo is really a world leader when it comes to a massively parallel engineering capability and the ability to iterate around organism-design ideas on a large scale,” Miller says.

For Joyn, the tight relationship with Ginkgo means it can build hundreds of engineered microbes with slight variations in one or more genes. Foundry scientists, called organism engineers, can test the performance of each microbe through chemical analyses on a mass spectrometer. Kers and his colleagues can study the data, input it into their model, and order a new batch of engineered bugs. Those that look promising are shipped off to West Sacramento for further evaluation alongside corn plants raised in greenhouses and, eventually, in the fields.

Photo: Ginkgo Bioworks Growing Strong: Ginkgo’s organism engineers want to see which of the genetically altered bacteria strains are hale and hearty.

Boosters of synthetic biology think the design-build-test framework that has proven itself in Silicon Valley will also work in microbial engineering. “These are some of the smartest and most talented people in the business,” says Andrew Hessel, a biotechnologist who until recently worked as a researcher at Autodesk Life Sciences, which builds software for biological design. “People have been trying to hack this forever, and now they actually have the tools to do it for real.”

But nitrogen fixation is not merely an engineering challenge; it’s also an ecological one. “And boy, has it proven tricky,” warns Allen Good, a plant scientist at the University of Alberta, in Canada. “It’s one thing to take a piece of DNA and put it in bacteria and get it to fix nitrogen,” he says. “But to build a symbiotic relationship is so much more complex than that.”

In the root system of a bean plant, bacteria supply the plant with ammonia and receive sugar in return. Both sides profit—that’s the essence of symbiosis. But if you engineer a strain of bacteria to give ammonia away, you force it to incur a cost that nonengineered bacteria don’t shoulder. Naturally occurring microbes may therefore outcompete the engineered ones, eliminating them quickly. You could get around the problem by engineering symbiosis—by getting the plant to reciprocate—but that isn’t easy to do. Cereals and nitrogen fixers don’t play nice together. And, like a teacher trying to cajole a classroom full of selfish toddlers to share their toys, scientists have struggled to promote cooperation in the soil. “You really need to have a signal exchange between partners—between the plant and the microbe,” says Philip Poole, a plant microbiologist at the University of Oxford.

In addition to the scientific challenge of engineering microbes and plants, there’s also a societal one: Consumers are still distrustful of genetically modified organisms (GMOs) in foods, and the designation brings additional regulatory scrutiny. There are, however, ways of using the tools of synthetic biology that tiptoe up to the GMO line without crossing it—for example, by mutating bugs at random and then selecting for the best ones. That’s the strategy of Pivot Bio, one of the few other companies developing nitrogen-fixing bugs. Pivot, based in Berkeley, Calif., starts with microbes that can naturally capture airborne nitrogen but fail to do so in agricultural settings. The company characterizes these critters with all the fanciest genomic tools available, building computational models to better understand gene circuitry, then tries to breed progeny that don’t have the feedback mechanisms that normally shut off nitrogen fixation in fertilizer-rich soils.

“My team has the best synthetic biologists in the world, and they can do the craziest transgenic things out there,” says Pivot’s CEO and founder, Karsten Temme. “But we’ve put on handcuffs and said we’re not going to build transgenic microbes because that’s not culturally acceptable, and it means you have to go through a regulatory process to get approvals.”

Photo: Ginkgo Bioworks From Lab to Field: At the moment, Joyn Bio’s scientists are testing their altered bacteria in the Ginkgo lab. The most promising strains will soon be studied in the roots of corn plants growing in greenhouses and fields.

Not so Joyn. According to Brynne Stanton, head of metabolic engineering at the company, Joyn will use all the synthetic biology tools at its disposal. Only later, if Joyn succeeds in engineering a robust nitrogen-exchanging symbiosis with corn, will the company see if it’s possible to get to the same end products in a way that doesn’t get them slapped with a GMO label. “We are really starting with a blank slate,” says Stanton, as she sips from a can of coconut water that bears the words “non-GMO” on its label.

Many leading experts, even some who work with Pivot, applaud this approach. “To be able to reach a product that’s not GMO—at this point, I don’t see how that would be possible,” says Jean-Michel Ané, who studies plant-microbe interactions at the University of Wisconsin–Madison and serves on Pivot’s scientific advisory board. In his academic research, Ané is coleading a $5.1 million project called Synthetic Symbioses, which is taking the genetic engineering strategy one step further: modifying DNA of both corn and a nitrogen-fixing microbe so they’re fully reliant on each other. Others, like Luis Rubio, a biochemist at the Technical University of Madrid, are trying to cut the microbe out of the equation entirely and simply engineer the ability to fix atmospheric nitrogen into the plants themselves, which would then be self-fertilizing.

Whatever works, the initial commercial products from Joyn or its rivals will likely displace only small amounts of chemical fertilizer, maybe 10 to 20 percent, executives say. That modest reduction might help limit local impacts, such as air and water pollution, but “it wouldn’t make much of a dent in global N2O emissions unless combined with other nitrogen best-management practices,” says David Kanter, an environmental scientist at New York University who studies nitrogen pollution.

These companies have to start somewhere, though—and in Pivot’s case, that means deploying its nitrogen-producing microbes alongside traditional fertilizers in large-scale field testing taking place this growing season at farms across the U.S. corn belt. “Eventually,” says Pivot’s Temme, “we want to replace all the fertilizer.”

Miille, of Joyn, has equally lofty ambitions. “Nobody is sitting here saying this is easy. There are a whole bunch of things that are unpredictable,” he says. But, he adds, “this is really going to push the technical boundaries forward.” As his company’s organism engineers work through their design-build-test cycle, they just might find an unpredictable little microbe with big potential.

This article appears in the June 2018 print issue as “Breaking Big Ag’s Fertilizer Addiction.”


Original Link

This Power Plant Runs on CO2


Photo: Michael Thad Carter/The Forbes Collection/Contour by Getty Images CO2 Cycler: Rodney Allam [above] invented a natural-gas-burning power plant that captures its own carbon dioxide at practically no cost.


A fire breaks out in your office’s server suite. You grab an extinguisher, aim its nozzle at the blaze, and hit it with a cloud of carbon dioxide. Out goes the fire.

Flames die when doused in CO2. And yet under just the right conditions, CO2 can also sustain combustion. That counterintuitive fact is at the heart of a new power plant being built in the Houston industrial suburb of LaPorte. The natural-gas-fired plant’s novel design, from Durham, N.C.–based NET Power, uses a working mixture that is 95 percent carbon dioxide at the point of combustion. What’s more, it captures and sequesters carbon dioxide at virtually no additional cost. According to NET Power’s calculations, once the company scales up and rolls out the technology commercially, its plants should cost no more to construct and operate than a traditional natural-gas plant, which simply vents its exhaust into the atmosphere.

The key to making CO2 part of the solution instead of the problem is a strange state of matter known as a supercritical fluid. Above a certain temperature and pressure—31.1 °C, or a summer day in Phoenix, and 7.39 megapascals, or about 80 percent of the pressure at the surface of Venus—carbon dioxide turns supercritical. In that state, it can expand like a gas and yet still move with the density of a liquid; it can even dissolve things the way a liquid can. (In fact, it’s used to decaffeinate coffee.)

Supercritical CO2 can be pumped, compressed, and driven to spin a turbine with an efficiency that steam may never reach. Consequently, supercritical CO2 has been proposed and developed for decades as a credible replacement for steam in all sorts of power generation, including nuclear power and concentrated solar towers.

But in LaPorte, Texas, they’re doing something that could have way bigger consequences for climate change than adding a few—though much needed—percentage points to a solar tower’s efficiency stats. After almost a decade of development, NET Power is putting the finishing touches on its US $140 million, 50-megawatt power plant there. The grid-connected plant is being tested this year, and its backers hope to scale up to commercial deployment by 2021.

“Their technology is actually excellent technology,” says Nathan Weiland, a research engineer with the U.S. Department of Energy’s National Energy Technology Laboratory, near Pittsburgh, who specializes in supercritical CO2 power generation. “By all accounts it should work well.” If Weiland’s right, then burning fossil fuels without emitting carbon could become about as economical as burning fossil fuels in a conventional power plant without any carbon-control gear.

Supercritical carbon dioxide’s latest use is largely attributable to British inventor Rodney Allam. After a 45-year career with industrial gas manufacturer Air Products and Chemicals, where he served as director of technology development at the European division, he retired for a single weekend in 2005 and began work as a consulting engineer.

In 2009, he met the principals of 8 Rivers Capital, NET Power’s parent company, and signed on to work with its engineers on a seemingly impossible task. They were to create a technology that could burn fossil fuel without any carbon emissions and generate at an efficiency and capital cost on a par with conventional power plants. In other words, Team Allam aimed to do carbon capture for free.

Following a false start with coal, NET Power’s target became natural-gas-fueled “combined cycle” technology. A combined-cycle plant marries a gas turbine to a steam turbine. The first part burns natural gas, and the exhaust directly spins a turbine to generate electricity. The exhaust, still scorching hot, then enters a heat recovery system to generate steam, which spins a second turbine to produce more electricity.

Ordinarily, this combination delivers an efficiency of up to 52 percent (based on the total energy content of natural gas) and emits around 0.4 kilograms of CO2 per kilowatt-hour. Compare that with a new coal-fired power plant, which emits roughly 0.8 kg of CO2 per kilowatt-hour. If you simply tack on an existing carbon-capture system to a combined-cycle plant, the power needed to run the added equipment reduces the overall output by about 13 percent. Much of that energy penalty arises because the plant’s flue gas is mostly nitrogen from the air used for combustion, and separating the relatively small amount of CO2 from that huge mass of nitrogen is an energy-intensive process.
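
The 13 percent penalty translates into net efficiency as follows; a first-order sketch using only the figures above.

```python
# Net efficiency of a combined-cycle plant after adding bolt-on carbon capture.
base_efficiency = 0.52     # combined-cycle efficiency without capture
parasitic_loss = 0.13      # share of output consumed by the capture equipment
net_efficiency = base_efficiency * (1 - parasitic_loss)
print(f"net efficiency with capture: ~{net_efficiency:.0%}")   # ~45%
```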

The NET Power engineers decided they’d need a new type of power cycle, one that basically drops steam from the equation and doesn’t use air. To end up with flue gas that’s almost exclusively CO2 and water, their cycle would have to inhale 95 percent pure oxygen. This concept, called oxyfuel combustion, is at the heart of several carbon-capture schemes. Most of these schemes have drawbacks, though.

First, to deliver the nearly pure oxygen requires attaching an air-separation system to the plant, which of course takes energy to run. Second, the gas might not have enough mass to turn the turbine efficiently. Air is about 75 percent nitrogen by mass, so nitrogen is the main thing driving a typical gas turbine. Without the nitrogen’s mass, the exhaust simply doesn’t have enough momentum. If you try replacing that mass with a lot more oxygen and fuel, the combustion would be so hot that you’d need to make your turbines out of exotic—and expensive—high-temperature alloys, or risk transforming them into melted heaps of slag.


Photo: David Wagman Hot Stuff: The Toshiba-supplied turbine in NET Power’s Allam cycle plant is reinforced with special coatings and insulation for use with hot supercritical CO2. The turbine is unusually small for its power—about the size of a family minivan. The company sees that as a selling point.

Allam’s counterintuitive approach stemmed from an idea he’d had long before he started working at NET Power: Burn the fuel and oxygen in supercritical CO2 and the resulting exhaust has the necessary mass to spin the turbine. The heat of combustion expands the supercritical CO2 exhaust through a turbine, from which it exits at around 3 MPa. The hot exhaust enters a heat exchanger, which transfers the gas’s thermal energy to a supercritical CO2 stream that’s headed back to the combustor.

The turbine exhaust, meanwhile, exits the heat exchanger, having been cooled to air temperature. It falls out of its supercritical state and the water vapor produced in combustion condenses and drains away. The now highly pure CO2 stream is then compressed, cooled, and pumped up to a supercritical 30 MPa for a return trip to the combustor.

The pumping step represents one key to the cycle's performance. Allam and his team realized that if they used compression alone to pressurize the CO2 from 3 MPa all the way to about 30 MPa, the energy required would sap the cycle's overall efficiency. That's because compressing CO2, which boosts pressure by decreasing gas volume, takes more energy than pumping it, which increases pressure by adding mass. So in the Allam cycle, the CO2 is compressed to a supercritical fluid at around 8 MPa, cooled, and then efficiently pumped to 30 MPa.
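The following sketch illustrates why pressurizing a dense, cooled fluid with a pump is so much cheaper than compressing a gas the whole way. It uses deliberately crude approximations, isothermal ideal-gas compression and incompressible pumping, and real CO2 near its critical point behaves quite differently, so treat the numbers as an illustration of the idea rather than cycle design values.

```python
import math

# Very rough work estimates per kilogram of CO2 (illustrative assumptions only).
R = 8.314          # J/(mol*K), universal gas constant
M_CO2 = 0.044      # kg/mol, molar mass of CO2
T = 320.0          # K, assumed compressor inlet temperature
RHO_DENSE = 800.0  # kg/m^3, assumed density of cooled supercritical CO2

def isothermal_compression_work(p_in_mpa: float, p_out_mpa: float) -> float:
    """Ideal-gas isothermal compression work, kJ per kg of CO2."""
    return (R * T / M_CO2) * math.log(p_out_mpa / p_in_mpa) / 1000.0

def pumping_work(p_in_mpa: float, p_out_mpa: float) -> float:
    """Incompressible pumping work (volume times pressure rise), kJ per kg of CO2."""
    return (p_out_mpa - p_in_mpa) * 1e6 / RHO_DENSE / 1000.0

all_compression = isothermal_compression_work(3, 30)                      # ~139 kJ/kg
allam_style     = isothermal_compression_work(3, 8) + pumping_work(8, 30) # ~59 + ~28 kJ/kg
print(f"Compress 3->30 MPa:             {all_compression:5.0f} kJ/kg")
print(f"Compress 3->8, then pump to 30: {allam_style:5.0f} kJ/kg")
```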

After compression and pumping, most of the CO2 makes a pass through the other end of the heat exchanger to get warmed up before flowing to the combustor. But something less than 5 percent of the CO2 is siphoned off to a high-pressure pipeline for sequestration underground or other uses.

So to sum up: The Allam cycle uses its own exhaust to drive a turbine and its own compressors and pumps to sequester carbon. In other words, carbon capture is integral to the process.

“I am absolutely confident what we have will work,” says Allam. “It’s pretty well standard equipment, and there is nothing innovative in the turbine.”


Photo: David Wagman Waiting to Connect: The LaPorte plant will generate 50 megawatts and connect to the grid when it comes online in 2021.

In LaPorte, NET Power’s low-profile plant sits next to an air-separation facility owned by the French firm Air Liquide. You reach it by a gravel road that splits a screen of trees and overgrowth. On a foggy January morning, the plant’s most imposing feature is its bank of cooling towers, conventional fare for an industrial plant. A pair of construction cranes idle nearby.

Overshadowed in the maze of piping and structural steel is the turbine itself, which is exceptional mostly for its small size. The combination of high pressure and high temperature means the footprint of the turbine is not much larger than that of a family minivan. (A conventional turbine with about the same output is the size of a city bus, by comparison.) Project backers are counting on the smaller footprint to help improve the venture’s overall economics.

NET Power’s business plan calls for selling some of the industrial gases that are produced. In particular, nitrogen pulled from the air-separation unit could be sold to fertilizer plants, and other trace gases could go to chemical production and welding.

The plan also depends on the captured carbon dioxide being used productively. Right now, one growing market for CO2 is using it to drive hard-to-access oil out of the ground. Pipelines already stretch through Colorado, New Mexico, and Texas to deliver naturally occurring supplies of the gas to oil companies. Occidental Petroleum Corp.’s CO2 use in West Texas alone amounts to about 100,000 metric tons a day, equal to the output of 45 NET Power plants, the oil company says.

While it might seem counterproductive to use a zero-emission technology to help push more fossil fuel out of the ground, experts point out that even in a low-carbon world, we'll still need oil as the key feedstock for the petrochemical and plastics industries. And it may be only a matter of time before there is a price for carbon that will make such uses less economical. Already in the United States, a recently passed law offers tax credits worth $50 per metric ton of CO2 buried but only $35 per metric ton used for oil production.

Assuming such industrial gas sales materialize, the net effect would trim the NET Power plant's levelized cost of electricity (LCOE), a proxy for the price that the plant must receive for its output over its lifetime to break even. On paper, the first plant should have a base LCOE of around $50 per megawatt-hour, according to NET Power. That puts it in the same ballpark as a combined-cycle power plant with no CO2 capture. Once multiple plants are built to scale and with all industrial gas sales factored in, the company forecasts LCOE to fall to around $42/MWh (even without the new tax credits).
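Levelized cost of electricity is straightforward to estimate once the capital, operating, and fuel assumptions are fixed. The sketch below shows the mechanics with purely hypothetical inputs; it is not a reconstruction of NET Power's own model.

```python
def lcoe_per_mwh(capex_per_kw: float, fixed_om_per_kw_yr: float,
                 fuel_per_mwh: float, capacity_factor: float,
                 discount_rate: float, lifetime_yr: int,
                 plant_kw: float = 300_000) -> float:
    """Simple levelized cost of electricity in $/MWh (no taxes, no degradation)."""
    # Capital recovery factor annualizes the up-front investment.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    annual_mwh = plant_kw * capacity_factor * 8760 / 1000
    annual_cost = (capex_per_kw * crf + fixed_om_per_kw_yr) * plant_kw
    return annual_cost / annual_mwh + fuel_per_mwh

# Hypothetical inputs for a gas-fired plant (not NET Power's published figures)
example = lcoe_per_mwh(capex_per_kw=1000, fixed_om_per_kw_yr=25,
                       fuel_per_mwh=22, capacity_factor=0.85,
                       discount_rate=0.08, lifetime_yr=25)
print(f"Illustrative LCOE: {example:.0f} $/MWh")
```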

As a result, NET Power’s backers say that by the time its 30th plant is on line, the technology could match the construction cost and efficiency—about $1,000 per kilowatt and 50 percent—of today’s combined-cycle power plants. And with no on-site emissions.

But technology doesn’t stand still, and existing turbine designs are evolving to address carbon emissions. For example, in March, Gasunie, Statoil, and Vattenfall tapped Mitsubishi Hitachi Power Systems to convert a 440-MW combined-cycle power plant in the Netherlands to burn hydrogen by 2023. Mitsubishi claims that a gas turbine it is developing has used a 30 percent hydrogen fuel mix. That mixture resulted in a 10 percent reduction in CO2 emissions, compared with natural-gas-fired power generation.

With Allam's technology only now being tested for the first time, it's probably too early to speak with confidence about price, says a chief technology officer with a global turbine manufacturer who asked not to be named. For one thing, the CTO says that air separation is an expensive, mature technology with little opportunity to eke out much cost savings.

NET Power acknowledges that the air-separation unit is a big expense. But it claims that the cost is largely a wash because the Allam cycle cuts out multiple pieces of equipment that otherwise would be needed to run a conventional combined-cycle plant.

The CTO also marvels at the high pressure that the Allam cycle operates under and its impact on the turbine. “It’s an order of magnitude higher than conventional technology,” he says. NET Power CEO Bill Brown responds that Toshiba showed no concern over the high turbine pressures and that the most substantive modifications to the turbine design were to provide additional coatings on the turbine blades and more layers of insulation.

NET Power is not alone in exploring supercritical CO2 for energy production. Another approach is called indirect firing, explains the DOE's Weiland. Indirect firing takes heat from a standard gas turbine's exhaust and uses it to heat pressurized supercritical CO2, which then drives a turbine in place of the steam in an otherwise conventional bottoming cycle. That added step should boost efficiency by 2 to 4 percentage points over that of a regular steam turbine. So a 550-MW supercritical CO2 plant would save enough fuel to energize 17,500 to 35,000 more homes per year than a 550-MW state-of-the-art steam power plant, based on average annual U.S. household use of 10,800 kWh.
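Here is a hedged reconstruction of that homes arithmetic. The baseline efficiency and capacity factor below are assumptions of mine rather than figures from the article; with slightly different values the answer shifts, which is presumably why a range is quoted.

```python
# Extra homes served by a 2-4 percentage-point efficiency gain on a 550-MW plant
# burning the same fuel. Baseline efficiency and capacity factor are assumed.

PLANT_MW = 550
BASE_EFFICIENCY = 0.45      # assumed state-of-the-art steam plant efficiency
CAPACITY_FACTOR = 0.85      # assumed fraction of the year at full output
HOME_KWH_PER_YEAR = 10_800  # average annual U.S. household use (from the article)

def extra_homes(efficiency_gain: float) -> int:
    # Same fuel input, higher efficiency: proportionally more electricity out.
    extra_mw = PLANT_MW * efficiency_gain / BASE_EFFICIENCY
    extra_kwh_per_year = extra_mw * 1000 * 8760 * CAPACITY_FACTOR
    return round(extra_kwh_per_year / HOME_KWH_PER_YEAR)

print(extra_homes(0.02), "to", extra_homes(0.04), "additional homes")
# -> roughly 17,000 to 34,000, broadly consistent with the range quoted above
```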

Akron, Ohio–based Echogen Power Systems has designed an 8-MW generator that uses supercritical CO2 to turn waste heat derived from a gas turbine or engine into electricity. A heat exchanger drives the CO2 conversion, and the process yields roughly 20 percent more power than a gas turbine alone.

Supercritical CO2 technology is also being explored at STEP, or Supercritical Transformational Electric Power, an $80 million pilot facility funded by the DOE. The pilot plant, rated at about 10 MW, will be an indirect-fired system based on the Brayton cycle (the same thermodynamic cycle used in gas turbines and jet engines). Researchers will use the facility to test components and technologies such as heat exchangers, compressors, and turbines for use with supercritical CO2. The San Antonio, Texas, facility is slated to open in 2019.

Meanwhile, NET Power’s parent, 8 Rivers, and its partners are working on a variation of the Allam cycle that would run on syngas (carbon monoxide and hydrogen) derived from coal. The goal is to build a 100- to 300-MW power plant by the early 2020s, most likely in North Dakota, where the CO2 could be used to push oil out of the Bakken Formation.

Even as testing gets under way in LaPorte, NET Power is scouting sites for a commercial-scale facility. CEO Brown says that eight locations are under consideration, in the United States, the United Kingdom, Qatar, and the United Arab Emirates. The key will be local demand for industrial gases and pipeline-quality CO2 for oil production. Greenhouse gas emission rules like those in the European Union may favor the Allam cycle, too.

Expectations are high for success at LaPorte, and Brown is eager to see precisely what this potentially revolutionary technology can do.


Original Link

New Tech Could Turn Algae Into the Climate’s Slimy Savior



“Those are the kinds of bubbles we want to see,” says Rebecca White, pointing to tiny pockets of gas that are barely visible on the surface of an artificial pond in New Mexico. Their small size means the carbon dioxide she and her colleagues have injected into the water has mostly dissolved, instead of just escaping into the air.

We’re less than a kilometer from the U.S.–Mexican border, surrounded by desert grasslands. It’s one of the last places I’d expect to find a thriving population of marine algae. Yet here in this massive pool swirls more than a million liters of Nannochloropsis, a salt-loving alga that flourishes on the brackish water pumped from below.

Until 1973, long before the algae moved in, farmers grew cotton on this land. Then the well water they were using became so salty that they let the fields go fallow and never planted again. Algae’s ability to prosper in places that would kill most other crops, combined with their astounding nutritional profile, has wooed experts worried about the future of the global food supply.

White, who oversees operations at this 97-acre algae farm and a sister site in Imperial, Texas, for Houston-based Qualitas Health, is familiar with the promise and pitfalls of large-scale algae production. A veteran of algae’s biofuels craze, she worked here in 2012 when a different company spent US $104 million building the farm to produce “green crude.” After three years, crude oil prices tanked, undercutting biofuels, and the facility closed.

The new hope for algae is that they could rebalance the global carbon equation as a food, not a fuel. Despite appearances, algae are an excellent source of protein. If meat-eaters started to eat more algae, the industry’s theory goes, that shift could slash carbon emissions by reducing demand for beef and pork.


Photos: Amy Nordrum Pond to Plate: Technician Jose Alvarado fills shipping totes with freshly harvested Nannochloropsis, the only type of algae grown on this New Mexico farm.

Algae could also replace fertilizer-intensive crops such as corn and soy as fillers in processed foods, including fish, pig, and cow feed. With enough algae in human and animal diets, society could avoid planting new fields even as the population increases, and perhaps even allow existing farmland to return to forests, which absorb more greenhouse gases per square kilometer.

There’s one more big reason to let algae seep into the food supply: Though algae are not technically plants, they need CO2 to grow. And to grow large amounts very quickly, farmers must inject CO2 directly into the crop. That means every algae farm doubles as a carbon sponge.

New technologies aim to capture emissions from power plants and pipe them into algae ponds, in a twist on carbon sequestration. One variation of this dream even calls for placing an algae farm next to every coal-fired power plant to ingest the CO2 produced there.

Today, though, algae production is still a boutique industry focused mostly on nutraceuticals and food dyes. Qualitas’s New Mexico farm churns out Nannochloropsis, or “nano,” as White calls it, for omega-3 supplements. With just 48 ponds in production, adding up to about 50 acres, this site nevertheless qualifies as one of the world’s largest microalgae farms.

The algae industry will have to scale up in a big way to deliver the carbon offsets and the protein the world needs. Eventually, producers will also have to persuade food manufacturers that algae deserves to be used in their products—and convince consumers that it belongs on their plates.

But if everything goes according to plan, these microorganisms could reshape the food supply as dramatically as corn and soy have over the past 50 years. And this time, the planet may be better off for it.

Algae represent one of Earth’s oldest life-forms. Algae also constitute some of the world’s simplest organisms. Many species are unicellular and lack advanced structures such as stems, leaves, and petals.

For such tiny organisms, algae already wield an outsize influence on the planet. Consider Prochlorococcus, the smallest and most abundant type of phytoplankton in the ocean, which forms the basis of the marine food chain. “It’s amazing that it does so much with so little,” says Zackary Johnson, a marine biologist at Duke University, in North Carolina.

Now, scientists want to use algae's superpowers to build a more sustainable food system for land dwellers. Agriculture, forestry, and other land uses (a category defined by the Food and Agriculture Organization of the United Nations) emit 21 percent of greenhouse gases globally. There is rising tension between the need to clear more land to produce food for more people and the desire to keep global emissions in check.

In theory, algae could ease that tension while also providing high-quality sustenance. Protein composes up to 70 percent of the dry weight of Spirulina and some species of algae. (Like Prochlorococcus, Spirulina is technically not a type of algae—it’s a genus of cyanobacteria, but “we allow them into the club,” says Stephen Mayfield, director of the California Center for Algae Biotechnology at the University of California, San Diego.)

Photo: Qualitas Health Desert-Grown: While a soybean farmer harvests only once a year, each pond of algae shown here at the Qualitas farm in New Mexico will produce 33 harvests in 2018.

Thanks in part to algae’s high protein content, 1 hectare of algae ponds can generate 27 times as much protein as a hectare of soybeans. And protein from algae is more nutritious than protein from soy, because it contains vitamins and minerals in addition to all the essential amino acids.

Meanwhile, the global demand for protein will more than double by 2050. Setting aside for a moment the question of whether people who wish to eat steak would be willing to take a swig of algae instead, producing the latter could satisfy demand without requiring large tracts of land to be cleared for pastures or crops.

But growing algae has never been an easy affair. Despite algae’s reputation as a fast-proliferating weed, it still takes a lot of electricity and ingenuity to reliably produce large amounts of it. Farms use electricity that’s primarily still generated from fossil fuels to pump water and constantly stir the bubbling mixture. Algae also need some fertilization with nitrogen and phosphorus, the production and application of which generates emissions.

Given all of this, one life-cycle assessment concluded that a protein powder made from algae would be no better than animal protein from a sustainability perspective, and slightly worse than other vegetable-based proteins, such as soy. Another analysis found that while growing algae for food would avoid some future emissions from deforestation, long-term reductions in emissions would be possible only by producing algae-based food and fuels at the same facility.

So for algae to have any measurable impact on emissions, producers must find ways to grow more of it with less. And to sell it at a price palatable to consumers and food companies, they must also lower their own costs. That means finding a cheap supply of CO2. The gas, which to many industries is nothing more than a problematic by-product, is currently Qualitas’s most expensive nutrient.

Photo: Amy Nordrum Fresh Ideas: Rebecca White of Qualitas stands in front of an energy-efficient harvester at the farm. Most of the water is removed from the algae at this stage, and then recycled back into the ponds.

Dave Hazlebeck believes in the power of technology to help the fledgling algae industry reach its full planet-saving potential. His company, Global Algae Innovations, operates a 33-acre algae farm in Kauai, Hawaii, that doubles as a site to demonstrate dozens of technologies that can grow algae faster with fewer emissions and at a lower cost.

The company’s boldest experiment to date is to divert the flue gases of a nearby coal-fired power plant into their ponds, as the algae’s primary source of CO2. It’s a three-step process: After CO2 is captured with an absorber (akin to a scrubber) that dissolves the gas into recycled water, that carbonated water is stored in covered pools. When the company is ready to grow algae, it pumps the carbonated water into an open pond and inoculates it with one of several algae strains.

Using this method, Hazlebeck says the farm could theoretically capture 90 percent of the CO2 from the plant, which would support up to 1,000 acres of algae. He figures that if the United States stuck an algae farm next to every power plant located in an algae-friendly climate, those farms could sequester 800 million metric tons of CO2, offsetting 198 coal-fired power plants. With that CO2 the farms could produce 400 million metric tons of algae, which is roughly equivalent to how much protein meal the entire world currently makes in a year.

Those figures make a strong case for putting algae everywhere, but White, of Qualitas, isn’t convinced. As soon as I ask her about Hazlebeck’s proposal, she grabs a paper napkin and starts doing her own calculations. To qualify for tax credits for carbon sequestration under a newly proposed bill, a U.S. facility must capture at least 100,000 metric tons of CO2 in a year (a small amount, considering that many coal-fired power plants emit millions of tons of CO2).

To find out how much algae it would take to absorb that much CO2, White figures that an algae farm produces about 14.5 metric tons of algae per acre per year—considered the industry standard. To produce 1 kilogram of algae, White’s team uses 2.7 kg of CO2. That means a farm with at least 2,300 acres of algae would have to be built next to each power plant. For comparison, the average U.S. farm today is 440 acres.
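White's napkin math is easy to reproduce. A minimal sketch, using only the figures quoted above:

```python
# Reproducing the back-of-the-napkin estimate described above.
CO2_TO_CAPTURE_T = 100_000   # metric tons of CO2 per year (tax-credit threshold)
CO2_PER_KG_ALGAE = 2.7       # kg of CO2 consumed per kg of algae grown
ALGAE_T_PER_ACRE = 14.5      # metric tons of algae per acre per year

algae_needed_t = CO2_TO_CAPTURE_T / CO2_PER_KG_ALGAE   # about 37,000 t of algae
acres_needed = algae_needed_t / ALGAE_T_PER_ACRE       # about 2,550 acres
print(f"{acres_needed:,.0f} acres of ponds per qualifying power plant")
# -> roughly 2,550 acres, consistent with White's "at least 2,300 acres"
```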

White is skeptical that farmers could come up with enough land and water to pull this scheme off—even with algae’s advantage of growing on land and in water that’s not suitable for other crops. “It’s not feasible—it’s aspirational,” she says.

A better solution, she believes, would be to capture CO2 directly from the air, using affordable commercial technology that doesn’t yet exist but which several companies are working on. Power plants could pay to operate direct air-capture systems and give the gas to algae farms as a carbon offset. This would “significantly lower the cost of algae production” by eliminating the need to buy tanks of CO2, White says, and it would create a kind of symbiosis between the two industries.

The algae industry has had a lot of false starts. German scientists and employees of Britain’s Imperial Chemical Industries independently began studying algae’s potential as a food source during World War II. Around the same time, Stanford University researchers produced 45 kg of a popular edible type called Chlorella while piloting a concept for an algae production facility.

Needless to say, the idea didn’t catch on. It may have been the taste—one report described Chlorella as having “a vegetable-like flavor, resembling that of raw lima beans or raw pumpkin.” Testers on a flavor panel were less generous—they reported “unpleasantly strong notes,” a “lingering, mildly unpleasant aftertaste,” and a “gag factor.”

Another reason for algae’s stagnancy is that raising microalgae is a relatively new affair. That means many of the technologies that other farmers count on to plant and harvest crops are still being developed for algae. The paddle wheels that stir the Qualitas ponds were custom-built by metalworkers a few towns away.

At Global Algae Innovations, Hazlebeck is trying to change that. The company has invented a drier that he says uses one-tenth of the energy of a typical drier, and ponds that circulate algae with one-third the energy that other farms use for that purpose while producing two to three times as much algae.

Those advances could help the industry reduce its energy use and boost productivity to the point where growing algae proves both profitable and sustainable. The company is now selling the first of its inventions to other algae farms—a harvester that uses one-thirtieth the energy required to remove water from algae compared with traditional processes.

Back at the Qualitas farm in New Mexico, Aaron Smith, a technician who operates the harvester, was wiping off the window of his work station when we stopped by. He’d just pressure washed the entire system the day before, but it was already splattered with bright green goo.

The harvester’s guts are made of long, slim straws with tiny holes only 0.04 micrometers in diameter. The straws fill up with wet algae, and the machine uses pressure to force most of the water through the holes, while the algae remain behind. The design borrows heavily from a system originally developed by General Electric for wastewater treatment plants—one industry where algae have successfully been put to work.

White calls this technology “a massive breakthrough for the industry.” Growers will need more breakthroughs like this one to expand production—she was one of only three people who raised their hands last year at the algae industry’s largest conference, when a presenter asked who there had grown more than an acre of algae.

So how long until algae overhauls our food system? Stephen Mayfield, the UC San Diego algae expert who cofounded the biofuels company that originally built the New Mexico farm, estimates the industry is 10 years from solving the remaining challenges and “hitting that tipping point to where we’re going to replace bulk protein.”

In the eyes of White, who grew up on a cotton farm, the algae industry is finally pointed in the right direction. And, having raised more microalgae in her career than perhaps anyone else in the industry, she has high hopes for where it could lead: “We want to be a part of every meal, every snack, every day.”

This article appears in the June 2018 print issue as “Algae 2.0.”


Original Link

Rooftop Solar Takes Hold in Iraq in the Aftermath of ISIS



Photo: Peter Fairley Power Up: This shop is one of a growing number of small businesses in Iraq that now focus on renewable energy installations.

The souk, or marketplace, keeps buzzing in Sulaimani, a provincial capital in Iraq’s semi-autonomous Kurdish region, even though the national power grid has just gone off-line. Merchants like Mohamad Romie emerge from shops to fire up their generators or switch over to commercial backup power suppliers.

This switch to local power is a more-than-daily ritual across Iraq, thanks to a stubborn electricity supply gap that hinders Iraq’s development. Renewable power installations could shrink that gap—be they rooftop photovoltaics like those Mohamad and his brother Ali are installing through their company, Romie Electric, or utility-scale wind and solar plants. Rooftop power is already beginning to bridge gaps in Iraq’s grid supply, while large utility projects must still gain domestic support and international investment.

Iraq has excellent solar resources, and six years ago it declared ambitions to build hundreds of megawatts of solar power plants (plus smaller wind farms). Then the Islamic State arrived, interrupting the country’s renewable ambitions.

Suppliers such as Romie Electric are moving forward, however, by offering rooftop solar as an alternative to loud, dirty private generators or pricey district power. Solar suppliers generally package PV panels with lead-acid batteries to produce electricity for the 5 to 15 hours of each day without “government power.”


Photo: Peter Fairley Solar Sale: A vendor selling home solar systems displays a photovoltaic panel at a market in Sulaimani, Iraq.

Ali Romie estimates that their firm has installed US $200,000 worth of solar equipment over the past two years and says both demand and competition are now growing. A typical system generates about 1 kilowatt. “People like this system because it has no noise and has no effect on the environment, especially small stores that don’t need a high amount of energy to turn on their lights, TVs, and other devices,” he explains.

Longer-lasting lithium batteries have eclipsed lead batteries in many energy storage markets, but ABB Group microgrids specialist Rob Roys says lead varieties may be a better fit for Iraq, with its daily power outages. “Lead-acid batteries do better when deep cycled” or substantially discharged and recharged, explains Roys.

Solar systems cost many times more than a generator up front but actually deliver cheaper energy because they consume zero fuel, according to Ramyar Ali, assistant manager for Aras Green Energy, a four-year-old renewable-equipment firm based in Sulaimani.

According to the International Energy Agency, power from generators burning government-subsidized fuel costs Baghdad residents 17 to 25 cents per kilowatt-hour. The Abu Dhabi–based International Renewable Energy Agency, meanwhile, recently estimated that rooftop PV in Germany was already generating power for 16 to 18 cents per kilowatt-hour two years ago.

Utility-scale solar and wind plants could someday also supplement the oil- and gas-fired generation that supplied 96 percent of Iraq’s grid power in 2015. Large solar plants are particularly attractive, say experts in Iraq, since they are relatively quick to build and can supply peak usage in the summer, when air conditioners drive demand furthest beyond the national grid’s limits.

But Samad Hussain, a top environmental official in the Kurdistan Regional Government in Erbil, says international firms that finance and build renewable power plants are apprehensive about security threats in Iraq. They also worry about getting paid, he says, because many Iraqi consumers do not pay their government power bills.

Othman Hama Rahim, a renewable-energy researcher at the Kurdistan Institution for Strategic Studies and Scientific Research, cites several domestic challenges to incorporating more renewables into Iraq’s energy mix. One is dust storms, which may necessitate regular cleaning of solar panels. Another is that Iraq’s energy leaders remain focused on exploiting its fossil fuel resources. As Hama Rahim puts it: “We have oil. This is another factor retarding renewable power generation here.”

This article appears in the June 2018 print issue as “Rooftop Solar Takes Hold in Iraq.”


Original Link

Building a Stronger, Safer Zinc Battery


A team led by researchers at the University of Maryland’s A. James Clark School of Engineering has created a water-based zinc battery that is simultaneously powerful, rechargeable, and intrinsically safe.

UMD engineer Fei Wang works on safe zinc batteries.

Together with colleagues at the U.S. Army Research Laboratory and National Institute of Standards and Technology, UMD engineers combined old battery technology (metallic zinc) with new (water-in-salt electrolytes). Building on prior UMD advances to create safer batteries using a novel aqueous electrolyte instead of the flammable organic electrolyte used in conventional lithium-ion batteries, the researchers cranked up the energy of the aqueous battery by adding metallic zinc—used as the anode of the very first battery—and its salt to the electrolyte as well.

The research team says the new aqueous zinc battery could eventually be used not just in consumer electronics, but also in extreme conditions to improve the performance of safety-critical vehicles such as those used in aerospace, military, and deep-ocean environments.


The aqueous zinc electrolyte can be used to assemble safe battery cells, like these.

To underscore the need for safer chemistries, the researchers cite the numerous battery fire incidents in cell phones, laptops, and electric cars highlighted in media coverage. The new aqueous zinc battery could answer that call while maintaining energy densities comparable to, or even higher than, those of conventional lithium-ion batteries.

A paper based on the research was published in Nature Materials.


Original Link

MIT Spin-off Faces Daunting Challenges on Path to Build a Fusion Power Plant in 15 Years

Visualization of the proposed SPARC tokamak experiment. Using high-field magnets built with newly available high-temperature superconductor, this experiment would be the first controlled fusion plasma to produce net energy output. Image: Ken Filar, PSFC Research Affiliate


Fusion power is always two or three decades away. Dozens of experimental reactors have come and gone over the years, inching the field forward in some regard, but still falling short of their ultimate goal: producing cheap, abundant energy by fusing hydrogen nuclei together in a self-sustained fashion.

Now an MIT spin-off wants to use a new kind of high-temperature superconducting magnet to speed up development of a practical fusion reactor. The announcement, by Commonwealth Fusion Systems, based in Cambridge, Mass., caused quite a stir. CFS said it will collaborate with MIT to bring a fusion power plant online within 15 years—a timeline faster by decades than other fusion projects.

CFS, which recently received an investment of US $50 million from Eni, one of Europe’s largest energy companies, says the goal is to build a commercial fusion reactor with a capacity of 200 MWe. That’s a modest output compared to conventional fission power plants—a typical pressurized water reactor, or PWR, can produce upwards of 1,000 MWe—but CFS claims that smaller plants are more competitive than giant, costly ones in today’s energy market.

It’s certain that, between now and 2033, when CFS expects to have its reactor ready for commercialization, the company will face a host of challenges. These revolve around key milestones that include: fabricating and testing the new class of superconducting magnets, and using them to build an experimental reactor, which CFS named SPARC; figuring out how to run SPARC so that fusion reactions inside the machine can produce excess energy in a continuous manner, one of the biggest challenges in any fusion reactor; and finally, scaling up the experimental design into a larger, industrial fusion plant. 

Each of these steps embodies numerous scientific and engineering quandaries that may have never been seen before or have already confounded some of the smartest physicists and nuclear engineers in the world. Can CFS and MIT finally harness fusion power? Maybe. In 15 years? Probably not. 

“Fusion research remains fusion research,” says Robert Rosner, a professor of physics at the University of Chicago and the former director of Argonne National Laboratory. “It’s a field where getting to a practical, energy-generating reactor is not an engineering issue, but a basic science issue.”

Most experimental fusion reactors are based on a Russian design called a tokamak. These machines employ a powerful magnetic field to confine a cloud of hot ionized gas, or plasma, in the shape of a donut. This creates the extreme temperatures—in excess of 100 million degrees Celsius—needed for hydrogen nuclei to speed around and collide, fusing into heavier elements, like helium. The process releases vast amounts of energy. (Fusion is what powers stars like our sun, with their mighty gravity squeezing the hydrogen nuclei into helium.)

CFS and MIT plan to build a tokamak with technology never before employed in fusion. It will generate a magnetic field using a relatively new high-temperature superconducting material made from steel tape that’s coated in a compound called yttrium-barium-copper oxide, or YBCO. The advantage of using this material is that it can produce intense magnetic fields from a much smaller machine than those at other facilities. 

CFS estimates that SPARC will be about one-fourth the size (and 1/65 the volume) of the 23,000-metric ton machine called ITER, the world’s largest experimental tokamak, currently under construction in France. Yet SPARC’s magnet will generate a maximum magnetic field of 22 teslas, nearly double that of ITER’s 12-T magnetic field.
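A commonly cited rule of thumb in tokamak design is that, at fixed normalized plasma pressure, fusion power density scales roughly with the fourth power of the magnetic field. The sketch below applies that rule of thumb to the size and field figures quoted above; it is an illustration of the scaling argument, not a calculation drawn from the SPARC or ITER design documents.

```python
# Rule-of-thumb scaling: fusion power ~ (power density) x (plasma volume),
# with power density roughly proportional to B^4 at fixed normalized pressure.
# Purely illustrative; real machine performance depends on far more than B and volume.

B_SPARC, B_ITER = 22.0, 12.0      # maximum field, teslas (from the article)
VOLUME_RATIO = 1 / 65             # SPARC volume relative to ITER (from the article)

power_density_ratio = (B_SPARC / B_ITER) ** 4        # about 11x
power_ratio = power_density_ratio * VOLUME_RATIO     # about 0.17
print(f"Roughly {power_ratio:.0%} of ITER-scale fusion power from a far smaller machine")
```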

Although MIT has pioneered research in tokamak magnetics and has persisted in exploring the high magnetic field approach to fusion, nobody has made superconducting magnets of that size and strength from YBCO, says Tim Luce, head of operations and science at ITER. “There are a lot of technological challenges associated with that,” he says. 

MIT expects it will take three years to design, fabricate, and test the magnets. For comparison, ITER’s magnets, which consist of 18 units made from niobium-tin and niobium-titanium, are still being built, with final assembly scheduled for 2022 (the ITER project began in 2007). 

There's also the question of fuel. The sun, with its intense gravity and pressure, is able to produce fusion using ordinary hydrogen. But ordinary hydrogen gas doesn't work well in a fusion reactor, because its nuclei rarely fuse when they collide.

To improve the chances of fusion, plasma physicists prefer two gases that are isotopes of hydrogen: deuterium, which is abundant in seawater, and tritium, a form rarely found in nature because it naturally decays with a half-life of about 12 years. A deuterium-tritium mixture, called D-T, has the greatest potential in the near-term for a sustainable fusion reaction that lasts more than a few minutes. But using that mixture has a downside: It produces large amounts of free neutrons, whose lack of an electrical charge allows them to escape the tokamak’s magnetic field. This stream of neutrons reacts with the nuclei of metals in the containment vessel to form new isotopes that can produce harmful radiation or make the vessel material brittle and vulnerable to cracks. 

“Any tokamak must run for years to optimize the plasma before daring to use tritium,” says Daniel Jassby, who was a principal research physicist at the Princeton Plasma Physics Laboratory until 1999.

Tokamak designers who have used D-T fuel—or plan to use it—have come up with creative solutions to deal with the neutrons. ITER engineers, for instance, are designing a water-cooled steel structure about 1-meter thick that will line the inside of the machine. Both the Tokamak Fusion Test Reactor, which the Princeton Plasma Physics Lab operated from 1982 to 1997, and the Joint European Torus, operating at Culham Centre for Fusion Energy in Oxfordshire, U.K., simply surrounded the entire machine in a thick concrete shield.

CFS and MIT want to develop a molten salt blanket that will surround the plasma and behave as a kind of neutron-absorbing lining. Although circulating molten salt has been used in fission nuclear reactors, no one has ever developed such a technology for use inside a tokamak.

In an email to IEEE Spectrum, Robert Mumgaard, CEO of CFS, writes that this collaboration is different from others dominated by government funding with a focus on basic research. In this partnership, MIT will carry out the basic and applied research and CFS will work to commercialize it.

“By involving private industry focused on delivering a working product, the project and company will be able to grow and accelerate upon success, bringing more human and monetary resources to bear,” he says.

“We think that the MIT projection of 15 years to a power plant is very ambitious, if not overly ambitious,” says Luce, of ITER. “But we will celebrate any success, and we share the dream of making energy from fusion.”

Original Link

Cyber Defense Tool Is an Early Warning System for Grid Attacks

Power Lines Photo: iStock Photo


A rifle attack on an electrical substation near California’s Silicon Valley in April 2013 led to the development of a new tool for grid operators that will enable them to better detect not only a brutal physical attack but also the slightest hint of a hacker looking for vulnerabilities in these critical links in the grid.

The thousands of substations that are nodes in North America's electrical grid receive high-voltage energy from transmission lines that originate at power plants and step down that voltage so it can enter local distribution networks to power homes and businesses. Although these facilities are distributed by nature, grid operators worry that the loss of just a few critical substations could trigger an outage that cascades across a region, potentially crippling a major urban center.

Indeed, in 2014, the Wall Street Journal reported the startling findings of a confidential report by the Federal Energy Regulatory Commission (FERC): Thirty substations across the U.S. played an outsized role in grid operations; knocking out nine of them could cause a cascading outage capable of bringing down the nation's grid.

Investigators thought that the intent to trigger such a cascading event may have been behind a 2013 rifle attack on Pacific Gas & Electric's Metcalf substation in Coyote, Calif., near San Jose, home to Silicon Valley. During the still-unsolved crime, attackers cut fiber optic cables to the facility, and then shot up 17 transformers, resulting in $15 million in damage. The utility had to re-route power around the damaged substation until repairs could be made.

A rifle assault means the attacker has to come close enough to blast away at a substation. Perhaps more worrisome to grid operators, however, is the possibility of a cyberattack launched remotely from anywhere on the globe.

Stoking those concerns is the fear that a seminal event in using computer networks to bring down a nation’s infrastructure—the December 2015 assault on Ukraine’s power grid—will happen again. In that attack, hackers switched off 30 substations across three energy distribution companies, disrupting electricity supply to around 230,000 end users for up to six hours.

Against those background events, a team of researchers working at the U.S. Energy Department’s Lawrence Berkeley National Laboratory completed work earlier this year on a project to design and implement a tool they say can detect cyberattacks and physical assaults on power distribution networks.

Their tool, developed after three years of work, uses micro phasor measurement units (μPMUs) to collect information about the physical state of the power distribution grid. Combining that data with SCADA (supervisory control and data acquisition) information provides real-time insights into system performance and alerts grid operators to even a minor disruption.

Marriage Made in a Laboratory

Grid operators look to frequency—60 hertz in North America and 50 Hz in Europe, for example—as a primary indicator of system health. Devices known as synchrophasors help operators monitor frequency by measuring both the magnitude and the phase angle of the sine waves found in electricity. Synchrophasors provide data orders of magnitude faster than the SCADA systems in common use across the world's power networks. If installed at facilities such as substations, synchrophasors can pay close attention to frequency and alert operators to a system anomaly that deserves attention.
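To make the measurement idea concrete, here is a minimal sketch of how a phasor's magnitude and phase angle can be estimated from one cycle of sampled waveform data using a single-frequency Fourier fit. It illustrates the concept only, and is not the algorithm inside any particular PMU.

```python
import numpy as np

F_NOMINAL = 60.0        # Hz, North American grid frequency
SAMPLES_PER_CYCLE = 64  # assumed sampling of one nominal cycle

def estimate_phasor(samples: np.ndarray) -> tuple[float, float]:
    """Return (RMS magnitude, phase angle in degrees) of the nominal-frequency component."""
    n = len(samples)
    theta = 2 * np.pi * np.arange(n) / n            # one electrical cycle of sample angles
    phasor = 2 / n * np.sum(samples * np.exp(-1j * theta))
    magnitude_rms = np.abs(phasor) / np.sqrt(2)
    angle_deg = np.degrees(np.angle(phasor))
    return magnitude_rms, angle_deg

# Synthetic one-cycle waveform: 120 V RMS with a 30-degree phase shift
t = np.arange(SAMPLES_PER_CYCLE) / SAMPLES_PER_CYCLE / F_NOMINAL
v = 120 * np.sqrt(2) * np.cos(2 * np.pi * F_NOMINAL * t + np.radians(30))
print(estimate_phasor(v))   # approximately (120.0, 30.0)
```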

The threat detection application developed at Berkeley Lab marries safety engineering with computer security, says Sean Peisert, a computer scientist in the lab's Computational Research Division. He led the research effort, along with collaborators at Arizona State University, synchrophasor pioneer Power Standards Laboratory, the Electric Power Research Institute, software vendor OSISoft, and utility partners Riverside Public Utilities and Southern Company.

A Power Standards Lab microPMU Photo: Power Standards Lab

Developed at Power Standards Lab under a project led by UC Berkeley and funded by the Department of Energy's ARPA-E program, µPMUs are designed to increase situational awareness at the power distribution grid level.

That marriage is important, says John Matranga, director of Customer Innovation and Academia at OSISoft. “What Sean was able to do was bring forth the idea that data is a critical element in determining the cyber state of the grid.” By comparing hard data from the control system with first-principles physics that describe how the grid should function, grid operators can determine if a suspicious event is underway.

Investigators looking into the 2015 Ukraine attack, for example, learned that one or more intruders had gained access to equipment control functions and were stealthily looking for vulnerabilities. Their intrusion began months before the substation attacks were launched.

That sort of “reconnaissance attack” may have involved making small changes to how equipment operated such as threshold adjustments, says Ciaran Roberts, Senior Scientific Engineering Associate at Berkeley Lab. To help counter such a probing threat, the new detection tool uses machine learning so that a system’s long-term nominal operational mode can be compared against real-time SCADA data. Unexpected behaviors immediately make themselves evident; operating system engineers can quickly involve their information technology counterparts to better understand what is going on, Roberts says.
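The baseline-versus-real-time comparison Roberts describes can be illustrated with a very simple detector: learn the normal band of a measurement from historical data, then flag readings that fall outside it. This is a toy sketch of the general idea, not the Berkeley Lab team's actual model.

```python
import numpy as np

def learn_baseline(history: np.ndarray, k: float = 4.0) -> tuple[float, float]:
    """Learn a nominal operating band (mean +/- k standard deviations)."""
    mu, sigma = history.mean(), history.std()
    return mu - k * sigma, mu + k * sigma

def flag_anomalies(readings: np.ndarray, band: tuple[float, float]) -> np.ndarray:
    """Return indices of readings that fall outside the learned band."""
    low, high = band
    return np.where((readings < low) | (readings > high))[0]

# Toy example: a long record of "normal" 60 Hz frequency data, then a live window
# in which a quiet tampering event is simulated as a brief dip.
rng = np.random.default_rng(0)
normal_history = 60.0 + 0.01 * rng.standard_normal(100_000)
live_window = 60.0 + 0.01 * rng.standard_normal(600)
live_window[300:305] -= 0.2          # injected disturbance

band = learn_baseline(normal_history)
print("Suspicious samples at indices:", flag_anomalies(live_window, band))
```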

Grid hardening efforts began after the terror attacks of 11 September 2001, says Bryan Owen, Security Chief at OSISoft. Those efforts began with control centers and power plants, and were guided by rules from FERC and the North American Electric Reliability Corporation, which is responsible for the grid’s overall health. Security rules were extended to substations following the Metcalf attack, Owen says.

Up on the Roof

The Berkeley Lab team is extending its work beyond substations to include distributed generation resources such as rooftop solar panels. The worry is that the thousands of solar panels and their electronic equipment could entice a hacker to gain access to power inverters and disrupt a region’s power grid. Such an intrusion could come from something as simple as a software update pushed out by an equipment supplier.

The possibility of an attack has actually been increased by industry and government efforts to develop standards for how solar inverters communicate with the grid.

“It is this standardization that presents a vulnerability,” said Daniel Arnold, a Berkeley Lab researcher, in announcing the project.

Berkeley Lab will lead work to develop algorithms that counteract attacks on solar inverters by sending opposite signals to nullify malware—similar to what a noise-canceling headphone does.

The three-year, $2.5 million project began in early March and includes industry partners, the National Rural Electric Cooperative Association, and the Sacramento Municipal Utility District.

Original Link

SXSW 2018: Energy and Tech Executives Envision the Carbon-Free Future

The Hydrogen Fuel Cell Power System in Stuttgart/Sunnyvale Photo: Daimler AG


Let’s say you live in a developed country and you’re concerned about your carbon footprint. You’re aware that the world generates and uses about 575 exajoules of energy a year, and that there are 7.632 billion people on the planet. Not wanting to be an energy hog, you do a quick calculation to figure out your fair share. You come to a sobering realization: One round-trip flight from San Francisco to Rome and you’re done.

Just that one flight “would blow your energy budget for the year,” says Maarten Wetselaar, Director of Integrated Gas & New Energies at Shell International Exploration and Production. “No more electricity” use. “No more heating in the winter. No more air conditioning in the summer.”
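The quick calculation Wetselaar alludes to is a single division. A minimal sketch, using the figures quoted above; converting the result into the energy cost of a specific flight would require aircraft and load-factor assumptions the panel did not give, so the sketch stops at the per-person share.

```python
# The world's "fair share" of energy per person, using the figures quoted above.
WORLD_ENERGY_EJ = 575          # exajoules of energy generated and used per year
WORLD_POPULATION = 7.632e9     # people

per_person_gj = WORLD_ENERGY_EJ * 1e9 / WORLD_POPULATION   # 1 EJ = 1e9 GJ
per_person_kwh = per_person_gj * 1e9 / 3.6e6               # 1 kWh = 3.6e6 J

print(f"{per_person_gj:.0f} GJ per person per year (~{per_person_kwh:,.0f} kWh)")
# -> roughly 75 GJ, or about 21,000 kWh, per person per year
```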

Wetselaar was part of a panel on the future of energy at the 2018 South by Southwest Interactive conference. The group’s far-ranging discussion considered such sweeping industry trends as tech collaborations between large companies and small startups, the rapid spread of distributed and renewable power generation, and surprising new combinations of technologies that are starting to cut greenhouse-gas emissions.

Wetselaar’s declaration about personal energy budgets was meant to give his listeners a vivid idea of the magnitude of the changes in store for not just energy producers but also for consumers. “The energy sector needs to change drastically in the next 30 years,” he says. “And no one knows for sure what it will look like.”

He adds: “It’s not just about cleaner energy. It’s about producing a lot more energy. That’s a big agenda for energy producers, like us, but also for consumers.”

To underscore his point, the Shell executive offers a quick reality check: For every electric vehicle purchased in the United States, 70 gasoline-powered SUVs are sold; the figure for Europe is 35.

Some large tech companies are getting the message and looking for ways to reduce their footprints, says another panelist, senior technologist John Frey of Hewlett-Packard Enterprise (HPE). A few years ago, the company, which offers servers, networks, and storage, performed an analysis and discovered that “our own operational footprint was only 10 percent of our impact on the globe.” The rest came from the greenhouse-gas emissions of companies in its supply chain and other factors.

After some soul searching, company officials concluded that “If we're going to power technology using renewable energy, we can't do it ourselves.” One initiative, launched two years ago, is a four-way collaboration among HPE, the U.S. National Renewable Energy Laboratory, Daimler AG, and Power Innovations International, located in American Fork, Utah. The goal is a hydrogen-based, carbon-free data center, powered by solar cells and wind turbines.

Data centers consume huge amounts of power and have high reliability requirements, because outages can mean significant financial losses. So, powering them with intermittent sources, such as solar and wind, hasn’t been tried before. According to Frey, an important shift occurred last November, when the collaborative project began powering the NREL data center with Daimler fuel cells originally developed for Mercedes trucks and SUVs. Power Innovations did the systems integration for the project.

At the moment, the fuel cells are using hydrogen reformed from natural gas. But the near-future goal, Frey explains, is to use hydrogen generated from solar and wind. During times of high power output, the system will produce and store excess hydrogen in tanks for use when the photovoltaics or wind turbines cannot meet demand.
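The renewable-plus-hydrogen arrangement Frey describes amounts to a simple energy-balance loop: surplus renewable power runs an electrolyzer, and the fuel cells draw the stored hydrogen down when generation falls short. The toy sketch below shows that bookkeeping under assumed round-trip efficiencies; it is not based on the actual NREL/Daimler system parameters.

```python
# Toy hourly dispatch of a renewable-powered data center with a hydrogen buffer.
# Efficiencies and load figures are assumptions for illustration only.

ELECTROLYZER_EFF = 0.65   # fraction of surplus electricity stored as hydrogen energy
FUEL_CELL_EFF = 0.55      # fraction of stored hydrogen energy returned as electricity
DATA_CENTER_LOAD_KW = 500

renewable_kw = [800, 650, 300, 100, 0, 0, 200, 700]   # e.g. solar output over 8 hours
h2_storage_kwh = 0.0
unserved_kwh = 0.0

for gen in renewable_kw:
    if gen >= DATA_CENTER_LOAD_KW:
        # Surplus hour: run the electrolyzer and bank hydrogen.
        h2_storage_kwh += (gen - DATA_CENTER_LOAD_KW) * ELECTROLYZER_EFF
    else:
        # Deficit hour: draw on the fuel cells, tracking any shortfall.
        needed = DATA_CENTER_LOAD_KW - gen
        deliverable = h2_storage_kwh * FUEL_CELL_EFF
        served = min(needed, deliverable)
        h2_storage_kwh -= served / FUEL_CELL_EFF
        unserved_kwh += needed - served

print(f"Hydrogen left in storage: {h2_storage_kwh:.0f} kWh, unserved load: {unserved_kwh:.0f} kWh")
```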

The partnership is an example of the kind of alliances that are becoming increasingly common in the energy industry. “One of the surprising things we’ve seen in the last few years is energy, info-tech, and mobility companies coming together,” says Jules Kortenhorst, CEO of the Rocky Mountain Institute, and the panel’s moderator.

Shell’s Wetselaar verified the trend:

“Historically, the approach to innovation was quite closed. Quite well-paid technologists and researchers worked behind closed doors trying to create the intellectual property that we could use. But now it’s much more open. We’re looking to work with others, collaboratively. Even startups.”

Wetselaar reports that Shell is investing in startups mainly through a venture-capital arm the company runs.

He envisions a grid based mainly on renewable power sources. He concedes that the intermittent nature of these sources will be a problem, but believes it will be solved in the foreseeable future. Intermittency, he declares, is “one of the questions that will be solved in the next 10 years.”

Image: Daimler AG

Daimler transfers its automotive fuel cell technology to stationary power systems in order to provide a sustainable and independent energy supply for a power-hungry data center. Eventually, the hydrogen fuel will be generated by renewable sources.

He disputed the idea that a resurgence of nuclear power would be needed to enable the world to keep chugging into a future in which India and China continue to electrify, generation continues to surge, and the rate of greenhouse gas emissions goes down rather than up.

“If you invest sufficiently in wind and solar, then all you need is a mid-merit solution,” in between baseload generators and peaking plants, says Wetselaar. Such a solution might be “stored hydrogen, natural gas, or hydropower,” depending on location, he says. He adds that structural changes in electricity grids will also be required in such a system. He acknowledges the failure of such a vision in Germany, which some years ago began attempting to emphasize wind power in the north and solar in the south. The critical third link in the chain was to be “a grid to make it all come together, but which failed due to public resistance,” he concedes.

Wetselaar also believes that a carbon tax is necessary to make progress on cutting emissions. “We’ve argued for a carbon tax in Europe,” he says. “Putting a price on carbon is the best way to let the market decide how best to decarbonize. I think it will continue to be an important tool. But if we wait for the politicians to implement it, we will miss the Paris Accords completely.

“Implementing a carbon price takes political courage,” he continues. “And we don’t see much of that.”

Regardless of the details, Wetselaar says the big picture is clear: “Certainly, we believe that by 2070 the energy system has to be net carbon free.”

Original Link

Special Report: Puerto Rico After the Storm


How Project Loon Built the Navigation System That Kept Its Balloons Over Puerto Rico

The balloons provided basic Internet service to more than 200,000 people after Hurricane Maria

Utilities Bury More Transmission Lines to Prevent Storm Damage

Facing hurricanes and public opposition to overhead lines, utilities are paying extra to go underground


How to Build a More Resilient Power Grid

During big storms, falling trees cause more damage to power grids than strong winds

$17 Billion Modernization Plan for Puerto Rico’s Grid Is Released

The plan adds details and a cost estimate to what was outlined earlier to Energywise by a senior official who oversaw the report's development.

Plans Emerge to Rebuild Puerto Rico’s Electric Grid

A soon-to-be-released plan is likely to feature micro-grids and distributed generation

7 Things About Life in Puerto Rico with No Electricity

A Puerto Rican solar engineer reflects on the struggles of daily life after Hurricane Maria

Battery Storage Will Offer Grid Support as Puerto Rico Recovers

Storage batteries are gaining credibility as a reliable and rapidly deployable technology. Recent disasters play to the technology’s strengths

Why Solar Microgrids May Fall Short in the Caribbean

The cost, complexity, and resilience of solar picogrids may keep them from displacing fossil-fueled generation

Logistics Complicate Puerto Rico’s Electric Grid Recovery

The need to haul equipment and personnel by plane and barge has slowed efforts to restore the grid

Original Link
