Solar and Battery Companies Rattle Utility Powerhouses



How serious a threat to power companies is Tesla's sweeping, distributed "Energy Plan"?

Harmony Energy's 34 MW battery storage site is now online between Brighton and London.

All eyes these days may be on Elon Musk's space venture—which has just put people in orbit—but here on Earth you can now get your monthly electric bill courtesy of a different Musk enterprise.

Tesla and its partner Octopus Energy Germany recently rolled out retail utility services in two large German states. Marketed as the "Tesla Energy Plan," the service is available to any individual household in this region of 24 million people that has a solar panel system, a grid connection—and a Tesla Powerwall, the Palo Alto firm's gigafactory-made 13.5-kWh battery wall unit.

The German initiative comes on the heels of a similar rollout through Octopus Energy last November in the United Kingdom.

It's too soon to say if these are the nascent strands of the "giant distributed utility" Musk has long talked up, a phrase whose precise meaning is still unclear. Analysts and power insiders sketch scenes of interconnected local renewable grids drawing on short-duration battery storage (including the small batteries in electric vehicles parked in garages, vehicles Tesla also happens to make) combined with multi-day storage for power generated by wind and solar. For bigger national grids it gets more complicated. Even so, Tesla now has gear on the market that institutional battery-storage developers can use to run load-balancing trading operations: consumers won't see those, but they are part of ongoing changes as renewables become more important in the power game. Being able to get a Tesla-backed power bill in the mailbox, though—that's grabbing attention. More broadly, the notion of what is and isn't a utility is in flux.

"Over the last five to 10 years we have seen an uptick in new entrants providing retail energy services," says Albert Cheung, head of global analysis at BloombergNEF. "It is now quite common to see these types of companies gain significant market share without necessarily owning any of their own generation or network assets at all."

A decade ago it became possible to get your electricity in the UK from a department store chain (though with the actual power supplied first by a Scottish utility and—as of 2018—arranged and managed by Octopus Energy). As Tesla and other makers of home energy storage systems ramp up production of modular large-scale lithium-ion batteries that can be stacked together in industrial storage facilities, new wrinkles are coming to the grid.

"There are simply going to be more and different business models out there," Cheung says. "There is going to be value in distributed energy resources at the customer's home; Whether that is a battery, an electric vehicle charger, a heat pump or other forms of flexible load, and managing these in a way that provides value to the grid will create revenue opportunities."

Tesla Gigafactory site taking shape in Grünheide, Germany in June 2021. It is due to open in late 2021 or early 2022. Michael Dumiak

Tesla the battery maker, with its giant new production plant nearing completion outside Berlin, may be in a position to supply a variety of venues with its wall-sized and cargo-container-sized units. As it does so, its controversial bet in first backing and then absorbing panel producer SolarCity may start to look a little different.

Harmony Energy seems pretty pleased. The UK-based energy developer has just broken ground on a new four-acre battery storage site outside London, its third such site. Its second just came online with 68 MWh of storage capacity and a 34 MW peak, the site comprising 28 Tesla Megapack batteries. Harmony expects to be at over a gigawatt of live, operating output in the next three to four years.
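A quick back-of-envelope calculation from those figures, sketched in Python using only the numbers above, shows what kind of asset this is: a two-hour battery, with each Megapack contributing roughly 2.4 MWh.

# Rough arithmetic from the Harmony figures cited above.
energy_mwh = 68.0    # storage capacity of the site
power_mw = 34.0      # peak output
megapacks = 28       # Tesla Megapack units on site

duration_h = energy_mwh / power_mw       # how long it can sustain peak output
per_pack_mwh = energy_mwh / megapacks    # capacity per Megapack

print(f"Discharge duration at peak: {duration_h:.1f} h")   # 2.0 h
print(f"Capacity per Megapack: {per_pack_mwh:.2f} MWh")    # 2.43 MWh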

The Harmony enterprise works with the UK national grid, however—that's different from Octopus's German and UK retail initiatives. Both Harmony and Octopus depend on trading and energy network management software platforms, and Tesla works with both. But while Octopus has its own in-house management platform—Kraken—Harmony engages Tesla's Autobidder.

Peter Kavanagh, Harmony's CEO, says his firm pays Tesla to operate Autobidder on its behalf—Tesla is fully licensed to trade in the UK and is an approved utility there. The batteries get charged when power is cheap; when there's low wind and no sun, energy prices may start to spike, and the batteries can discharge the power back into the grid, balancing the constant change of supply and demand, and trading on the difference to make a business.
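The article doesn't detail Autobidder's internals, and the platform is proprietary, but the buy-low, sell-high logic Kavanagh describes can be sketched in a few lines of Python. Everything here (prices, thresholds, the greedy hourly policy) is invented for illustration; real trading systems forecast prices and optimize across several markets at once.

# Minimal sketch of the charge-cheap, discharge-dear arbitrage described
# above. Illustrative only: prices, thresholds, and the greedy policy are
# made up, and real platforms like Autobidder are far more sophisticated.

def dispatch(hourly_prices, capacity_mwh=68.0, power_mw=34.0,
             buy_below=40.0, sell_above=120.0):
    """Greedy hourly dispatch: charge when power is cheap, sell into spikes."""
    soc = 0.0      # state of charge, MWh
    profit = 0.0   # currency units per MWh prices (e.g. GBP)
    for price in hourly_prices:                    # one price per hour
        if price < buy_below and soc < capacity_mwh:
            e = min(power_mw, capacity_mwh - soc)  # MWh bought this hour
            soc += e
            profit -= e * price
        elif price > sell_above and soc > 0:
            e = min(power_mw, soc)                 # MWh sold this hour
            soc -= e
            profit += e * price
    return profit

# A made-up day: cheap overnight trough, evening price spike.
day = [35, 30, 28, 33, 45, 60, 80, 95, 90, 85, 70, 65,
       60, 55, 60, 75, 110, 150, 180, 140, 95, 70, 50, 40]
print(f"Gross margin for the day: {dispatch(day):,.0f}")   # 9,010

Round-trip losses, battery degradation, and network charges, all ignored here, would eat into that margin in practice.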

A load-balancing trading operation is not quite the same as mainlining renewables to light a house. On any national grid, once the energy is in there, it's hard to trace the generating source—some of it will come from fossil fuels. But industrial-scale energy storage is crucial to any renewable operation: the wind dies down, the sun doesn't always shine. "Whether it's batteries or some other energy storage technology, it is key to hitting net zero carbon emissions," Kavanagh says. "Without it, you are not going to get there."

Battery research and development is burgeoning far beyond Tesla, and the difficult hunt is on to move past lithium-ion. And it's not just startups and young firms in the mix: Established utility giants—the Pacific Gas & Electrics of the world, able to generate as well as retail power—are also adding battery storage, and at scale. The large German industrial utility RWE has started its own battery unit and now operates small energy storage sites in Germany and in Arizona. Newer entrants, potential energy powerhouses, are on the rise in Italy, Spain, and Denmark.

The Tesla Energy Plan does have Germany's attention, though, from media and energy companies alike. It's also of note that Tesla is behind the very large battery at Australia's Hornsdale Power Reserve. One German pundit imagined Octopus's Kraken management platform as a "monstrous octopus with millions of tentacles," linking myriad household storage units to form a huge virtual power plant. That would be something to reckon with.

This article appears in the November 2021 print issue as "The New (Distributed) Utilities."

Michael Dumiak is a Berlin-based writer and reporter covering science and culture and a longtime contributor to IEEE Spectrum. For Spectrum, he has covered digital models of ailing hearts in Belgrade, reported on technology from Minsk and shale energy from the Estonian-Russian border, explored cryonics in Saarland, and followed the controversial phaseout of incandescent lightbulbs in Berlin. He is author and editor of Woods and the Sea: Estonian Design and the Virtual Frontier.

Samsung Could Double Performance of Neural Nets With Processing-in-Memory

Samsung added AI compute cores to DRAM memory dies to speed up machine learning.

John von Neumann’s original computer architecture, where logic and memory are separate domains, has had a good run. But some companies are betting that it’s time for a change.

In recent years, the shift toward more parallel processing and a massive increase in the size of neural networks mean processors need to access more data from memory more quickly. And yet “the performance gap between DRAM and processor is wider than ever,” says Joungho Kim, an expert in 3D memory chips at Korea Advanced Institute of Science and Technology, in Daejeon, and an IEEE Fellow. The von Neumann architecture has become the von Neumann bottleneck.

What if, instead, at least some of the processing happened in the memory? Less data would have to move between chips, and you’d save energy, too. It’s not a new idea. But its moment may finally have arrived. Last year, Samsung, the world’s largest maker of dynamic random-access memory (DRAM), started rolling out processing-in-memory (PIM) tech. Its first PIM offering, unveiled in February 2021, integrated AI-focused compute cores inside its Aquabolt-XL high-bandwidth memory. HBM is the kind of specialized DRAM that surrounds some top AI accelerator chips. The new memory is designed to act as a “drop-in replacement” for ordinary HBM chips, said Nam Sung Kim, an IEEE Fellow, who was then senior vice president of Samsung’s memory business unit.

Last August, Samsung revealed results from tests in a partner's system. When used with the Xilinx Virtex UltraScale+ (Alveo) AI accelerator, the PIM tech delivered a nearly 2.5-fold performance gain and a 62 percent cut in energy consumption for a speech-recognition neural net. Samsung has been providing samples of the technology integrated into the current generation of high-bandwidth DRAM, HBM2. It's also developing PIM for the next generation, HBM3, and for the low-power DRAM used in mobile devices. It expects to complete the standard for the latter with JEDEC in the first half of 2022.

There are plenty of ways to add computational smarts to memory chips. Samsung chose a design that’s fast and simple. HBM consists of a stack of DRAM chips linked vertically by interconnects called through-silicon vias (TSVs). The stack of memory chips sits atop a logic chip that acts as the interface to the processor.

Here is where the major players stand on processing-in-memory:

• Micron Technology: The third-largest DRAM maker says it does not have a processing-in-memory product. However, in 2019 it acquired the AI-tech startup Fwdnxt, with the goal of developing "innovation that brings memory and computing closer together."

• NeuroBlade: The Israeli startup has developed memory with integrated processing cores designed to accelerate queries in data analytics.

• Rambus: Engineers at the DRAM-interface-tech company did an exploratory design for processing-in-memory DRAM focused on reducing the power consumption of high-bandwidth memory (HBM).

• Samsung: Furthest along, the world's largest DRAM maker is offering the Aquabolt-XL with integrated AI computing cores. It has also developed an AI accelerator for memory modules, and it's working to standardize AI-accelerated DRAM.

• SK hynix: Engineers at the second-largest DRAM maker and Purdue University unveiled results for Newton, an AI-accelerating HBM DRAM, in 2020, but the company decided not to commercialize it and to pursue PIM for standard DRAM instead.

The highest data bandwidth in the stack lies within each chip, followed by the TSVs, and finally the connections to the processor. So Samsung chose to put the processing on the DRAM chips to take advantage of the high bandwidth there. The compute units are designed to do the most common neural-network calculation, called multiply and accumulate, and little else. Other designs have put the AI logic on the interface chip or used more complex processing cores.
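A toy accounting, not Samsung's design, shows why placing multiply-and-accumulate units in the banks pays off: for a matrix-vector product, the weights never have to cross the comparatively narrow memory-to-processor link; only inputs and results do. The matrix size and data type below are arbitrary.

# Toy accounting of bytes crossing the memory-to-processor link for
# y = W @ x, with and without in-memory multiply-and-accumulate.
rows, cols = 4096, 4096    # weight matrix W stored in the DRAM banks
dtype_bytes = 2            # e.g. 16-bit values

# Conventional: stream all weights plus the input vector to the processor.
conventional = rows * cols * dtype_bytes + cols * dtype_bytes

# PIM: the banks compute locally; only the input vector goes in and the
# output vector comes back.
pim = cols * dtype_bytes + rows * dtype_bytes

print(f"Bytes over the link, conventional: {conventional:,}")  # 33,562,624
print(f"Bytes over the link, PIM:          {pim:,}")           # 16,384
print(f"Reduction: {conventional / pim:,.0f}x")                # ~2,048x

The real Aquabolt-XL keeps the standard HBM interface and spreads its compute units across the banks, but this bookkeeping is the essence of the bandwidth argument.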

Samsung’s two largest competitors, SK hynix and Micron Technology, aren’t quite ready to take the plunge on PIM for HBM, though they’ve each made moves toward other types of processing-in-memory.

Icheon, South Korea–based SK hynix, the No. 2 DRAM supplier, is exploring PIM from several angles, says Il Park, vice president and head of memory-solution product development. For now it is pursuing PIM in standard DRAM chips rather than HBM, an approach that might be simpler for customers to adopt, Park says.

HBM PIM is more of a mid- to long-term possibility for SK hynix. At the moment, customers are already dealing with enough issues as they try to move HBM DRAM physically closer to processors. "Many experts in this domain do not want to add more, and quite significant, complexity on top of the already busy situation involving HBM," says Park.

That said, SK hynix researchers worked with Purdue University computer scientists on a comprehensive design of an HBM-PIM product called Newton in 2019. Like Samsung’s Aquabolt-XL, it places multiply-and-accumulate units in the memory banks to take advantage of the high bandwidth within the dies themselves.

“Samsung has put a stake in the ground,” —Bob O’Donnell, chief analyst at Technalysis Research

Meanwhile, Rambus, based in San Jose, Calif., was motivated to explore PIM because of power-consumption issues, says Rambus fellow and distinguished inventor Steven Woo. The company designs the interfaces between processors and memory, and two-thirds of the power consumed by a system-on-chip and its HBM memory goes to transporting data horizontally between the two chips. Transporting data vertically within the HBM uses much less energy because the distances are so much shorter. "You might be going 10 to 15 millimeters horizontally to get data back to an SoC," says Woo. "But vertically you're talking on the order of a couple hundred microns."
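If interconnect energy is taken to scale roughly with wire length (a crude first-order assumption; real figures depend on capacitance, signaling rates, and driver circuits), Woo's distances imply an energy gap of well over an order of magnitude per bit moved:

# First-order comparison of the distances Woo cites. The linear-scaling
# assumption is a simplification for illustration, not a Rambus figure.
horizontal_mm = 12.5   # midpoint of the 10-15 mm SoC-to-HBM span cited
vertical_mm = 0.2      # "a couple hundred microns" up through the stack

print(f"Wire-length ratio, horizontal vs. vertical: "
      f"~{horizontal_mm / vertical_mm:.0f}x")   # ~62x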

Rambus’s experimental PIM design adds an extra layer of silicon at the top of the HBM stack to do AI computation. To avoid the potential bandwidth bottleneck of the HBM’s central through-silicon vias, the design adds TSVs to connect the memory banks with the AI layer. Having a dedicated AI layer in each memory chip could allow memory makers to customize memories for different applications, argues Woo.

How quickly PIM is adopted will depend on how desperate the makers of AI accelerators are for the memory-bandwidth relief it provides. "Samsung has put a stake in the ground," says Bob O'Donnell, chief analyst at Technalysis Research. "It remains to be seen whether [PIM] becomes a commercial success."

This article appears in the January 2022 print issue as "AI Computing Comes to Memory Chips."