
Designing for EMI Testing: A Step-by-Step Guide

Solve your EMI problems more efficiently with oscilloscope solutions

Download this free white paper and learn how to analyze EMI for more efficient R&D and improved time-to-market.

Key take-aways:

  • Understand the basic steps involved in EMI testing
  • Learn to use probes to discover an interference signal
  • Discover how to analyze the interference behavior using a digital oscilloscope

Original Link

DDR Memory Test Challenges from DDR3 to DDR4 and DDR5

Prepare for DDR5 Test Challenges

DDR memory chip technology has progressed through two generations in the past 10 years, and the next generation is currently being defined. Each of these generations improved in speed, efficiency, and memory capacity. Don’t get left behind—stay up to date on the DDR memory challenges that lie ahead!

Download the “DDR Memory – Test Challenges from DDR3 to DDR4 and DDR5” white paper to get tips on testing and learn about the latest test equipment.

Original Link

Shielding Effectiveness of Expanded Metal Foils

Careful circuit design can minimize EMI, but additional shielding measures are often required.

Expanded metal foils (EMFs) are versatile, effective EMI shielding materials. Formed from thin metal foils, they yield a lightweight, strong, and flexible sheet material. Expanded copper foil is commonly used for EMI shielding, but aluminum, nickel, Monel, and stainless-steel foils can also be used to meet unique specifications.
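Shielding performance is typically quantified as shielding effectiveness (SE): the ratio of incident to transmitted field strength, expressed in decibels. A minimal sketch of that conversion (the field values are illustrative, not from the article):

```python
import math

def shielding_effectiveness_db(e_incident: float, e_transmitted: float) -> float:
    """Shielding effectiveness in dB from incident and transmitted E-field strengths."""
    return 20.0 * math.log10(e_incident / e_transmitted)

# A shield that attenuates the field by a factor of 1000 provides about 60 dB of SE.
print(shielding_effectiveness_db(1000.0, 1.0))
```

The same 20·log10 form applies to H-field ratios; for power ratios the factor is 10 instead of 20.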

Original Link

Increase Reliability and Efficiency in Power Converter Designs

Tips for better power conversion.

Efficiently create and debug power converter designs that use wide-bandgap power devices, and maximize their potential in vehicle electrification and HEMS applications. Learn more by reading Keysight’s three-part application note series, Increasing Reliability and Efficiency in Next Generation Power Converter Designs:

  • Part 1 – Power Device and Component Evaluation
  • Part 2 – Design Software Simulation
  • Part 3 – Hardware Design and Debug

Original Link

DARPA Picks Its First Set of Winners in Electronics Resurgence Initiative

Illustration of a dollar sign on a circuit board
Illustration: iStockphoto

Hundreds of engineers gathered at the DARPA Electronics Resurgence Initiative Summit in San Francisco yesterday to hear that dozens of them were getting millions (one group was awarded more than US $60 million) to tackle some big questions. If researchers can answer them in the affirmative, they will keep electronics going long after Moore’s Law is a thing of the past. And in the process they’ll likely change the nature of the industry as well as the jobs of engineers.

The Electronics Resurgence Initiative has three main thrusts: design, architecture, and materials and integration. (For a deep dive into the programs and the reasoning behind them, see last week’s interview with the initiative’s leader, Bill Chappell.)

ZGlue Aims to Make It Cheap and Easy to Produce Wearables and Other IoT Hardware

This ZiP chip incorporates a microcontroller with a Bluetooth radio, a clock chip, an accelerometer, and an optical heart rate chip. It also has embedded power management and system management features.
Photo: ZGlue

Have you taken a look at Kickstarter recently? Earlier this week, entrepreneurs were trying to fund more than 1,400 projects to build some kind of wearable device, and another 200 to build an IoT gadget.

Moving from an idea on Kickstarter to a prototype and then to mass manufacture is challenging, however. Many of these 1,600 developers have yet to find that out; other entrepreneurs have an idea but don’t have the time or cash to create a prototype they can display on Kickstarter.

Today, manufacturing a wearable requires either assembling components onto a printed circuit board—an approach that can be counterproductive when you are trying to make a gadget as small and light as possible—or developing a multichip module (MCM) or system-in-package (SIP), custom-built on an organic or ceramic substrate with copper wires connecting chips.

“Developing these devices isn’t cheap or easy,” says Greg Taylor, an advisor to a startup called zGlue that’s based in Mountain View, Calif. “It’s OK if you’re part of a big company. If not, though, you may be out of luck.” He says zGlue estimates that getting to a prototype SIP could cost a small company more than US $200,000.

ZGlue has created what it says is the key to a whole new world of wearable and IoT gadgets: the ZiP chip. (ZiP stands for zGlue Integration Platform.) A ZiP chip uses the same off-the-shelf minimally packaged chips as a multichip module, but zGlue says that, using its technology, design and manufacturing are vastly simpler, faster, and cheaper.

AMD Tackles Coming “Chiplet” Revolution With New Chip Network Scheme

Illustration of puzzle pieces forming a chip
Illustration: iStockphoto/IEEE Spectrum

The time may be coming when computers and other systems are made not from individually packaged chips attached to a printed circuit board but from bare ICs interconnected on a larger slice of silicon. Researchers have been developing this concept called “chiplets” with the idea that it will let data move faster and freer to make smaller, cheaper, and more tightly integrated computer systems. The idea is that individual CPUs, memory, and other key systems can all be mounted onto a relatively large slice of silicon, called an active interposer, which is thick with interconnects and routing circuits.

 “In some sense if this were to pan out it’s somewhat similar to the integration story—Moore’s Law and everything else—that we’ve been writing for decades,” says Gabriel Loh, Fellow Design Engineer at AMD. “It allows the industry to take a variety of system components and integrate them more compactly and more efficiently together.”

There’s (at least) one problem: Though each chiplet’s own on-chip routing system can work perfectly, once they’re all connected on the interposer’s network, the network can try to route data in a way that creates a traffic jam and winds up seizing up the computer. “A deadlock can happen basically where you have a circle or a cycle of different messages all trying to compete for same sorts of resources causing everyone to wait for everyone else,” Loh explains.

“Each of those individual [chiplets] could be designed so that they never have deadlocks,” says Loh. “But once I put them together, there are now new paths and new routes that no individual had planned for ahead of time.” Trying to avoid these new deadlocks by designing all the chiplets together with a particular interposer network in mind would defeat the advantages of the technique: Chiplets, then, couldn’t be designed and optimized easily by separate teams, and they couldn’t easily be mixed and matched to quickly form new systems.  At the International Symposium on Computer Architecture earlier this month, engineers at AMD presented a potential solution to this impending problem.

Illustration: AMD
A future system might contain a CPU chiplet and several GPUs, all attached to the same piece of network-enabled silicon.

The AMD team found that deadlocks on active interposers basically disappear if you follow a few simple rules when designing on-chip networks. These rules govern where data is allowed to enter and leave the chip, and also restrict which directions it can go when it first enters the chip. Amazingly, if you follow those rules, you can treat everything else on the interposer, including all the other logic chiplets, memory, and the interposer’s own network, as just one node on the network. Knowing that, separate teams of engineers can design chiplets without having to worry about how the networks on other chiplets work, or even how the network on the active interposer works.
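The deadlock Loh describes is a cycle in a channel-dependency graph: each message holds one channel while waiting for the next, around a ring. A toy sketch of that condition (the graph and the removed edge are illustrative; they are not AMD's actual routing rules):

```python
def has_cycle(deps: dict[str, list[str]]) -> bool:
    """Detect a cycle in a channel-dependency graph via depth-first search.

    A cycle means messages can each wait on the next in a ring: a deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for nxt in deps.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True  # back edge: dependency cycle found
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in deps)

# Three channels waiting on each other in a ring can deadlock...
cyclic = {"A": ["B"], "B": ["C"], "C": ["A"]}
# ...but a turn restriction that forbids the C->A dependency breaks the ring.
restricted = {"A": ["B"], "B": ["C"], "C": []}
print(has_cycle(cyclic), has_cycle(restricted))  # True False
```

Turn-model routing schemes work this way in general: forbid enough turns that the dependency graph is provably acyclic, and no message pattern can deadlock.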

It may be some time before this trick is even needed. So-called passive interposers—silicon that contains interconnects but no network circuits—are already in use; AMD has been using one for its Radeon R9 series, for example. But adding an intelligent network to the interposer could lead to a big change in how systems are designed and what they can do.

Original Link

Wafer-Level Low Frequency Noise Measurement Challenges and Solutions

Accurate wafer-level measurement of low-frequency noise, such as 1/f (flicker) noise, random telegraph noise (RTN), and thermal noise, is very challenging, and users often get inaccurate or questionable data from a measurement system. With the increasing impact and importance of these noise components in advanced device and material research, technology development, and integrated circuit design, it is essential for researchers and engineers in related fields to understand the real challenges and the practical solutions for wafer-level noise measurement. Accuracy, resolution, bandwidth, current and voltage biasing range, DUT impedance matching, and measurement efficiency are among the most critical aspects of a noise system.
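To put the resolution challenge in perspective, the thermal (Johnson-Nyquist) noise of even a modest resistance sets a floor the system must resolve, v_n = sqrt(4·k·T·R·B). A quick calculation (the 1 kΩ / 300 K values are illustrative):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_voltage(resistance_ohm: float, temp_k: float = 300.0,
                          bandwidth_hz: float = 1.0) -> float:
    """RMS thermal noise voltage of a resistor: sqrt(4 * k * T * R * B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohm * bandwidth_hz)

# Roughly 4 nV per root-hertz for 1 kOhm at 300 K: a wafer-level noise system
# must resolve flicker and RTN components near or below this floor.
print(thermal_noise_voltage(1e3))
```

Flicker noise rises above this flat floor at low frequencies, which is why low-frequency bandwidth and preamplifier noise matter so much in these systems.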

This webinar will go through the key measurement challenges and interpret the key specifications of a practical noise system, helping the audience understand how to evaluate real system capabilities: how to achieve the best wafer-level resolution and bandwidth, how fast a measurement can go, and how high and how low a current or voltage a system can measure. It showcases the industry’s de facto golden noise system, which covers all of these measurement needs, with unique capabilities for vertical BJT measurement, parallel noise measurement, and noise measurement with a probe card. It also demonstrates wafer-level measurement data under special conditions, e.g., high voltage, ultra-low current, high temperature, and cryogenic conditions. The speaker has over 30 years of experience in flicker noise measurement for semiconductor devices and has designed multiple systems that serve as flicker noise measurement standards in foundries and leading semiconductor companies.

PRESENTER:

Dr. Zhihong Liu, CEO, ProPlus Design Solutions, Inc.

Dr. Zhihong Liu currently serves as the Chairman and Chief Executive Officer of ProPlus Design Solutions, Inc. He was most recently the Corporate Vice President for CSV R&D at Cadence Design Systems, Inc. Dr. Liu co-founded BTA Technology Inc. in 1993 and invented BSIMPro, a leading SPICE modeling product. He also served as the President and CEO of BTA Technology Inc. and later of Celestry Design Technology Inc., which was acquired by Cadence in 2003.

Dr. Liu holds a Ph.D. degree in EE from the University of Hong Kong and co-developed the industry’s first standard model (BSIM3) for IC designs as one of the main contributors at the University of California at Berkeley.

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates!  To request your certificate you will need to get a code. Once you have registered and viewed the webinar send a request to gs-webinarteam@ieeeglobalspec.com for a webinar code. To request your certificate complete the form here: https://fs25.formsite.com/ieeevcep/form112/index.html

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.​

Original Link

How to Reduce Control Panel Costs Using IEC and Busbar

The next evolutionary step in refining control panel design is through the use of busbar. In this informative white paper from Rittal, you’ll learn how the IEC-busbar combination, an internationally accepted power distribution technique, can deliver reduced labor and acquisition costs, easier integration and enhanced safety in control panel systems.

Original Link

Spec Changes Top List of Power Designer’s Challenges

Power designers face many challenges. Vicor research has identified over a dozen challenges, but the biggest one is changes in specifications for the power system during development. 

This concise paper explores:

  • Today’s top power system design obstacles
  • The causes of specification changes
  • Solutions for overcoming spec changes and other challenges

Original Link

Hey Big Spender! (For Semiconductor R&D, That’s Intel)

Close-up of an Intel Core X processor
Photo: Intel

The semiconductor industry in general is increasing its investments in research and development, but it will take a long time to challenge Intel’s dominant role.

That’s the conclusion of a report by IC Insights. The research firm indicated that overall industry spending by the top ten semiconductor companies (see chart, below) was up 6 percent in 2017 over 2016, to US $34 billion. Intel increased its already high level of R&D spending by 3 percent, to more than $13 billion; the Silicon Valley company invests more in R&D annually than the next five companies (Qualcomm, Broadcom, Samsung, Toshiba, and TSMC) combined. MediaTek, Micron, Nvidia, and SK Hynix rounded out the top ten list.

Beyond the top ten, IC Insights reported that eight more companies—NXP, TI, ST, AMD, Renesas, Sony, Analog Devices, and Global Foundries—spent more than $1 billion on semiconductor R&D last year.

The fastest growing R&D budget, the research firm said, is over at Intel’s Silicon Valley neighbor, Nvidia, whose nearly $1.8 billion R&D investment in 2017 topped its 2016 numbers by 23 percent. TSMC, Samsung, and SK Hynix also gave big boosts to their research budgets, while Qualcomm and Toshiba made cuts [see chart, below].

Want to know more about the people, places, and peculiarities of Silicon Valley? Sign up for the biweekly “View from the Valley” newsletter here. (Scroll to the bottom of the list; it’s the last option.)

Original Link

Getting Ahead with Particle Source Simulation

This eSeminar will illustrate the electromagnetic simulation of four different particles sources in CST STUDIO SUITE® 2018: a Pierce-type electron gun, a field emission source, an ion source and a magnetron cathode. It will guide you through the use of different emission models as well as the simulation of the emission in a conformal hexahedral or tetrahedral mesh. In addition, the superposition of external fields will be shown and the setup of an electron source will be demonstrated.

PRESENTER:

Dr. Monika Balk is the market development manager for charged particle dynamics applications. Dr. Balk joined CST in 2005 as a senior application engineer for high-frequency applications. She received her Ph.D. from the Technical University of Darmstadt, Germany, in 2005, with a topic related to accelerator physics. During her work with CST, she has gained experience in charged particle dynamics applications through her support activities worldwide. Dr. Balk has published several papers on magnetrons, TWTs, and general vacuum tube simulation.

Attendance is free. To access the event please register.
NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.​

Please contact  GS-WebinarTeam@ieeeglobalspec.com if you have questions

Original Link

Getting Ahead with RF Ionization Breakdown Simulation

In this eSeminar we will review the fundamental aspects of RF breakdown in gases. In particular, we will discuss the main parameters affecting the discharge breakdown threshold, such as the frequency of operation, the pressure, the temperature, and the device dimensions. Simulations with SPARK3D® coupled with CST STUDIO SUITE® will show the benefits of using a full numerical approach to determine the breakdown power level, in comparison with analytical approaches. For the particular case of narrowband bandpass filters, we will show the advantages of using SPARK3D combined with Filter Designer 3D and CST STUDIO SUITE to determine the breakdown power level in a filter cavity without the need to design the complete microwave filter.

PRESENTER:

Dr. Carlos Vicente currently serves as director of AURORASAT. Dr. Vicente received his Ph.D. in telecommunications engineering in 2005 from the Technical University of Darmstadt, Germany. In his doctoral thesis, he researched high-power effects in communications satellites, such as RF breakdown and passive intermodulation. In 2006, he co-founded Aurora Software and Testing S.L. (AURORASAT), a company devoted to the telecommunications sector that is now part of CST GmbH / Dassault Systèmes.

Attendance is free. To access the event please register.
NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.​

Please contact  GS-WebinarTeam@ieeeglobalspec.com if you have questions

Original Link

Getting Ahead with Optical Simulation

In this eSeminar we will demonstrate how CST STUDIO SUITE® can be used in the design process of photonic components by presenting two detailed examples.

An SOI ring coupler will be used as an example of the design of a photonic integrated circuit (PIC). We will divide the ring coupler into waveguide blocks and use both 3D electromagnetic simulation and a flexible schematic to speed up the simulation task, increase flexibility, and allow the design of PICs too large for electromagnetic simulation alone. In the second part of this eSeminar, we will present a general approach to obtaining dispersion diagrams for photonic crystals with the eigenmode solver, using a 2D tridiagonal photonic crystal as an example.

PRESENTER:

Dr. Christian Kremers joined CST as an application engineer in 2013 with a special focus on optical applications. In 2011, he received his Ph.D. in electrical engineering from the Institute for High-Frequency and Communication Technology (IHCT) at the University of Wuppertal, where he worked on theoretical and numerical aspects of light-matter interaction in nanostructured materials. Afterwards, Dr. Kremers worked as a researcher at the IHCT. His research interests included the physical modeling of charge carrier movement in CMOS technology at THz frequencies.

Attendance is free. To access the event please register.
NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.​

Please contact  GS-WebinarTeam@ieeeglobalspec.com if you have questions

Original Link

What Does Every Engineer Need to Know about 5G?

By Sarah Yost, SDR Solution Marketing, National Instruments

The 3rd Generation Partnership Project (3GPP) is the standards body that publishes the agreed-upon specifications that define our wireless communications standards. It has outlined a timeline for 5G, and the first phase of the 5G definition, called New Radio (NR), passed in early December 2017 (timeline shown below).

Figure 1 – The first specification of the New Radio technology for 5G was ratified in late 2017, with further updates through 2018.

Although NR Phase 1 will be different from the LTE protocol commonly used in today’s mobile communications, there will be similarities as well. The starkest differences between LTE and NR are the carrier bandwidth and operating frequency. In addition, NR adds new beamforming capabilities, in both the analog and digital domains. The table below shows a side-by-side comparison of key specifications for LTE and NR.

Table 1 – LTE vs. 5G capabilities.
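One concrete difference behind the LTE/NR comparison above: LTE uses a fixed 15 kHz subcarrier spacing, while NR defines a scalable numerology, Δf = 15·2^μ kHz, so the OFDM symbol shortens as bandwidth and carrier frequency grow. A quick sketch (the numerology formula follows 3GPP TS 38.211; the mapping of μ values to deployments is a common convention, not mandated):

```python
def subcarrier_spacing_khz(mu: int) -> float:
    """5G NR scalable subcarrier spacing: 15 * 2**mu kHz."""
    return 15.0 * (2 ** mu)

def ofdm_symbol_duration_us(mu: int) -> float:
    """Useful OFDM symbol duration (excluding cyclic prefix) = 1 / subcarrier spacing."""
    return 1e3 / subcarrier_spacing_khz(mu)

for mu in range(4):
    print(mu, subcarrier_spacing_khz(mu), round(ofdm_symbol_duration_us(mu), 2))
# mu=0 matches LTE's 15 kHz spacing; mu=3 (120 kHz) is typical for mmWave carriers.
```

Wider spacing tolerates the larger phase noise of mmWave oscillators, at the cost of a shorter cyclic prefix.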

In order to meet aggressive timelines for 5G deployment, a phased rollout plan has been proposed. This plan consists of two versions of NR: a standalone version and a non-standalone version. The non-standalone version operates with the LTE eNB as the master and a secondary cell of NR gNBs (the 5G NR equivalent of the eNB) connected to the EPC. This is the version of the standard that was ratified in December 2017. The diagram below shows what this will look like:

Figure 2: National Instruments

The non-standalone version of 5G NR exists as a way to take advantage of existing infrastructure during initial deployments of 5G technology. A standalone version also exists, with the goal of being forward compatible with future generations of wireless standards. Standalone networks can coexist with non-standalone networks and operate simultaneously. An exact date for the standalone rollout has not yet been set, but it is a use case being taken into account in the NR Phase 1 design. A diagram of the standalone case can be seen below.

Figure 3: Source: http://www.3gpp.org/ftp//Specs/archive/38_series/38.804/

Aside from the efforts under way in the standardization bodies, Verizon and Korea Telecom (KT) are looking to commercialize pre-5G technologies. Verizon is looking to deploy fixed wireless access based on the 5G Technical Forum (Verizon 5GTF, or V5GTF) physical layer as early as winter 2017. V5GTF will operate at 28 GHz and be used to deliver high-speed internet in last-mile applications, but will not cover the mobile use case. KT, on the other hand, is looking to deploy pre-5G technology for the Winter Olympics being held in February 2018. Specifics of its deployment have not been publicly disclosed.

Operating frequency has been a widely discussed and debated topic for 5G, and clarity is starting to emerge. Below is a summary of the frequencies being considered based on activity in the 3GPP.

Table 2 – Proposed millimeter-wave frequency bands for 5G. *For future study, not part of LTE Release 15

The Importance of mmWave

It’s important to note that sub-6 GHz frequencies will still play an important role in 5G technology.  Companies are looking to increase bandwidth up to five times what is currently available in LTE.  The frequencies listed above represent the majority of those being considered, but the list is not comprehensive.  For example, T-Mobile is planning to use spectrum around 600 MHz in the United States for 5G deployment.

While millimeter wave (mmWave) frequencies for the first phase of NR are better defined, there will still be a need for multiple bands, depending on region.  For instance, Chinese regulatory bodies have proposed 24.75-27.5 and 37-42.5 GHz.  The FCC in the US has proposed 28 GHz and two bands covering 37-40 GHz.  The EU has specifically stated that 28 GHz will not work and is focused on the 24-27 GHz spectrum as well as 38 and 39 GHz.  Korea and Japan are also aligned around 28 GHz.
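Part of why band choice matters so much at mmWave is physics: free-space path loss grows with frequency, FSPL(dB) = 20·log10(4πdf/c). A comparison of a 28 GHz link with a familiar 2.4 GHz one (the 100 m distance is illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

# At 100 m, a 28 GHz link loses roughly 21 dB more than a 2.4 GHz one,
# a gap that beamforming gain is expected to claw back.
print(fspl_db(100, 28e9), fspl_db(100, 2.4e9))
```

The frequency-dependent term is really an antenna-aperture effect; fixed-gain antennas shrink with wavelength, which is exactly why mmWave systems lean on large beamforming arrays.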

Having a better idea of what the standard will look like is a good first step toward understanding the commercialization of 5G, but there are also a number of other challenges, in component and system design as well as in device validation and test, that may impact the speed of deployment of these new technologies.  The addition of technologies like beamforming requires changes in the design of RFICs like power amplifiers and transceivers.  To minimize system loss, antenna arrays are increasingly being integrated into the same chip or module as the PAs and transceivers.  As a result, engineers can no longer test these devices with traditional cabled test methodologies.  Instead, over-the-air testing, which was once taboo, is becoming mandatory.

The Challenges Ahead for Test and Measurement

New Radio, especially for mmWave, is significantly more complex than LTE.  Much of the existing test equipment is not designed to handle the combination of higher carrier frequencies, wider bandwidths, and over-the-air measurements.  In fact, even the simplest measurement tasks, such as taking an RF power measurement, must be rethought for 5G, because what it means to take a calibrated over-the-air measurement is not clearly defined and agreed upon in the industry.

While the standardization of layer 1 and layer 2 is rapidly coming to a close, numerous challenges remain to be solved.  Thus far, 5G has opened a new era in wireless communications, and it’s clear that this is just the beginning.  Now it’s time for RFIC design and the test and measurement industry to take what we’ve learned from wireless researchers and similarly innovate so that 5G can be deployed commercially.

Original Link

Efficient Control of AC Machines using Model-Based Development

Discover a highly efficient approach for the control of industrial-strength electrical drives without needing to perform any manual C programming.

This webinar is aimed at engineers who want to develop better-performing AC drives faster, enabling the use of motors that are smaller, lighter, quieter, more powerful, and that consume less energy. While the Model-Based Development (MBD) approach presented is generally applicable to any type of AC drive, it will be demonstrated using a specific off-the-shelf AC drive controlled by the Texas Instruments (TI) InstaSPIN™ sensorless, three-phase motor solution.

During this webinar, special guest Prof. Duco W. J. Pulle will show how to rapidly develop a fully functional, sophisticated electrical drive through the combined use of InstaSPIN with solidThinking Embed software from Altair which provides:

  • Real-time implementation of the control algorithm
  • Automatic generation of reliable, human-readable code direct from diagrams – no manual C programming or code re-writing required
  • Powerful yet easy-to-use debugging capabilities

PRESENTER:

Dr. Duco W. J. Pulle, CEO EMsynergy, Sydney Australia

Professor Pulle was born in the Netherlands in 1946. He graduated from Eindhoven Technical University in 1979 and received his Ph.D. from the University of Leeds in 1984. He subsequently worked at the Australian Defence Force Academy in power electronics and electrical drives. In 1998 he joined the University of Lund, Sweden, and was afterwards appointed professor at the American University of Sharjah, UAE. He is also the CEO of EMSynergy, a guest professor at ISEA, RWTH Aachen, since 2005, and a member of the Texas Instruments InstaSPIN development team. Professor Pulle has published widely, served on numerous professional advisory boards, and holds several patents.

The focus of his work at EMsynergy over the past 10 years has been advising companies worldwide on the use and application of sensorless electrical drive technology through workshops focused on the needs of industry.

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates!  To request your certificate you will need to get a code. Once you have registered and viewed the webinar send a request to gs-webinarteam@ieeeglobalspec.com for a webinar code. To request your certificate complete the form here: https://fs25.formsite.com/ieeevcep/form112/index.html

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Original Link

Efficient Development of Electrical Drives Using Model-based Development

Discover a highly efficient approach to electrical drive development and concentrate on developing the optimal control system in the shortest time.

This webinar is ‘hands-on’ and aimed at engineers who want to design, develop, and understand modern AC drives. It is designed to significantly shorten the time needed to develop electrical drive applications.

During this webinar, Prof. Duco W. J. Pulle will use model-based design in solidThinking Embed and Texas Instruments InstaSPIN to demonstrate:

  • The development of a fully functional, sophisticated electrical drive. No hand coding required!
  • An application example of sensorless field-oriented control of a three-phase induction machine using a real-time controller
  • User access for control and real-time debugging

PRESENTER:

Dr. Duco W. J. Pulle, CEO EMsynergy, Sydney Australia

Professor Pulle was born in the Netherlands in 1946. He graduated from Eindhoven Technical University in 1979 and received his Ph.D. from the University of Leeds in 1984. He subsequently worked at the Australian Defence Force Academy on power electronics and electrical drives. In 1998 he joined the University of Lund, Sweden, and was later appointed professor at the American University of Sharjah, UAE. He is also the CEO of EMsynergy, a guest professor at ISEA, RWTH Aachen, since 2005, and a member of the Texas Instruments InstaSPIN development team. Professor Pulle has published widely, served on numerous professional advisory boards, and holds several patents.

The focus of his work at EMsynergy over the past 10 years has been to advise companies worldwide on the use and application of sensorless electrical drive technology by way of workshops tailored to the needs of industry.

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need a code: once you have registered and viewed the webinar, send a request to gs-webinarteam@ieeeglobalspec.com for a webinar code, then complete the form here: https://fs25.formsite.com/ieeevcep/form112/index.html

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Original Link

The Test Implications of Packaging Innovation

System-in-package technologies are making a significant impact on the electronics supply chain as the semiconductor industry strives to meet the perpetual demand for higher performance, smaller size, and lower cost. Learn how automated test strategies must evolve to keep pace.

Original Link

Superaccurate GPS Coming to Smartphones in 2018

Photo: Miguel Navarro/Getty Images

We’ve all been there. You’re driving down the highway, just as your navigation app instructed, when Siri tells you to “proceed east for one-half mile, then merge onto the highway.” But you’re already on the highway. After a moment of confusion and perhaps some rude words about Siri and her extended AI family, you realize the problem: Your GPS isn’t accurate enough for your navigation app to tell if you’re on the highway or on the road beside it.

Those days are nearly at an end. At the Institute of Navigation GNSS+ conference in Portland, Ore., in September, Broadcom announced that it is providing customers samples of the first mass-market chip to take advantage of a new breed of global navigation satellite signals. This new chip will give the next generation of smartphones 30-centimeter accuracy as opposed to today’s 5 meters. Even better, it works in a city’s concrete canyons, and it consumes half the power of today’s generation of chips. The chip, the BCM47755, has been included in the design of some smartphones slated for release in 2018, but Broadcom would not reveal which.

GPS and other global navigation satellite systems (GNSSs), such as Europe’s Galileo, Japan’s QZSS, and Russia’s Glonass, allow a receiver to determine its position by calculating its distance from three or more satellites. All GNSS satellites—even the oldest generation still in use—broadcast a message called the L1 signal, which includes the satellite’s location, the time, and an identifying signature pattern. A newer generation broadcasts a more complex signal called L5 at a different frequency in addition to the legacy L1 signal. The receiver essentially uses these signals to fix its distance from each satellite based on how long it takes the signal to go from satellite to receiver.
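The position fix itself is least-squares geometry. The sketch below is a toy two-dimensional illustration, not Broadcom's implementation: it converts time-of-flight measurements into ranges and iterates a Gauss-Newton solver for the receiver position. Real receivers work in three dimensions and also solve for the receiver's clock offset, which is why a fourth satellite is needed in practice.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, ranges, guess=(0.0, 0.0), iters=10):
    """Gauss-Newton least-squares fix from satellite positions and ranges."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diffs = x - sats                       # vectors satellite -> guess
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residual = ranges - dists              # measured minus predicted
        jac = -diffs / dists[:, None]          # d(residual)/d(position)
        step, *_ = np.linalg.lstsq(jac, -residual, rcond=None)
        x = x + step
    return x

# Toy scenario: four "satellites" and a receiver at (3, 4), units in meters.
sats = np.array([[0.0, 20.0], [20.0, 0.0], [20.0, 20.0], [-10.0, -10.0]])
flight_times = np.linalg.norm([3.0, 4.0] - sats, axis=1) / C  # what is measured
ranges = flight_times * C                                     # back to meters
fix = solve_position(sats, ranges)   # converges to ~(3, 4)
```

With clean ranges the solver lands on the true position in a handful of iterations; multipath errors of the kind described below corrupt the `ranges` vector and drag the fix away from the truth.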

Broadcom’s receiver first locks onto the satellite with the L1 signal and then refines its calculated position with the L5. The latter is superior, especially in cities, because it’s much less prone to distortions from multipath reflections than L1.

In a city, the satellite’s signals reach the receiver both directly and by bouncing off one or more buildings. The direct signal and any reflections arrive at slightly different times, and if they overlap, they add up to form a sort of signal blob. The receiver is looking for the peak of that blob to fix the time of arrival. But the messier the blob, the less accurate that fix, and the less accurate the final calculated position will be.

Skinny Signals: To be accurate, receivers need the signal that takes the shortest path from the satellite [green]. Classic L1 satellite signals overlap with their reflections [blue and purple] to form signal “blobs,” which mask the shortest path. L5 signals don’t overlap with their reflections, so receivers can easily find the signal that arrives first.

However, L5 signals are so sharp that the reflections are unlikely to overlap with the most direct signal. The receiver chip can simply ignore any signal after the first one it receives, which is the shortest path. The Broadcom chip also uses information embedded in the phase of the carrier signal to improve accuracy.
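The difference is easy to see numerically. The toy model below uses illustrative pulse shapes and delays, not real GNSS waveforms: a direct arrival plus two echoes. With a wide L1-like pulse the blob's peak lands late, while with a narrow L5-like pulse a simple "first peak above threshold" rule recovers the true arrival time.

```python
import numpy as np

def received(t, arrivals, width):
    """Sum of Gaussian pulses given as (delay, amplitude) pairs."""
    return sum(a * np.exp(-((t - d) / width) ** 2) for d, a in arrivals)

def first_arrival(t, sig, threshold=0.5):
    """Time of the first local maximum above a fraction of the global peak."""
    peak = np.max(sig)
    for i in range(1, len(sig) - 1):
        if sig[i] >= threshold * peak and sig[i-1] <= sig[i] >= sig[i+1]:
            return t[i]
    return t[np.argmax(sig)]

t = np.linspace(0, 10, 10_001)                    # time axis (arbitrary units)
arrivals = [(3.0, 1.0), (3.8, 0.8), (4.5, 0.6)]   # direct at t=3.0, two echoes

wide   = received(t, arrivals, width=1.0)   # L1-like: echoes blur together
narrow = received(t, arrivals, width=0.1)   # L5-like: echoes stay distinct

blob_peak = t[np.argmax(wide)]     # lands well after the true arrival of 3.0
direct    = first_arrival(t, narrow)   # recovers ~3.0
```

The wide-pulse peak comes out near 3.6 because the echoes pull it late; the narrow-pulse detector ignores everything after the first clean peak, which is exactly the behavior the article describes.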

Although there are advanced systems that use L5 on the market now, these are generally for industrial purposes, such as oil and gas exploration. Broadcom’s BCM47755 is the first mass-market chip that uses L1 and L5.

Why is this only happening now? “Up to now there haven’t been enough L5 satellites in orbit,” says Manuel del Castillo, associate director of GNSS product marketing at Broadcom. At this point, there are about 30 such satellites in orbit, counting a set that flies only over Japan and Australia. Even in a city’s “narrow window of sky you can see six or seven, which is pretty good,” del Castillo says. “So now is the right moment to launch.”

Broadcom had to get the improved accuracy to work within a smartphone’s limited power budget. Fundamentally, that came down to three things: moving to a more power-efficient 28-nanometer chip-manufacturing process, adopting a new radio architecture (which Broadcom would not disclose the details of), and designing a power-saving dual-core sensor hub. In total, they add up to a 50 percent power savings over the company’s previous, less accurate chip.

The BCM47755 is just the latest development in a global push for centimeter-level navigation accuracy. Bosch, Geo++, Mitsubishi Electric, and U-blox established a joint venture called Sapcorda Services in August to work toward that goal. Sapcorda seems to depend on using ground stations to measure errors in GPS and Galileo satellite signals due to atmospheric distortions. Those measurements would then be sent to receivers in handsets and other systems to improve accuracy.

Japan’s US $1.9 billion Quasi-Zenith Satellite System (QZSS) also relies on error correction, but it further improves on urban navigation by adding a set of satellites, guaranteeing that one of them is visible directly overhead, even in the densest part of Tokyo. The third of those four satellites launched in August. A fourth was planned for October, and the system is scheduled to come online in 2018.

Competing GNSS chipmakers have not announced mass-market L1/L5 chips for smartphones, but some are working on similar products. U-blox says it is working on a dual-radio chip but would not give details; Qualcomm says it will be delivering products “soon.” STMicroelectronics, as a member of a research consortium called the European Safety Critical Applications Positioning Engine, is working on a mass-market chip for the automotive sector. The chip will take advantage of the Galileo satellites’ new spoof-proof signals, which begin broadcasting in 2018.

Original Link

Key Parameters for Selecting RF Inductors

This application note reviews the key selection criteria an engineer needs to understand in order to properly evaluate and specify RF inductors, including inductance value, current rating, DC resistance (DCR), self-resonant frequency (SRF), Q factor and temperature rating.
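Two of those parameters, SRF and Q, follow directly from a first-order model of the part: the inductance in series with its winding resistance, shunted by a parasitic capacitance. The sketch below uses made-up, datasheet-style values to show the arithmetic; none of the numbers come from the application note.

```python
import math

def srf_hz(L, C_par):
    """Self-resonant frequency: where L resonates with its own parasitic C."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C_par))

def q_factor(f, L, R_series):
    """Unloaded Q at frequency f: inductive reactance over AC series resistance."""
    return 2.0 * math.pi * f * L / R_series

L = 100e-9       # 100 nH inductor (illustrative)
C_par = 0.2e-12  # 0.2 pF parasitic capacitance (assumed)
R = 10.0         # effective AC resistance at 900 MHz, skin effect included (assumed)

f_srf = srf_hz(L, C_par)        # ~1.1 GHz; the part is only useful well below this
q_900 = q_factor(900e6, L, R)   # ~57 at 900 MHz
```

The effective resistance at RF is far higher than the DC resistance because of skin and proximity effects, which is why DCR alone is a poor predictor of Q.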

Original Link

Learn the Basics of Power Amplifier and Front End Module Measurements

The power amplifier (PA) – as either a discrete component or part of an integrated front end module (FEM) – is one of the most integral RF integrated circuits (RFICs) in the modern radio. Download this white paper to learn the basics of testing RF PAs and FEMs via an interactive white paper with multiple how-to videos.

Original Link

Super-Accurate GPS Chips Coming to Smartphones in 2018

We’ve all been there. You’re driving down the highway, just as Google Maps instructed, when Siri tells you to “Proceed east for one-half mile, then merge onto the highway.” But you’re already on the highway. After a moment of confusion and perhaps some rude words about Siri and her extended AI family, you realize the problem: Your GPS isn’t accurate enough for your navigation app to tell if you’re on the highway or on the road beside it.

Those days are nearly at an end. At the ION GNSS+ conference in Portland, Ore., today Broadcom announced that it is sampling the first mass-market chip that can take advantage of a new breed of global navigation satellite signals and will give the next generation of smartphones 30-centimeter accuracy instead of today’s 5 meters. Even better, the chip works in a city’s concrete canyons, and it consumes half the power of today’s generation of chips. The chip, the BCM47755, has been included in the design of some smartphones slated for release in 2018, but Broadcom would not reveal which.

GPS and other global navigation satellite services (GNSSs) such as Europe’s Galileo, Japan’s QZSS, and Russia’s Glonass allow a receiver to determine its position by calculating its distance from three or more satellites. All GNSS satellites—even the oldest generation still in use—broadcast a message called the L1 signal that includes the satellite’s location, the time, and an identifying signature pattern. A newer generation broadcasts a more complex signal called L5 at a different frequency in addition to the legacy L1 signal. The receiver essentially uses these signals to fix its distance from each satellite based on how long it took the signal to go from satellite to receiver.

Broadcom’s receiver first locks on to the satellite with the L1 signal and then refines its calculated position with L5. The latter is superior, especially in cities, because it is much less prone to distortions from multipath reflections than L1.

[Illustration: Broadcom. Wide L1-like pulses overlap with their reflections into a single blob; narrow L5-like pulses and their reflections remain separate.]

In a city, the satellite’s signals reach the receiver both directly and by bouncing off one or more buildings. The direct signal and any reflections arrive at slightly different times, and if they overlap, they add up to form a sort of signal blob. The receiver is looking for the peak of that blob to fix the time of arrival. But the messier the blob, the less accurate that fix, and the less accurate the final calculated position will be.

However, L5 signals are so brief that the reflections are unlikely to overlap with the direct signal. The receiver chip can simply ignore any signal after the first one it receives, which is the direct path. The Broadcom chip also uses information in the phase of the carrier signal to further improve accuracy.

Though there are advanced systems that use L5 on the market now, these are generally for industrial purposes, such as oil and gas exploration. Broadcom’s BCM47755 is the first mass-market chip that uses both L1 and L5.

Why is this only happening now? “Up to now there haven’t been enough L5 satellites in orbit,” says Manuel del Castillo, associate director of GNSS product marketing at Broadcom. At this point, there are about 30 such satellites in orbit, counting a set that only flies over Japan and Australia. Even in a city’s “narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”

[Image: Broadcom. The number of L5-capable satellites from the United States, Japan, and the European Union has risen steadily.]

Broadcom had to get the improved accuracy to work within a smartphone’s limited power budget. Fundamentally, that came down to three things: moving to a more power-efficient 28-nanometer chip manufacturing process, adopting a new radio architecture (which Broadcom would not disclose details of), and designing a power-saving dual-core sensor hub. In total, they add up to a 50 percent power savings over Broadcom’s previous, less accurate chip. 

In smartphones, sensor hubs take the raw data from the system’s sensors and process it to provide only the information the phone’s applications processor needs, thereby taking the computational burden and its accompanying power draw off of the applications processor. For instance, a sensor hub might monitor the accelerometer looking for signs that you had flipped your phone’s orientation from vertical to horizontal. It would then just send the applications processor the equivalent of the word “horizontal” instead of a stream of complex accelerations.
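In code, that reduction might look like the toy below. The axis convention, the classification rule, and the sample values are invented for illustration; this is not Broadcom's firmware.

```python
# Toy model of the sensor-hub behavior described above: watch raw
# accelerometer samples and forward an event to the applications processor
# only when the orientation actually changes.

def orientation(ax, ay, az):
    """Classify a gravity vector (m/s^2); az is unused in this 2-axis toy."""
    return "vertical" if abs(ay) >= abs(ax) else "horizontal"

def hub_events(samples):
    """Reduce a raw sample stream to the change events the hub would send."""
    events, last = [], None
    for ax, ay, az in samples:
        state = orientation(ax, ay, az)
        if state != last:
            events.append(state)
            last = state
    return events

# 100 raw samples in, two words out: the phone is upright, then on its side.
stream = [(0.1, 9.8, 0.2)] * 50 + [(9.7, 0.3, 0.4)] * 50
events = hub_events(stream)   # ['vertical', 'horizontal']
```

The point is the data reduction: the applications processor sees two events instead of a hundred acceleration triples, and can stay asleep in between.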

The sensor hub in the BCM47755 takes advantage of ARM’s “big.LITTLE” design—a dual-core architecture in which a simple low-power processor core is paired with a more complex core. The low-power core, in this case an ARM Cortex-M0, handles simple continuous tasks. The more powerful but power-hungry core, a Cortex-M4, comes in only when it’s needed.

The BCM47755 is just the latest development in a global push for centimeter-level navigation accuracy. Bosch, Geo++, Mitsubishi Electric, and U-blox established a joint venture called Sapcorda Services in August to provide centimeter-level accuracy. Sapcorda seems to depend on using ground stations to measure errors in GPS and Galileo satellite signals due to atmospheric distortions. Those measurements would then be sent to receivers in handsets and other systems to improve accuracy.

Japan’s US $1.9 billion Quasi-Zenith Satellite System (QZSS) also relies on error correction, but it additionally improves on urban navigation by adding a set of satellites that guarantees one is visible directly overhead even in the densest part of Tokyo. The third of those four satellites launched in August. A fourth is planned for October, and the system is scheduled to come online in 2018.

Original Link

Shielding Effectiveness of Expanded Metal Foils (EMFs)

Under normal operation, all electronic equipment radiates some amount of electromagnetic energy. At the same time, all electronic equipment is (to some degree) susceptible to interference from outside sources of electromagnetic energy.

Electromagnetic compatibility (EMC) is the branch of electrical engineering concerned with the unintentional generation, propagation and reception of electromagnetic energy which may cause unwanted effects such as electromagnetic interference (EMI) or even physical damage in operational equipment. The goal of EMC is the correct operation of different equipment in a common electromagnetic environment.

Original Link

Capacitor Selection for Switch Mode Power Supply Applications

Faced with the availability of multiple capacitor options for use in high-reliability SMPS applications, engineers need to consider performance characteristics and long-term reliability when making their selection. This paper provides information on the more popular choices, including electrolytic, tantalum, film, and ceramic capacitors, compares their key attributes, and provides insight and recommendations related to their possible selection for high-reliability SMPS applications.
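One attribute that separates those families sharply is equivalent series resistance (ESR), because ripple current dissipates power in it as heat. The ESR figures below are illustrative round numbers, not from any datasheet.

```python
# Why ESR is a headline attribute when comparing SMPS capacitor families:
# ripple current dissipates I_rms^2 * ESR as heat inside the capacitor, and
# the same ESR adds a resistive step to the output ripple voltage.

def esr_loss_w(i_ripple_rms, esr_ohm):
    """Self-heating power from ripple current flowing through the ESR."""
    return i_ripple_rms ** 2 * esr_ohm

i_rms = 2.0  # amps of ripple current in an output capacitor (assumed)

losses = {name: esr_loss_w(i_rms, esr)
          for name, esr in [("electrolytic", 0.150), ("tantalum", 0.060),
                            ("film", 0.010), ("ceramic", 0.005)]}
# electrolytic: 0.6 W of self-heating; ceramic: 0.02 W at the same ripple
```

A thirtyfold difference in dissipation at the same ripple current is why capacitor family, not just capacitance value, drives reliability in a switch-mode supply.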

Original Link

Overview of CST Filter Design Technology

Modern communication systems are placing ever more stringent demands on the frequency spectrum, and filters are required to meet them. The design and analysis of such devices can be challenging, and simulation can play a vital part in the development process.

In this webinar, we will give an overview of the different capabilities that CST STUDIO SUITE offers for filter design. We will discuss the various tools that can be used for synthesis and tuning, as well as the multiphysics simulation that is required for power handling analysis.

PRESENTER:

Theunis Beukman

Theunis Beukman received BEng, MScEng (cum laude), and PhD degrees in Electrical and Electronic Engineering from the University of Stellenbosch, South Africa, in 2009, 2011, and 2015, respectively. During his Master’s he worked on tunable wideband filters for the Square Kilometre Array (SKA) project and spent several months as a visiting researcher with the filter group at Heriot-Watt University. After finishing his PhD, he started working as an application engineer at CST AG in Darmstadt, Germany.

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Please contact GS-WebinarTeam@ieeeglobalspec.com if you have questions.

Original Link

Sequential Peeling: A Model-Based Approach to Structure Identification and De-embedding

Dozens of different network extraction and de-embedding methods exist for different measurement environments. Our new white paper discusses one of the model-based approaches, which uses position information about structures in the fixture, along with an assumed impedance/admittance model, to extract the parameters of a particular structure. Since position information is used, one structure can be de-embedded after another. In a sense, this peels away one layer of the fixture at a time, which is the source of the method’s name.
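A stripped-down illustration of the idea, using ABCD (chain) matrices rather than the paper's actual formulation: cascaded two-ports multiply, so a structure whose position and model are known can be removed by multiplying the measurement by the inverse of that structure's matrix, one layer at a time, from the outside in. The element values below are invented; real de-embedding works on measured S-parameters across frequency.

```python
import numpy as np

def series_z(z):
    """ABCD matrix of a series impedance element."""
    return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)

def shunt_y(y):
    """ABCD matrix of a shunt admittance element."""
    return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)

# "Measurement": a fixture lead (series 2+1j ohm), then a shunt stray (0.01 S),
# then the device under test, modeled here as a simple series 50-ohm element.
dut = series_z(50.0)
measured = series_z(2 + 1j) @ shunt_y(0.01) @ dut

# Peel layer by layer using position information: outermost structure first.
step1 = np.linalg.inv(series_z(2 + 1j)) @ measured   # remove the lead
step2 = np.linalg.inv(shunt_y(0.01)) @ step1         # remove the stray
# step2 now equals the DUT's ABCD matrix.
```

Order matters: each inversion removes the structure closest to the measurement plane, which is why the method needs to know where each structure sits in the fixture.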

Original Link

Bespoke Processors: A New Path to Cheap Chips

Photo: iStockphoto

“Processors are overdesigned for most applications,” says Rakesh Kumar, an associate professor of electrical and computer engineering at the University of Illinois. It’s a well-known and necessary truth: In order to have programmability and flexibility, there’s simply going to be more stuff on a processor than any one application will use. That’s especially true of the type of ultralow-power microcontrollers that drive the newest embedded computing platforms such as wearables and Internet of Things sensors. These are often running one fairly simple application and nothing else (not even an operating system), meaning that a large fraction of the circuits on a chip never, ever see a single bit of data.

Kumar, University of Minnesota assistant professor John Sartori (formerly a student of Kumar’s), and their students decided to do something about all that waste. Their solution is a method that starts by looking at the design of a general-purpose microcontroller. They came up with a rapid way of identifying which individual logic gates are never engaged for the application it’s going to run. They then strip away all those excess gates. The result is what Kumar calls a “bespoke processor.” It’s a physically smaller, less-complex version of the original microcontroller, designed to perform only the application needed. Kumar and Sartori detailed the bespoke processor project in June at the 44th International Symposium on Computer Architecture, in Toronto.

“Our approach was to figure out all the hardware that an application is guaranteed not to use, irrespective of the input,” says Kumar. What’s left is “a union, or superset, of all possible paths that data can take. Then we take away the hardware that’s not touched.”

Starting with an openMSP430 microcontroller, they produced bespoke designs meant to perform applications such as the fast Fourier transform, autocorrelation, and interpolation filtering. These designs had fewer than half of the logic gates that were part of the original microcontroller design. In fact, none of the 15 common microcontroller apps they studied needed more than 60 percent of the gates. On average, the resulting chips were 62 percent smaller and consumed 50 percent less power. By exploiting the timing savings from signals traveling a shorter distance, the average power savings jumped to 65 percent.
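The flavor of the analysis can be shown on a toy netlist. The gates, the "application's" input set, and the never-changes criterion below are drastic simplifications of the authors' method, which works on full microcontroller netlists and formally guarantees the result over every input the application can produce.

```python
from itertools import product

# Toy version of the gate-cutting idea: drive a small netlist with every
# input the application can produce, record which gates ever change value,
# and mark the rest as removable.

NETLIST = {                       # gate name: (operation, input wires)
    "n1": ("AND", ("a", "b")),
    "n2": ("OR",  ("b", "c")),
    "n3": ("XOR", ("n1", "n2")),
    "n4": ("AND", ("c", "c")),    # only ever active when c can go high
    "out": ("OR", ("n3", "n4")),
}

def evaluate(inputs):
    """Evaluate every gate for one input assignment (dict of 0/1 wires)."""
    wires = dict(inputs)
    for gate, (op, (x, y)) in NETLIST.items():
        a, b = wires[x], wires[y]
        wires[gate] = {"AND": a & b, "OR": a | b, "XOR": a ^ b}[op]
    return wires

def unused_gates(app_inputs):
    """Gates whose output never changes across every input the app can give."""
    seen = {g: set() for g in NETLIST}
    for inp in app_inputs:
        wires = evaluate(inp)
        for g in NETLIST:
            seen[g].add(wires[g])
    return {g for g, vals in seen.items() if len(vals) == 1}

# This "application" never drives c high, so the gate fed only by c is dead.
app = [{"a": a, "b": b, "c": 0} for a, b in product((0, 1), repeat=2)]
dead = unused_gates(app)          # {"n4"}
```

A gate whose output is constant over every reachable input can be replaced by a wire tied to that constant, which is the stripping step the researchers describe; running the same analysis over the full input space leaves nothing to cut.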

Image: University of Minnesota and University of Illinois. In an analysis of the logic gates used in two applications—intFilt and Scrambled intFilt—on an openMSP430 microcontroller, the gray dots are gates not used by either application. Red dots are gates unused only by Scrambled intFilt.

“It’s surprising,” Sartori says. “Most people think that in such a small, simple processor, pretty much everything gets used all the time; but for a given application, there’s actually a lot of logic that can be completely eliminated, and the software still works perfectly.”

The method also works if you want the processor to perform two or more applications; it can even handle an operating system plus an application. When run by itself, the real-time OS they tested, FreeRTOS, left 57 percent of the gates completely untouched. Though no pairing of FreeRTOS with any of the 15 apps left fewer than 27 percent of the gates unused, Kumar points out that these applications typically run “bare metal”—no operating system needed.

Why not just order up an ASIC (application-specific integrated circuit)? In a word: cost. Embedded microcontrollers are used for such low-volume, low-profit-margin purposes that it would cost too much to do the ground-up design and testing needed for an ASIC, says Kumar. By starting with a standard microcontroller design, the process is simplified and cheaper.

It’s like “a black box,” says Kumar. “Input the app, and it outputs the processor design.”

It might not be that simple, says Tom Hackenberg, principal analyst for embedded processors at market research firm IHS. Testing, validation, and other costs encountered on the road to putting out a new application-specific chip will still remain. If the technique can’t reduce the cost of the design process enough, cheap microcontrollers—which average about US $1 but can be as little as 25 cents—will still be the winning solution.

Still, if the concept “can do what they’re saying it can do, then it might be a much more simple process to design a very application-specific processor,” says Hackenberg.

Research engineers at ARM, in Cambridge, England, are hoping it is that simple. They’ve been working hard on a project called Plastic ARM—an attempt to construct 1-cent disposable microcontrollers on plastic using printed electronics. Their first attempt occupied 7.5 square centimeters. It took a full year of hard design work to shrink it below their 1 cm² target and to customize it for their application, says the project’s leader, James Myers. This summer, with the help of one of Sartori’s students, they plan to use the bespoke processor technique to see if they can achieve the same or better results with a fraction of the effort.

“With printed electronics, there should be a lower barrier to entry” than to silicon, he says. “There should be more opportunity for application-specific designs, but not if the design costs stay the same. What I want is to reduce the design cost as well as the fabrication cost of these things. If you can automatically generate a bespoke version of the processor…then that’s a huge benefit.”

A version of this article appears in our Tech Talk blog.

Original Link

Testing the Tester: Self-Test Methods for Periodic Automatic Test Equipment Verification

Periodically ensuring that automated test equipment (ATE) is functioning properly is a critical part of product manufacturing, and is often required in regulated industries. Properly verified ATE helps maintain consistent test results and reduces downtime. Automated self-test is ideal for periodic verification of test system performance, and can be used to get a system back online faster after system changes or failures. In this session, Bloomy’s ATE Product Manager will discuss three of the most common methods of self-testing your ATE: system self-test ITAs (interchangeable test adapters, or fixtures), loopback units under test (UUTs), and golden-sample UUTs. By understanding the test coverage, advantages, disadvantages, and best practices for each method, you can decide which “test the tester” method is best for your application.
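Of the three, the loopback approach is the easiest to sketch: route the tester's own stimulus back into its measurement path and require the readback to agree within tolerance. The instrument objects below are stubs standing in for vendor driver calls; the structure of the check, not any particular API, is the point.

```python
# Minimal sketch of a loopback self-test. StubSource/StubMeter simulate a
# source and a meter wired back-to-back; in a real system these would be
# replaced by calls into the instrument drivers.

class StubSource:
    def __init__(self):
        self.level = 0.0
    def set_voltage(self, volts):
        self.level = volts

class StubMeter:
    def __init__(self, source, gain_error=0.001):
        self.source, self.gain_error = source, gain_error
    def read_voltage(self):
        return self.source.level * (1.0 + self.gain_error)

def loopback_self_test(source, meter, points, tolerance=0.01):
    """Drive each test point through the loopback and check the readback."""
    failures = []
    for v in points:
        source.set_voltage(v)
        measured = meter.read_voltage()
        if abs(measured - v) > tolerance:
            failures.append((v, measured))
    return failures

src = StubSource()
dmm = StubMeter(src)
failed = loopback_self_test(src, dmm, points=[-10.0, -5.0, 0.0, 5.0, 10.0])
# An empty failure list means the source/measure chain agrees within tolerance.
```

In practice the point list would span the system's full range, and any failure record would localize the fault to the stimulus or measurement path before a real UUT is ever connected.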

 PRESENTER:

Grant Gothing, ATE Product Manager, Bloomy, Inc.

Grant Gothing is Automated Test Equipment (ATE) Manager at Bloomy, Inc. (Windsor, CT). He is responsible for hardware and software design and standardization for the company’s ATE platform, the UTS.  In his 10 years at Bloomy, Grant has developed dozens of automated test systems for a wide variety of industries, and has held positions in project and product engineering, management, sales, and marketing.  Throughout these roles, he has continuously improved and standardized the design and build of Bloomy’s automated test offerings.  Grant holds an M.S. in Mechanical Engineering from Virginia Tech, where he focused on autonomous vehicles.  He is a National Instruments Certified LabVIEW Architect and Certified TestStand Architect.
 

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need a code: once you have registered and viewed the webinar, send a request to gs-webinarteam@ieeeglobalspec.com for a webinar code, then complete the form here: https://fs25.formsite.com/ieeevcep/form112/index.html

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Please contact gs-webinarteam@ieeeglobalspec.com if you have questions.

Original Link

Low Current / Ultra-High Resistance Measurement Fundamentals

Performing current vs. voltage characterization on devices and materials at very low current levels presents a unique set of measurement challenges. Normal measurement issues such as noise, transient signals, and cabling and fixturing parasitics are much harder to solve when dealing with currents in the femtoamp range. In addition, many cutting-edge materials have extremely high resistances that conventional DMMs and source/measurement units (SMUs) cannot measure. In this seminar, Keysight will explain the measurement techniques, tricks, and tools necessary to measure currents down to 0.01 femtoamps and resistances up to 10 petaohms with both high measurement confidence and repeatability.
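Some quick arithmetic shows why those limits are hard targets. The cable-capacitance figure below is an assumed, typical-ish value for illustration, not a number from Keysight.

```python
# Back-of-envelope numbers behind the measurement range quoted above.

E = 1.602176634e-19      # elementary charge, coulombs

i_at_1v = 1.0 / 10e15            # 1 V across 10 petaohms -> 0.1 fA
electrons_per_s = 0.01e-15 / E   # 0.01 fA is only ~62 electrons per second

# Settling: the device resistance and cable capacitance form an RC filter.
# With 100 pF of cable capacitance (assumed) across a 10-petaohm device,
# one time constant is over a week -- hence guarding and special fixturing.
tau_s = 10e15 * 100e-12          # seconds
tau_days = tau_s / 86_400        # ~11.6 days
```

At these levels a single stray electron per millisecond swamps the signal, and naive cabling would never settle, which is why the seminar's tricks (guarding, triax fixturing, noise averaging) matter.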

 PRESENTER:

Alan Wadsworth, Marketing Brand Manager, Keysight

Alan Wadsworth is currently the Marketing Brand Manager for Keysight’s semiconductor and power products.  He has over 30 years of experience in the semiconductor industry in both design and test, and is the author of Keysight’s Parametric Measurement Handbook, which contains comprehensive information on semiconductor parametric test and measurement techniques.
 
Alan joined Hewlett Packard in 1991 and worked for five years as the SRAM engineer in HP’s Memory Technology Center. Previously, he worked as an integrated circuit designer at Signetics/Philips, where he designed circuits in both bipolar and BiCMOS technologies. He holds bachelor’s and master’s degrees in electrical engineering from the Massachusetts Institute of Technology and an MBA from Santa Clara University.

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Please contact gs-webinarteam@ieeeglobalspec.com if you have questions.

Original Link

Bespoke Processors: Cheap, Low-Power Chips That Only Do What’s Needed

“Processors are overdesigned for most applications,” says University of Illinois electrical and computer engineering professor Rakesh Kumar. It’s a well-known and necessary truth: In order to have programmability and flexibility, there’s simply going to be more stuff on a processor than any one application will use. That’s especially true of the type of ultralow-power microcontrollers that drive the newest embedded computing platforms such as wearables and Internet of Things sensors. These are often running one fairly simple application and nothing else (not even an operating system), meaning that a large fraction of the circuits on a chip never, ever see a single bit of data.

Kumar, University of Minnesota assistant professor John Sartori (formerly a student of Kumar’s), and their students decided to do something about all that waste. Their solution is a method that starts by looking at the design of a general-purpose microcontroller. They identify which individual logic gates are never engaged for the application it’s going to run, and strip away all the excess gates. The result is what Kumar calls a “bespoke processor.” It’s a physically smaller, less complex version of the original microcontroller that’s designed to perform only the application needed.

Kumar and Sartori will be detailing the bespoke processor project at the 44th International Symposium on Computer Architecture, in Toronto next week.  

“Our approach was to figure out all the hardware that an application is guaranteed not to use irrespective of the input,” says Kumar. What’s left is “a union, or superset, of all possible paths that data can take. Then we take away the hardware that’s not touched.”

Starting with an openMSP430 microcontroller, they produced bespoke designs meant to perform applications such as the fast Fourier transform, autocorrelation, and interpolation filtering with fewer than half of the logic gates in the original microcontroller design. In fact, none of the 15 common microcontroller apps they studied needed more than 60 percent of the gates. On average, the resulting chips would be 62 percent smaller and consume 50 percent less power. By exploiting the timing savings from signals traveling a shorter distance, the average power savings jumps to 65 percent.

“It’s surprising,” Sartori says. “Most people think that in such a small, simple processor pretty much everything gets used all the time; but for a given application, there’s actually a lot of logic that can be completely eliminated, and the software still works perfectly.”

Illustration: University of Illinois/ACM An analysis of the gates not used for two applications—intFilt and Scrambled intFilt—on an openMSP430 microcontroller. Grey dots are gates not used by either application. Red dots are gates unused by only that application.

The method also works if you want the processor to perform two or more applications, and it can even handle an operating system plus application. When run by itself, the real-time OS they tested, FreeRTOS, left 57 percent of gates completely untouched. Though no pairing of FreeRTOS with any of the 15 apps left fewer than 27 percent of the gates unused, Kumar points out that these applications typically run “bare metal”—no operating system needed.

Why not just order up an ASIC (application-specific integrated circuit)? In a word: cost. These embedded microcontrollers are used for such low-volume, low-profit-margin purposes that the ground-up design and testing needed for an ASIC would cost too much, says Kumar. Starting from a standard microcontroller design makes the process simpler and cheaper.

It’s like “a black box,” says Kumar. “Input the app, and it outputs the processor design.”

This post was updated on 16 June to add comment from John Sartori.

Original Link

A Circuit That Sees Radiation Strikes Could Keep Errors at Bay

For a short time, it looked like the world’s electronics would be safe (well, safer) from radiation. With the switch from planar transistors to FinFETs, ICs suddenly became naturally resistant to having their bits flipped by a neutron splashing into them and blasting loose a small cloud of charge. But two things are now making them vulnerable again: One is the move to operating voltages so low that it’s easier for a pulse of radiation-induced charge to flip a transistor on or off. The other is the unprecedented density of those transistors, which gives radiation more targets than ever.

Engineers at the University of Minnesota are nearing a solution that could help bring down the rate of so-called logic soft errors—signals temporarily flipped by a radiation strike. It’s a circuit called a back-sampling chain that has, for the first time, allowed them to reconstruct the pulse—called a single-event transient—that results from a radiation strike. In research to be presented in June at the IEEE VLSI Symposia in Kyoto, the team shows that the back-sampling chain (BSC) circuit—a kind of cross-connected chain of inverters—can detect orders of magnitude more strikes than previous approaches could.

Photo: Chris Kim/University of Minnesota An array of back-sampling chain circuits was tested in a spray of neutrons.

“There hasn’t been a way to visualize these strike pulses,” says Chris H. Kim, the electrical engineering professor at the University of Minnesota who led the research. “The back-sampling chain circuit can measure the response of the circuit to the radiation event. We can also back calculate the strike current induced by each particle.”
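One way to picture how a chain of inverters can capture a fleeting transient: the pulse travels down the chain at one stage per inverter delay, so sampling every stage output at a single instant freezes the waveform in space, and the run of flipped stages encodes the pulse width. The toy model below works under exactly that assumption; the stage count, the 10 ps per-stage delay, and all the function names are invented for illustration, and the published BSC circuit is considerably more elaborate.

```python
# Toy model: a transient pulse propagating down an inverter delay chain is
# "frozen" by sampling all stage outputs at once; the number of flipped
# stages times the per-stage delay recovers the pulse width.
# (Hypothetical parameters -- not the published circuit.)
T_INV_PS = 10  # assumed per-stage inverter delay, in picoseconds

def snapshot(width_ps, age_ps, stages=64):
    """Sampled chain state: stage i sees the input as it was i*T_INV_PS ago."""
    state = []
    for i in range(stages):
        t = age_ps - i * T_INV_PS          # input-referred time at stage i
        inside_pulse = 0 <= t < width_ps   # was the pulse high at that time?
        # Inverters alternate polarity, so even stages track the input and
        # odd stages track its complement.
        state.append(int(inside_pulse) ^ (i % 2))
    return state

def measured_width_ps(state):
    """Recover pulse width from the flipped-stage run in a snapshot."""
    wave = [bit ^ (i % 2) for i, bit in enumerate(state)]  # undo polarity
    return sum(wave) * T_INV_PS

snap = snapshot(width_ps=50, age_ps=300)
print(measured_width_ps(snap))  # → 50
```

With a stage snapshot in hand, back-calculating the strike current that Kim describes amounts to fitting a current pulse whose simulated circuit response reproduces the measured width.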

Kim’s group is using the neutron strike data collected from the BSC circuit to develop a design tool that will help engineers avoid low-voltage designs that would be too sensitive to soft errors. There are no tools right now “that combine radiation data with circuit analysis and system design,” says Kim.

The BSC could also be integrated into chips destined for complex systems in spacecraft and other things that operate in harsh environments. The goal here is to collect data on the system in operation. “Once you have that data, you can map it to system level failure and use it to fine tune the supply voltage or clock frequency,” says Kim.

Original Link