Archive | Science & Technology

An Approach to 20nm IC Design

10 Sep

Last month at DAC I learned how IBM, Cadence, ARM, GLOBALFOUNDRIES and Samsung approach the challenges of SoC design, EDA design and fabrication at the 20nm node. Today I followed up by reading a white paper on 20nm IC design challenges authored by Cadence, a welcome relief to the previous marketing mantra of EDA 360.

Here’s a quick overview of the challenges, approaches to overcome the challenges and then the impact.

Challenge: Lithography
Approach: Double Patterning Technology (DPT)
Impact: More masks, mask costs of $5M to $8M, new DPT-aware EDA tools, new DPT-aware cells and IP

Challenge: Variability
Approach: Early analysis of Layout Dependent Effects (LDE)
Impact: New EDA tool flows, new design rules

Challenge: Design complexity
Approach: IP re-use, more verification, mixed-signal design, 3D ICs with TSVs
Impact: Longer schedules, AMS tools required, SoC design costs of $120M to $500M

Benefits of 20nm

With these challenges at 20nm versus 28nm, the goal is to exploit the technology with products that offer:

  • 30-50% performance improvement
  • 30% dynamic power savings
  • 50% area reduction
  • Up to 12 billion transistors


A picture is worth 1,000 words, so here's the story of why we need two masks for a single layer in order to resolve the patterns in silicon:

On the left is what happens when using a single mask to produce small patterns at 20nm dimensions, while on the right is the much improved manufacturing result when using two masks for the same layer. With DPT, each complex layout pattern is separated with enough spacing to allow the 193nm lithography equipment to resolve it adequately without interference from neighboring patterns. Only the lowest mask layers at 20nm require this special treatment, but it will cost you more in mask expenses.

All of the EDA tools that create IC layout must be updated to take into account how to create DPT patterns. Once you've created something like a standard cell, you then need to ensure that when that cell is placed, flipped or mirrored, it is still DPT compliant:

In the above layout the layer has been colorized into red and green, denoting the two different masks required for DPT. With new automation the IC designer doesn’t have to think about how to make their IC layout DPT compliant, because a manual coloring would be too tedious and error prone.
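Conceptually, that automated coloring is a graph two-coloring problem: any two features closer than the minimum same-mask spacing must land on different masks, and the layout is decomposable only if the resulting conflict graph is bipartite. Here is a minimal sketch (the spacing value and point-based geometry are illustrative assumptions, not a real foundry rule):

```python
from collections import deque

MIN_SAME_MASK_SPACING = 64  # nm; hypothetical value, not a real foundry rule

def two_color(features, spacing):
    """Assign each feature to one of two masks (0 or 1).

    features: list of (x, y) centers, a simplified stand-in for polygons.
    Any two features closer than `spacing` conflict and must go on
    different masks. Returns a color list, or None if the conflict graph
    contains an odd cycle (the layout is not DPT-decomposable and must
    be re-drawn)."""
    n = len(features)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = features[i], features[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < spacing:
                adj[i].append(j)
                adj[j].append(i)
    color = [None] * n
    for start in range(n):          # BFS each connected component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]  # opposite mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: no legal two-mask assignment
    return color
```

Three features packed into a triangle tighter than the spacing form an odd cycle and cannot be colored – exactly the situation a DPT-aware place-and-route tool must avoid creating in the first place.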

Connecting the standard cells together is done by a Place and Route tool, so it needs to be updated to become DPT aware as well.

Extraction tools read these layouts to create a netlist for either circuit simulation or static timing analysis; with DPT awareness they take into account the mask offsets that change the parasitic R, C and L values.

The DRC tools likewise have to take into account the hundreds of new rules at 20nm.

The foundries want high yield, and to ensure that yield they characterize the silicon and produce layout rules that layout and circuit designers must follow. At 20nm you can expect about 5,000 design rule checks (DRC). What's new at 20nm compared to 28nm are about 40 rules for DPT and 400 rules for things like layout directional orientation, transistor proximity and inter-digitation patterns.
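At its core, each of those thousands of checks is a geometric predicate run over the layout database. A toy minimum-spacing check might look like the sketch below (the rectangle representation and rule value are hypothetical; production DRC decks are vastly more elaborate):

```python
def check_min_spacing(rects, min_space):
    """Report pairs of same-layer rectangles closer than `min_space`.

    rects: list of (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Returns (i, j, gap) tuples for each violating pair. Touching or
    overlapping shapes (gap == 0) are skipped here; a real deck handles
    those with separate enclosure/overlap rules."""
    violations = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            a, b = rects[i], rects[j]
            dx = max(b[0] - a[2], a[0] - b[2], 0)  # horizontal gap
            dy = max(b[1] - a[3], a[1] - b[3], 0)  # vertical gap
            gap = (dx * dx + dy * dy) ** 0.5       # edge/corner distance
            if 0 < gap < min_space:
                violations.append((i, j, gap))
    return violations
```

A real DRC engine replaces the quadratic pair loop with spatially indexed region queries, but the rule itself reduces to this kind of distance test repeated a few thousand times over.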

Variations in the actual layout across the die need to be taken into account and simulated prior to tape-out, which means more circuit simulation, static timing analysis and statistical timing analysis. At 20nm there can be more coupling between nets, which in turn impacts sensitive analog circuits and memory cells.

One Layout Dependent Effect is how the threshold voltage (Vt) of a MOS device changes based on its proximity to the well layer:

In this diagram the MOS transistor is shown at the top, and as the transistor is placed closer to the well the value of Vt goes up. This variation in Vt impacts the gain and speed of the transistor, so on the right is a chart showing the transistor's gain and the design rule manual (DRM) recommendation for a safe active-to-NWell spacing.
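The shape of that relationship can be captured with a simple decaying-shift model. The sketch below is purely illustrative – the maximum shift and decay constant are made-up parameters, whereas a real DRM recommendation comes from silicon characterization:

```python
import math

def vt_shift_mv(spacing_nm, dvt_max_mv=30.0, decay_nm=200.0):
    """Illustrative well-proximity Vt shift (mV): largest right at the
    well edge, decaying exponentially with active-to-NWell spacing.
    dvt_max_mv and decay_nm are made-up fitting parameters."""
    return dvt_max_mv * math.exp(-spacing_nm / decay_nm)

def safe_spacing_nm(tolerance_mv, dvt_max_mv=30.0, decay_nm=200.0):
    """Spacing at which the shift falls below a given tolerance – the
    kind of number a DRM would tabulate from measured silicon."""
    return decay_nm * math.log(dvt_max_mv / tolerance_mv)
```

An LDE-aware placer effectively evaluates a calibrated version of such a model for every device as it legalizes the layout.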

Even the placement of standard cells will trigger layout dependent effects that impact timing:

The good news is that all of these effects can be analyzed early in the design process through tools that have been made LDE aware:

Design Complexity

Sequential IC design flows have given way to concurrent design flows where the goal is to Prevent, Analyze and Optimize. Specific examples of this approach include:

  • Constraint-driven design
  • LDE aware placement
  • Color-aware place and route
  • In-design verification

Power, Performance and Area (PPA) goals define each SoC spec, and to meet specs you need to support complex clocking, multiple power domains, and automated low power techniques. One such approach is called clock concurrent design where concurrent optimization is used:


Cadence has been collaborating with foundries (IBM, GLOBALFOUNDRIES, Samsung, TSMC) at the earliest development stages of 20nm and below nodes to update the IC tool flows to ensure an automated approach to SoC design that follows a methodology of Prevent, Analyze and Optimize.


Demise of analog is exaggerated

28 Jun

Over the last 20 years, the world population grew at a compound annual growth rate (CAGR) of 1.4 percent, recently surpassing the 7 billion mark. During the same time period, overall semiconductor unit sales grew at a CAGR of 9.2 percent, reaching 660 billion chips in 2010, according to the World Semiconductor Trade Statistics organization. What's interesting is that the analog semiconductor unit CAGR during the same period was 10.3 percent, higher than that of the overall semiconductor market. That's 92 billion analog chips in 2010, or more than 13 analog chips per human on the planet – each year!
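The arithmetic behind those figures is easy to check:

```python
import math

def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

analog_units_2010 = 92e9   # analog chips shipped in 2010
world_population = 7e9     # people on the planet
per_person = analog_units_2010 / world_population  # just over 13 chips each

# At a 10.3% CAGR, analog unit volume doubles roughly every 7 years.
doubling_years = math.log(2) / math.log(1.103)
```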

It’s safe to say that “the demise of analog” has been greatly exaggerated.

This rising growth of analog content in products is driven by new solutions to old applications (think hybrid electric vehicles, televisions and LED light bulbs), new applications such as personal computing with smartphones/tablets and smarter automobiles and new markets in personal medicine, alternative energy and safety/security.

Across all of these areas, we are seeing a rapid increase in the use of analog chips as well as sensors. An average smartphone now has more than eight sensors! Over the next five years, the CAGR for sensors/actuators is forecast to be 6.8 percent faster than the overall IC market, according to IC Insights. A quick look at a typical block diagram of an electronic device illustrates the number of analog and mixed-signal ICs on a circuit board. These include amplifiers connected to a data source (such as a sensor), data converters, power management chips, clocks and timing devices, and interface chips.

Let's take a look at the manufacturing technologies for analog semiconductors. Unlike digital products, which march to the beat of Moore's Law, the logic gate counts of most mixed-signal and analog products do not increase significantly from generation to generation. Consequently, analog manufacturing processes migrate much more slowly from one lithography node to the next.

Analog platforms

It would be wrong to assume that analog ICs do not improve in performance or get smaller with each subsequent generation of manufacturing process. Improvements are achieved through device architecture, integration, packaging and materials optimization of separate process technologies for specific types of products. Texas Instruments has more than 50 such process platforms running in production, manufacturing nearly 45,000 products – each process optimized for a specific family of analog semiconductors or MEMS/sensors.

  • High-speed amplifiers typically need finely tuned capacitors, resistors and SiGe bipolar processes, often with SOI substrates to reduce noise.
  • Data converters are manufactured using analog processes with precision thin-film resistors, high linearity capacitors and low noise, well-matched transistors.
  • High-voltage manufacturing processes with thick power metal are essential for building power management ICs. The voltage range of the process is tuned to the application and can vary from a few volts to several hundreds of volts.
  • Micro-controllers are manufactured on mixed-signal process technologies, with key differentiators being low power non-volatile memories and ultra-low power transistors.
  • MEMS and sensors need custom process flows and use unique equipment for deep silicon etching, back-side wafer patterning, wafer-to-wafer bonding, etc.

I have led deep sub-micron CMOS development and, more recently, analog technology development. The opportunities in analog development and manufacturing are quite different: because analog is not bound to a single industry roadmap, there are significant opportunities to differentiate through design, process, packaging and manufacturing.

Creative ideas are welcome!

I have a mental image of speed boats versus an aircraft carrier. Instead of a large development team, the model is one of many small teams, working in parallel on different market opportunities. Two recent examples of differentiated technologies come to mind. We recently developed a fast-write, low-power, non-volatile memory called ferroelectric random access memory to enable ultra-low power mixed signal microcontrollers that consume less than half the power of equivalent flash-based devices.

In another example, TI recently integrated thermocouple elements, MEMS processing along with high precision analog data converters and amplifiers to create a single chip infrared temperature sensor.

Moreover, development is not limited by the lack of availability or the immaturity of process equipment, so time to market at high yields is quite good and the cost of setting up an analog manufacturing line is significantly lower than a CMOS line.

There are other issues as well. Besides the obvious technology challenges, analog products – and hence analog manufacturing processes – last a long time, sometimes over 20 years. This creates years of accumulated process and design IP, PDKs, libraries and SPICE models that have to be maintained, updated and continuously improved. Also, managing the diversity of process technologies and products across many factories can be a logistical challenge – or a differentiator for those that do it well.

When it comes to analog, whoever coined the term “more than Moore’s Law” couldn’t have said it better.


Secondary Batteries vs. Primary Batteries

18 Jun

An electrical battery is an electrochemical cell that converts stored chemical energy into electrical energy. There are two types of batteries: primary batteries (also called disposable batteries), which are designed to be used once and discarded when they are exhausted, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Nowadays, an extensive range of batteries is available on the market, including batteries for notebooks and netbooks, laptop batteries manufactured by Dell, HP, Lenovo, Toshiba and Sony, and batteries for hybrid electric vehicles, medical equipment, industrial applications, power tools, cellular phones, still and video cameras, electric wheelchairs, automobiles and much more. Each battery has a distinct construction and displays different properties. This built-in complexity makes it tough to develop a standard for rapid battery testing that works equally well with all technologies.

  • Primary batteries

Primary batteries can generate current immediately on assembly. Disposable batteries are intended to be used once and thrown away. They are most commonly used in portable devices that have low current consumption, are only used sporadically, or are used well away from a conventional power source, such as in alarm and communication circuits where other electric power is only occasionally available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and the active materials may not return to their original forms. Battery manufacturers advise against trying to recharge primary cells.

Common types of disposable batteries include zinc-carbon batteries and alkaline batteries. Usually, these batteries have higher energy densities than rechargeable batteries, but disposable batteries do not perform well in high-drain applications with loads under 75 ohms.

  • Secondary batteries

Secondary batteries must be charged before use. They are usually made up of active materials in the discharged state. Rechargeable batteries or secondary cells can be recharged by applying an electrical current, which reverses the chemical reactions that occur during use. Devices that supply the suitable current are called chargers or rechargers.

The oldest type of rechargeable battery is the lead-acid battery. Despite a very low energy-to-weight ratio and energy-to-volume ratio, their ability to supply high surge currents, along with their low cost, makes them appropriate for use in automobiles, where starter motors require large amounts of current. Depending on the application, two types of lead-acid batteries have evolved over the past few years: Sealed Lead Acid (SLA) and Valve Regulated Lead Acid (VRLA). The traditional lead-acid battery is notable in that it contains a liquid in an unsealed container. This requires that the battery is kept upright and that the area has proper ventilation to guarantee safe dispersion of the hydrogen gas these batteries produce during overcharging. The lead-acid battery is also very heavy for the amount of electrical energy it can supply. Despite this, its low production cost and high surge current levels make its use common where a large capacity (over roughly 10 Ah) is required or where weight and ease of handling are not concerns. Batteries for notebooks and netbooks, such as laptop batteries manufactured by Dell, HP, Lenovo, Toshiba and Sony, are usually rechargeable for convenience.

Will Secondary batteries take over the Primary Batteries?

Secondary batteries, being easily rechargeable, are gradually superseding primary batteries because of the following advantages:

  • Low cost – they can be recharged hundreds of times, which makes them the cheapest type of battery available
  • Greater efficiency and performance
  • Easy to use
  • Environment friendly
  • Rechargeable batteries consume less non-renewable natural resources (fossil and mineral) than disposable batteries.
  • Rechargeable batteries have less impact on climate warming than disposable batteries
  • The capacity is much higher than any alkaline battery, so they are ideal for power hungry equipment (such as digital cameras)
  • They last longer than other types of batteries. This property makes them suitable for all types of applications.

Secondary batteries such as NiCd, NiMH and lithium-ion can be recharged, sometimes as many as 1,000 times, by the flow of direct current through them in a direction opposite to the current flow on discharge. Recharging after discharge creates a higher state of oxidation at the positive plate and a lower state at the negative plate, returning the plates to approximately their original charged condition.

Zapping and Its Impact on Battery Power

18 Jun

Battery zapping is a procedure in which an individual cell is subjected to a high-energy pulse. Zapped cells have a lower internal resistance and therefore retain a higher terminal voltage at moderate to high current loads. This provides significantly greater power and performance than ordinary cells, with no loss of capacity or usable life. Zapping can also bring dead batteries back to life through the application of a high current or voltage. The technique works particularly well with nickel-cadmium batteries when a high pulse current is applied to them.

A nickel-cadmium battery is rechargeable, so the battery should be reusable for more than 100 charges under steady use. Three things that adversely affect battery charging are allowing a cell to become totally discharged, leaving the cell on a charger for too short a time, and overcharging. Crystals can develop inside rechargeable batteries that prevent them from sustaining the charging process or holding a charge for a normal span of time. This makes them an unsuitable choice for notebook and netbook batteries; big companies like Dell, Toshiba, Sony, HP and Lenovo have switched to lithium-ion batteries for their laptop batteries.

At times, a nickel-cadmium battery can be re-energized and restored to regular service. Over time, the separator within a nickel-cadmium battery frequently develops holes, which allow the battery to grow crystalline shorts that provide a conduction path between the positive and negative electrodes of the cell (essentially shorting out the cell). If this happens, you may have to blow open this short with a high current pulse before the cell will accept a charge again. This process is sometimes referred to as zapping. A leaky nickel-cadmium cell will always have an elevated self-discharge rate and will develop internal shorts again if left on the shelf without some sort of trickle charge.

The frustration of batteries that die quickly often prompts customers to throw away leaky nickel-cadmium batteries, even though they may still be capable of delivering almost their full Ah capacity on discharge.

Zapping is believed to raise the cell voltage by 0.02 to 0.04 V when measured under a 30 A load. This would raise the cell voltage from 1250 mV to about 1280 mV. According to specialists, zapping works consistently only with NiCd batteries; NiMH batteries have been tested, but the results are inconclusive. The zapping process is performed with a 47 F capacitor charged to 0.09 V. Best results are obtained if the battery is cycled twice after treatment and then zapped one more time. Once in service, zapping will no longer improve the battery's performance. In fact, zapping does not rejuvenate a battery that has become weak.
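By Ohm's law, the quoted voltage gain under load tells us how much internal resistance zapping removes (ΔR = ΔV / I):

```python
# Implied internal-resistance reduction from the zapping figures above:
# a 0.02-0.04 V rise in terminal voltage under a 30 A load corresponds,
# by Ohm's law (delta_R = delta_V / I), to a drop of roughly 0.7 to 1.3
# milliohms in the cell's internal resistance.
load_current_a = 30.0                              # A, the quoted test load
dv_low_v, dv_high_v = 0.02, 0.04                   # V, quoted voltage gain
dr_low_mohm = dv_low_v / load_current_a * 1e3      # ~0.67 milliohm
dr_high_mohm = dv_high_v / load_current_a * 1e3    # ~1.33 milliohm
```

Small as a milliohm sounds, at a 30 A load it accounts for the entire 30 mV improvement.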

Companies specializing in zapping batteries use high-quality, Japanese-manufactured nickel-cadmium cells. The cells are specially selected at the factory. Specially labeled, the batteries arrive in a discharged condition with an open terminal voltage of 1110 to 1120 mV. If below 1060 mV, the cell is suspect and zapping does not perform well; a low voltage reading may point to high self-discharge or chemical deficiencies. The 1110 mV comes from the electrochemical voltage of the nickel-cadmium cell and is present even without any charge. Applying a load causes the open-circuit voltage to drop.

There are no obvious drawbacks to zapping, but battery manufacturers remain noncommittal. No scientific explanation is available, and only limited information exists about the long-term life of batteries after treatment.

The following are some of the guidelines that must be followed to elongate the lifespan of Nickel cadmium batteries in particular:

– Do not leave a nickel-based battery in a charger for more than 24 hours with the ready light on. It is better to take the battery out of the charger and recharge it before use.

– Apply intermittent discharge cycles. Running the battery down in the device will also accomplish this.

– It is not necessary to discharge the battery before each charge; doing so would put too much stress on the battery. This is what companies like Dell, Toshiba, Sony, HP and Lenovo suggest for their laptop batteries.

– Avoid high temperature. The battery should cool off and stay at moderate temperature after being fully charged.

– Use good quality chargers. This is true for highly specialized applications like laptops, batteries of notebooks and netbooks, cell phones etc.

Tips To Increase the Run-Time of a Wireless Device

18 Jun

Are you the type of person for whom a cell phone is more than just a necessity? Do you wish your battery did not drain so frequently? You can get more out of your battery if you keep a few things in mind.

  • Avoid the Memory Effect – Keep the battery fit by fully charging and then fully discharging it at least once every 2-3 weeks. However, Lithium ion batteries do not experience the memory effect. Never leave the battery inactive for long periods of time.
  • Always use a battery charger made by the same company as your phone, as it gives the battery a longer life.
  • Keep the Batteries Clean – Clean dirty battery contacts with a cotton swab and alcohol. This helps preserve a fine link between the battery and the portable device.
  • Never leave your cell phone near any heat source as great heat may affect the battery.
  • The vibrator mode of your cell phone consumes more battery, so if not necessary, use the ringtone mode.
  • Battery Storage – Always store the battery in a clean, dry, cool place away from heat sources and metal objects. Batteries are likely to self-discharge when not in use, so make sure they are charged again before using. The performance of any cell phone battery usually degrades after about a year, at which point it is better to have the battery replaced.
  • Living closer to the tower is advantageous as your battery will run for a longer period between charges.
  • Fully charge and discharge a new battery for up to 4 cycles before expecting its full capacity.
  • Fully discharge and then fully charge the battery every two to three weeks.
  •  Run the device until it shuts down or there is a low battery warning. Then recharge the battery.
  • Ensure utmost performance of the battery by optimizing the device’s power management features.
  •  Do not short-circuit a battery as it may cause serious damage to the battery.
  • Never drop, hit or abuse the battery as this may expose the cell contents which are caustic.
  • Never expose the battery to moisture or rain.
  • Never incinerate a battery.
  • Turn off 4G, Wi-Fi, GPS, and Bluetooth when you don’t require these features. These features are also commonly present in laptops and notebooks. Laptop giants like Dell, HP, Lenovo, Sony and Toshiba suggest turning off these features when not in use to protect their laptop batteries.
  • Lower screen brightness in cell phones to preserve battery
  • Reduce the screen timeout interval.
  • Disable wireless network location services when not required.
  • Avoid using live wallpaper. Use static wallpaper instead to improve battery performance.
  • Disable notification lights of your cell phone.
  • Reduce the backlight time.
  • Do not keep switching the phone on/off.
  • Another form of wireless communication that various cell phones provide is location services. This is what powers the GPS functionality of the device. Hence, when not in use, it shouldn’t be left on to avoid unnecessary power consumption.
  • Place your wireless router in an open position to ensure maximum signal strength.
  • Compatibility can be a big trouble especially for bi-directional transmissions. It may help if you buy your adapter and your router from the same company.
  • Add an extra Wireless Access Point (WAP) or Repeater to boost signal strength
  • Keep checking for your phone’s software update and when there is some update available, do it. Older firmwares consume a lot of battery to work with the newest versions of applications.
  • Many cell phones have a built-in music equalizer to improve the sound of music. It drains the cell phone's battery and should not be left on for long durations.
  • Avoid calls in low signals as the battery drains faster.
  • Optimize your power options to get maximum battery life. This one is commonly suggested to preserve cell phone batteries and also the batteries of notebooks and netbooks.
    • Continuously putting your phone to sleep and waking it up will also drain battery life
    • Use earphones. The power consumed from the battery to supply full volume to the headphones is less than to supply power to onboard speakers
    • Disconnect peripherals and quit applications not in use to preserve laptop batteries.
    • Store the batteries of notebooks and netbooks with a 50 percent charge. Storing a battery fully discharged can let it fall into a deep discharge state, rendering it incapable of holding any charge.