
Chipmakers seek ways around transistors' limitations

A demo is run using Hitachi's new computer.

TOKYO -- Moore's law has been a driver of the digital revolution for half a century. It refers to the relentless advance of technology, first observed by Intel co-founder Gordon Moore in 1965, that has allowed chip designers to miniaturize their circuits, doubling the number of transistors on the same amount of silicon real estate every 18 months or so.

Now those transistors are so tiny they are bumping up against the physical limits of how much more they can shrink, posing a challenge for those looking to create still faster, better computer chips.

To overcome that barrier, designers are looking into dedicated computers that run special algorithms. Some are based on the Ising model, a mathematical model of interacting spins originally devised to describe phase transitions, such as the onset of magnetism in cooling iron. Others are specialized computers that carry out a type of machine learning known as deep learning, used primarily in the field of artificial intelligence.

In both cases, the goal is to develop fast, energy-efficient methods of computation geared to these specialized devices and applications.

Optical fibers, measuring instruments, amplifiers and other equipment cover a table at NTT Basic Research Laboratories, part of the Japanese telecommunications company. It looks like an experiment in optical communications, but it is a computer.

More specifically, it is an Ising machine: a prototype computer that uses the Ising model to solve complex problems. "The machine exploits a natural phenomenon seen in laser light to solve the kinds of problems that conventional computers are bad at solving," said Hiroki Takesue, a distinguished researcher at the lab.

The hitherto steady miniaturization of semiconductor circuitry described in Moore's law is slowing, and the "law" may cease to hold before 2020. Circuit pathways are already so narrow that the current tends to stray out of its assigned lane, which hampers energy efficiency. Against this backdrop, research is accelerating on other ways to boost energy efficiency using specially designed algorithms on equally specialized machines.

One approach that has gotten a lot of press is the quantum computer. But quantum states are delicate and difficult to control, and so far these machines can reliably handle only a handful of quantum bits, or qubits.

In contrast, the computer NTT is working on eschews fragile quantum states in favor of the Ising model, which describes how interacting elements, such as the atomic spins in a cooling magnet, settle into an ordered, low-energy state, behavior that the prototype emulates with oscillating pulses of laser light.

NTT is trying to create a machine to solve so-called combinatorial optimization problems -- those in which the goal is to find the best choice among an enormous number of possibilities. The most famous of these is the "traveling salesman problem," in which a hypothetical salesman must visit multiple cities by the shortest route. The problem has practical applications in the design of communications and distribution networks, power grids and the like.
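
To see why such problems defy brute force, consider a minimal sketch in Python (the city coordinates are made up, and this is exhaustive search, not NTT's method): checking every route works for a handful of cities but collapses as the count grows.

    # Exact traveling-salesman search over hypothetical city
    # coordinates, purely to illustrate combinatorial explosion.
    from itertools import permutations
    from math import dist, factorial

    cities = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0), (3, 5)]  # made-up points

    def tour_length(order):
        # Total length of the closed tour visiting cities in 'order'.
        return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    best = min(permutations(range(len(cities))), key=tour_length)
    print("best tour:", best, "length:", round(tour_length(best), 2))

    # The catch: n cities mean (n-1)!/2 distinct tours to check.
    print("tours for 20 cities:", factorial(19) // 2)  # roughly 6 x 10^16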

NTT hopes to use its machine to create better communications networks. The hope is that the Ising model can yield results many orders of magnitude faster than a conventional computer.

NTT's prototype for a new type of computer using the Ising model

The prototype system has a kilometer of optical fiber. Thousands of pulse waves of laser light, each with a set phase, are sent through the fiber. Then another pulse is superimposed on these waves -- only this pulse wave has a phase determined by the Ising model. The light waves are amplified and sent around and around through the fiber. When the total energy of the system reaches its minimum, or ground state, the various pulse waves end up in one of two phases, translatable into the zeroes and ones of binary code, yielding the optimal answer to the problem.
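
In the underlying mathematics, each pulse corresponds to a spin that ends up as +1 or -1, and the quantity being minimized is the Ising energy E = -sum of J_ij * s_i * s_j, where the couplings J_ij encode the problem. A toy sketch of that final step, with made-up couplings and exhaustive search standing in for the optics:

    # Toy sketch: the lowest-energy spin configuration of an Ising
    # system doubles as a binary answer. Couplings here are made up;
    # NTT's machine reaches this state physically, via light pulses.
    from itertools import product

    J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.0, (0, 3): -1.0}  # hypothetical couplings

    def energy(spins):
        # Ising energy: E = -sum over pairs of J_ij * s_i * s_j
        return -sum(j * spins[a] * spins[b] for (a, b), j in J.items())

    ground = min(product([-1, 1], repeat=4), key=energy)
    bits = [(s + 1) // 2 for s in ground]  # map spin -1/+1 to bit 0/1
    print("ground state:", ground, "as bits:", bits, "energy:", energy(ground))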

BIT BY BIT

Working with Japan's National Institute of Informatics and others, NTT has developed a 10,000-bit prototype system. "By the end of fiscal 2018, we want to have a 100,000-bit system ready that we can actually use to solve problems," said Shoko Utsunomiya, a theoretician at the institute.

Canadian startup D-Wave Systems has already commercialized a device that makes use of the Ising model. Its version searches for the best answer to questions using the phenomenon in which matter settles from a high-energy state into a low-energy state.

U.S. search giant Google has used D-Wave's device to perform calculations up to 100 million times faster than conventional computers. Unfortunately, the device is expensive and can only operate at extremely low temperatures.

Hitachi is taking a different approach, seeking to use today's semiconductor devices linked together in a lattice to create a device based on the Ising model. The company has already come up with a 20,000-bit prototype.

"For the internet of things, the ability to resolve optimization problems will be enormously important," said Hitachi chief scientist Masanao Yamaoka.

To actually solve a problem with an Ising machine, the problem must first be converted into a form the model can handle. To optimize a power grid or a distribution network, for example, the first step is to build a mathematical model whose lowest-energy state corresponds to the optimal solution. Posing a question in this form is very difficult. "You need the capabilities of a mathematician," explained Yamaoka, which is why Hitachi joined forces with mathematicians at Hokkaido University in July.
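
A textbook illustration of such a conversion (not Hitachi's formulation) is number partitioning: split a list of numbers into two groups with equal sums. Giving each number a spin of +1 or -1 and squaring the signed sum yields an Ising-style energy that is zero exactly when the split is perfect:

    # Hedged example, a textbook mapping rather than Hitachi's:
    # encode number partitioning as an energy to minimize. Assigning
    # spin +1 or -1 to each number, E = (sum a_i * s_i)^2 is zero
    # iff the two groups (spins +1 vs. -1) have equal sums.
    from itertools import product

    numbers = [4, 7, 3, 6, 2]  # made-up instance

    def energy(spins):
        return sum(a * s for a, s in zip(numbers, spins)) ** 2

    best = min(product([-1, 1], repeat=len(numbers)), key=energy)
    group_a = [a for a, s in zip(numbers, best) if s == 1]
    group_b = [a for a, s in zip(numbers, best) if s == -1]
    print(group_a, group_b, "squared imbalance:", energy(best))

Expanding the square produces pairwise couplings J_ij proportional to a_i * a_j, which is precisely the Ising form the hardware expects.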

Another approach is to create software that can perform the calculations on a conventional computer. In June, Shu Tanaka of Waseda University and Recruit Communications presented their results simulating the Ising model at an international meeting hosted by Google. "Even if we can't find the optimal solution, at least we can find a better solution," Tanaka said.
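
The presented code is not reproduced here, but the general recipe for simulating an Ising search on a conventional machine is simulated annealing: flip spins at random, keep moves that lower the energy, occasionally accept ones that raise it, and shrink that tolerance over time. A generic sketch with random couplings:

    # Generic simulated-annealing sketch (not Tanaka's actual code):
    # random spin flips, accepted with a temperature-dependent
    # probability, gradually freezing toward a low-energy state.
    import math, random

    random.seed(0)
    n = 30
    J = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

    def energy(s):
        return -sum(j * s[a] * s[b] for (a, b), j in J.items())

    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins)
    for step in range(20000):
        temp = 2.0 * (1 - step / 20000) + 1e-3   # cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                            # trial flip
        e_new = energy(spins)
        if e_new < e or random.random() < math.exp((e - e_new) / temp):
            e = e_new                             # accept the move
        else:
            spins[i] *= -1                        # reject: flip back
    print("final energy:", round(e, 3))

As Tanaka notes, such a search is not guaranteed to find the true optimum, only a good answer within the time allowed.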

Tanaka and Recruit Communications used their setup to study internet advertising.

Internet users can be classified based on different attributes, such as gender and place of residence. These attributes must be weighted appropriately to predict the sales potential of products aimed at online customers. The number of possible combinations is dauntingly large.

The new method based on the Ising model has proved much more efficient in tackling the problem. Recruit Communications expects to put the new system to practical use within the year "as business warrants," said Sougo Oishi, head of the company's advertising technology group.

HOW DEEP IS YOUR LEARN?

PEZY Computing, a Tokyo chip developer, has a demonstrated ability to make chips for supercomputers. The company had a hand in the development of the Shoubu, a machine at the government-affiliated Riken research institute that has topped the Green500 list of energy-efficient supercomputers three times in a row. The Green500 ranks supercomputers by how many floating point operations they can crunch with a given amount of electricity. PEZY contributed proprietary chip-cooling technologies and circuitry for parallel processing to the Shoubu.

Earlier this year, PEZY set up another company, called Deep Insights, that will build a chip designed specifically for deep learning applications in artificial intelligence. The new company hopes to create a chip that can perform calculations 1,000 times faster than a typical graphics processing unit with a 28-nanometer linewidth. "A practical version will be ready within two years," predicted PEZY President Motoaki Saito.

PEZY Computing President Motoaki Saito

The chip that Deep Insights will design for deep learning has different computational performance requirements than the chips made for supercomputers, which perform a variety of calculations.

The key difference is the number of digits handled in each round of computation. A supercomputer may need 64-bit values to deliver the precision its calculations demand. Far fewer digits are required for deep learning, in which small calculations are repeated countless times in a hierarchical fashion to "learn" and to make inferences from the results.

In some cases, even a precision as low as eight bits is sufficient for deep learning. Deep Insights aims to develop a fast, low-power chip to conduct these types of calculations.
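
A rough sketch of the idea, using simple symmetric quantization in NumPy (actual chips and frameworks use more refined schemes): a dot product done in 8-bit integers lands close to the 64-bit answer at a fraction of the cost.

    # Rough sketch of low-precision inference via simple symmetric
    # quantization; real deep-learning hardware is more refined.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=256)            # "weights", float64
    x = rng.normal(size=256)            # "activations", float64

    def quantize(v):
        scale = np.abs(v).max() / 127.0             # map into int8 range
        return np.round(v / scale).astype(np.int8), scale

    wq, ws = quantize(w)
    xq, xs = quantize(x)

    exact = w @ x                                             # 64-bit result
    approx = (wq.astype(np.int32) @ xq.astype(np.int32)) * ws * xs  # 8-bit math
    print(f"float64: {exact:.4f}  int8: {approx:.4f}")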

AI startup Preferred Networks is also developing a chip for applications in artificial intelligence. "We intend to build our own chip, working in collaboration with other companies," said Executive Vice President Daisuke Okanohara.

Preferred Networks is known for its software. The company is working with Toyota Motor to develop artificial intelligence software for self-driving cars. It has also teamed up with robot maker Fanuc to develop similar programs for factory automation.

Preferred Networks also recognizes the importance of hardware and the need to develop chips with higher performance so that cars, robots and other machines that use artificial intelligence can quickly learn from the data they gather.

Image recognition using artificial intelligence is said to require 10 petaflops of number-crunching processing power for still pictures, and 10 exaflops for video. A petaflop is 1,000 trillion flops, or floating point operations per second. An exaflop is 1,000 petaflops.

Applying artificial intelligence to self-driving cars and drones will require chips capable of 100 exaflops, experts estimate.

"Other companies will be coming out with chips like this around fiscal 2019, so we need to have our product ready by then," said Okanohara.

Researchers in other countries are also beavering away at similar technology. Several specialist chips have already appeared. Google has developed a chip dubbed the TPU that is designed for deep learning on servers. In March, the company used the TPU and its go-playing computer program AlphaGo to beat a top-ranking go master at his own game.

At the Symposia on VLSI Technology and Circuits in June, a research team from Belgium described a low-power deep-learning chip designed for cars and gadgets such as mobile terminals. And Nvidia, a leading maker of graphics processing units, brought to market in April a new device designed for deep learning that can quickly crunch through 16-bit calculations.
