The STEMpunk Project: Transistors

After writing my post on basic electrical components I realized that batteries and transistors were going to require a good deal more research to understand adequately. Having completed my post on the former, the time has finally come to elucidate the foundation of modern electronics and computing: the humble transistor.

Transistor Origins

The development of the transistor began out of a need to find a superior means of amplifying telephone signals sent through long-distance wires. In the early twentieth century American Telephone and Telegraph (AT&T) had begun offering transcontinental telephone service as a way of staying competitive. The signal boost required to allow people to talk to each other over thousands of miles was achieved with triode vacuum tubes based on the design of Lee De Forest, an American inventor. But these vacuum tubes consumed a lot of power, produced a lot of heat, and were unreliable to boot. Mervin Kelly of Bell Labs recognized the need for an alternative and, after WWII, began assembling the team that would eventually succeed.

Credit for pioneering the transistor is typically given to William Shockley, John Bardeen, and Walter Brattain, also of Bell Labs, but they were not the first people to file patents for the basic transistor principle: Julius Lilienfeld filed one for the field-effect transistor in 1925 and Oskar Heil filed one in 1934. Neither man made much of an impact on the growing fields of electronics theory or electronics manufacturing, but there is evidence that William Shockley and Gerald Pearson, a co-worker at Bell Labs, did build a functioning transistor prototype from Lilienfeld’s patents.

Shockley, Brattain, and Bardeen understood that if they could solve certain basic problems they could build a device that would act like a signal amplifier in electronic circuits by exploiting the properties of semiconductors to influence electron flow.

Actually accomplishing this, of course, proved fairly challenging. After many failed attempts and much cataloging of anomalous behavior, a practical breakthrough was achieved. A strip of gold, an excellent conductor, was attached to a plastic wedge and then sliced with a razor, producing two gold foil leads separated by an extremely small space. This apparatus was then placed in contact with a germanium crystal which had an additional lead attached at its base. The space separating the two pieces of gold foil was just large enough to prevent electron flow. Unless, that is, current was applied to one of the gold-tipped leads, which caused ‘holes’ (i.e. spaces without electrons) to gather on the surface of the crystal. This allowed electron flow to begin between the base lead and the other gold-tipped lead. This device became known as the point-contact transistor, and it earned the trio a Nobel Prize.

Though the point-contact transistor showed promise and was integrated into a number of electrical devices, it was still fragile and impractical to manufacture at scale. This began to change when William Shockley, outraged at not receiving the credit he felt he deserved for the invention of this astonishing new device, developed an entirely new kind of transistor based on a ‘sandwich’ design. The result was the junction transistor, the direct ancestor of the bipolar junction transistor, which is what almost everyone in the modern era means by the term ‘transistor’.

Under the Hood

In the simplest possible terms a transistor is essentially a valve for controlling the flow of electrons. Valves can be thought of as amplifiers: when you turn a faucet handle, force produced by your hand is amplified to control the flow of thousands of gallons of water, and when you press down on the accelerator in your car, the pressure of your foot is amplified to control the motion of thousands of pounds of fire and steel.

Valves, in other words, allow small forces to control much bigger forces. Transistors work in a similar way.

One common type of modern transistor is the bipolar junction NPN transistor, a cladistic descendant of Shockley’s original design. It is constructed from alternating layers of silicon which are doped with impurities to give them useful characteristics.

In its pure form silicon is a textbook semiconductor. It contains four electrons in its valence shell which causes it to form very tight crystal lattices that typically don’t facilitate the flow of electrons. The N layer is formed by injecting trace amounts of phosphorus, which contains five valence electrons, into this lattice. It requires much less energy to knock this fifth electron loose than it would to knock loose one of the four valence electrons in the silicon crystal, making the N layer semiconductive. Similarly, the P layer is formed by adding boron which, because of the three electrons in its valence shell, leaves holes throughout the silicon into which electrons can flow.

It’s important to bear in mind that neither the P layer nor the N layer is electrically charged. Both are neutral, and both permit a greater flow of electrons than pure silicon would. The interface between the N and P layers quickly becomes saturated as electrons from the phosphorus move into the holes in the valence shell of the boron. As this happens it becomes increasingly difficult for electrons to flow between the N and P layers, and eventually a boundary is formed. This is called the ‘depletion layer’.

Now, imagine that there is a ‘collector’ lead attached to the first N layer and an ‘emitter’ lead attached to the other N layer. Current cannot flow between these two leads because the depletion layer at the P-N junction won’t permit it. There is, however, a third lead, called the ‘base’, attached to the thin P layer between them. By applying a small positive charge to the base, electrons can overcome the P-N junction and begin flowing from the emitter to the collector.

The key here is to realize that the current flowing into the base required to get current moving is much smaller than the current flowing from the emitter to the collector, and that the collector current can be increased or decreased by a corresponding change in the base current. This is what gives the transistor its amplifying properties.
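To make that relationship concrete, here is a minimal Python sketch of the idealized active-region formula, I_C ≈ β × I_B. The gain value of 100 is purely an assumption for illustration; real transistors vary quite a bit from part to part.

```python
# A minimal sketch of the amplification relationship described above.
# The current gain (often written as beta or h_FE) is an assumed value;
# real small-signal NPN transistors commonly fall somewhere around 50-300.

BETA = 100  # assumed current gain, for illustration only

def collector_current(base_current_amps: float, beta: float = BETA) -> float:
    """Approximate collector current in the active region: I_C ≈ beta * I_B."""
    return beta * base_current_amps

# A tiny base current controls a much larger collector current.
i_b = 50e-6  # 50 microamps into the base
i_c = collector_current(i_b)
print(f"Base current: {i_b * 1e6:.0f} uA -> Collector current: {i_c * 1e3:.1f} mA")
# Base current: 50 uA -> Collector current: 5.0 mA
```

The same idea scales: double the base current and, within the transistor’s limits, the collector current doubles as well.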

Transistors and Moore’s Law

Even more useful than this, however, is the ability of a transistor to act as a switch. Nothing about the underlying physics changes here. If current is not flowing in the transistor it is said to be in cutoff, and if current is flowing freely it is said to be in saturation. This binary property of transistors makes them ideally suited for the construction of logic gates, which are the basic components of every computer ever made.
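To see why this switching behavior maps so neatly onto logic, here is a toy Python sketch that treats each transistor as a simple on/off switch (saturation or cutoff) and wires two of them together into a NAND gate. It is a deliberately simplified model, not a circuit-accurate simulation.

```python
# Toy model: the transistor is treated as fully ON (saturation) when its
# base is driven high and fully OFF (cutoff) when it is not.

def npn_switch(base_high: bool) -> bool:
    """Return True if current can flow from collector to emitter."""
    return base_high  # saturation if the base is driven high, cutoff otherwise

def not_gate(a: bool) -> bool:
    # In a real inverter the output is pulled high through a resistor and
    # the transistor, when ON, pulls it low.
    return not npn_switch(a)

def nand_gate(a: bool, b: bool) -> bool:
    # Two switches in series must both conduct to pull the output low.
    return not (npn_switch(a) and npn_switch(b))

for a in (False, True):
    for b in (False, True):
        print(f"NAND({a}, {b}) = {nand_gate(a, b)}")
```

NAND is a convenient choice for the example because any other logic gate can be built out of NANDs alone.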

A full discussion of logic gate construction would be well outside the purview of this essay, but it is worth briefly discussing one popular concept which requires a knowledge of transistors in order to be understood.

Named after Intel co-founder Gordon Moore, Moore’s Law is sometimes stated as the rule that computing power will double roughly every two years. The more accurate version is that the number of transistors which can fit in a given unit area will double every two years. These two definitions are fairly similar, but keeping the latter in mind will allow you to better understand the underlying technology and where it might head in the future.
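As a quick illustration of what that doubling rule implies, here is a short Python sketch projecting transistor counts forward in time. The starting figure is a placeholder for illustration, not a historical data point.

```python
# Back-of-the-envelope illustration of the doubling rule stated above.

def transistor_count(years_elapsed: float, initial_count: float,
                     doubling_period: float = 2.0) -> float:
    """Project transistor count assuming a doubling every `doubling_period` years."""
    return initial_count * 2 ** (years_elapsed / doubling_period)

start = 1_000_000  # hypothetical starting count, chosen for illustration
for years in (2, 10, 20):
    print(f"After {years:2d} years: ~{transistor_count(years, start):,.0f} transistors")
# After  2 years: ~2,000,000 transistors
# After 10 years: ~32,000,000 transistors
# After 20 years: ~1,024,000,000 transistors
```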

Moore’s law has held for as long as it has because manufacturers have been able to make transistors smaller and smaller. Obviously this can’t continue forever, both because at a certain transistor density power consumption and heat dissipation become serious problems, and because at a certain size effects like quantum tunneling prevent the sequestering of electrons.

A number of alternatives to silicon-based chips are being seriously considered as a way of extending Moore’s Law. Because of how extremely thin it can be made, graphene is one such contender. The problem, however, is that the electrophysical properties of graphene are such that building a graphene transistor that can switch on and off is not straightforward. A graphene-based computer, therefore, might well have to develop an entirely different logical architecture to perform the same tasks as modern computers.

Other potentially fruitful avenues are quantum computing, optical computing, and DNA computing, all of which rely on very different architectures than conventional von Neumann computers. As I’m nearing the 1500 word mark I think I’ll end this essay here, but I do hope to return to these advanced computing topics at some point in the future 🙂

