The STEMpunk Project: The Humble Washer

In As The World Opens I discussed the deeper appreciation I’ve gained for the infrastructure of civilization as a result of The STEMpunk Project. Today I want to reinforce that theme by satisfying a curiosity I’ve long held about a little piece of metal which is so ubiquitous that it’s nearly impossible to have done much tinkering without having encountered one.

I’m talking of course about washers, which you may recognize as these things:


A washer is a small metal disc with a hole in its middle, like a doughnut. Washers are often found wherever a threaded fastener like a screw or a bolt is being used to bind multiple objects together. On many occasions, after having disassembled something, I have dutifully put the washers back in place without having really understood their purpose.

People sometimes use the term “washer” to describe the gaskets used in plumbing to stop unwanted water leakage, but washers do a variety of things besides acting as seals. Perhaps the most important is that they work to evenly distribute force and load.

Imagine that you’re fastening two boards together with a screw. Getting a really tight fit means driving the screw as far as possible into the board. As the screw head, still turning, comes into contact with the top board a tremendous amount of torque is generated. Many a novice carpenter has watched in dismay as a screw placed near the end of a board causes a nasty split in the wood, introducing a source of weakness. But placing a washer between the screw head and board greatly reduces the chances of such splitting because the torque is spread out over the larger, harder surface area of the metal disc.

Now imagine that you’ve used bolts to fasten a short board to a long board and have hung an equal weight from both ends of the long board. Where is all the resulting downward force concentrated? In the small area covered by the bolt as it exits the board! With a small amount of weight this isn’t going to be a problem, but it can become one as the weight increases. The most common solution I have seen is to place a washer on the exit side and to use a nut to secure it in place. Here, the washer is both distributing the load of the weights and distributing the torque of the nut as it’s being tightened into place.

Why is it necessary to fasten things together so tightly that we risk splitting a board? In part because forces that aren’t of much consequence when making a bookshelf become extremely important when building a deck or chassis. As a vehicle runs it vibrates, and any bolts in the engine or body will begin to shake loose. As a deck weathers multiple seasons the wood of which it is made expands and contracts, compromising the integrity of the joints holding it together. If this shifting is extreme enough something important might break loose, causing injury or death.

With this in mind there are washer variants designed to keep even more tension on fasteners and joints. Star washers are built with teeth which bite into their point of contact, making them harder to dislodge. Belleville, wave, and curved washers all deform slightly when they are tightened down, increasing the pressure at the joint in a manner similar to how a compressed spring might.

This isn’t all that washers do. They can also be used as a kind of adapter whenever a bolt is too small for a hole, as spacers if a bolt happens to be too long, as boundaries between two metals like aluminum and steel whose contact will result in corrosion, and as electrical insulators.

So don’t make the beginner mistake of assuming that you can leave the washers off. They really are important!

The STEMpunk Project: Internal Combustion Engines

I have begun the first stage of the mechanics module of The STEMpunk Project, my year-long attempt to learn as much about computers, electronics, mechanics, and robotics as I can. Naturally, this means I have been thinking about internal combustion engines, intriguing devices that can be found in a variety of machines ranging from lawnmowers to speed boats. All of them rely on the exothermic chemical process known as combustion, which occurs internally, hence their name.

The present essay is a broad-strokes discussion of the internal combustion engines found in modern motor vehicles, both because this knowledge will likely prove the most useful to me after my current project is finished and because I have to draw a line somewhere.

The majority of vehicles on the road are propelled by a four-stroke spark-ignition internal combustion engine. Vehicles on the less powerful end of the scale might only sport 3- or 4-cylinder engines while those at the other extreme contain 8- or even 16-cylinder engines.

There are a few different ways to arrange these cylinders within the engine block. In an inline configuration the cylinders are in a row, as the name suggests. Obviously there is a limit to how many cylinders can be made to fit in a straight line under the hood, so many engines have their cylinders in a “V” shape (hence the term “V-8”). One arm of the V will contain half the cylinders and the other arm will contain the other half, making better use of the space available. Less common than this are engines which have their cylinders lying sideways and the pistons moving left-to-right instead of up-and-down.

Each cylinder houses a piston, which is a metal drum that compresses the fuel-air mixture as it enters the cylinder cavity (also called the cylinder bore) and pushes exhaust out at the end of a full cycle. Each piston is connected to the crankshaft via a connecting rod, and it is the crankshaft which keeps all the pistons moving in sync. To prevent oil from leaking into the cylinder bore and exhaust from leaking out, each piston is wrapped in a set of rings which seals it in.

Internal combustion engines do not run on gasoline alone, but rather a mixture of air and gasoline together. In older vehicles and in simple, modern machines, the mixing of air and fuel is accomplished with a carburetor, but these days a fuel-injection system is more common. Air is brought into the engine and distributed to each cylinder by a series of tubes called an intake (or inlet) manifold. These come in numerous shapes, but a simple visualization of the manifold for a V-8 engine resembles a cylindrical octopus laying on top of the engine with one leg going to every cylinder.

Four-stroke engines are so called because each piston turns fuel into motion by going through four distinct strokes: intake, compression, power, and exhaust. During intake an intake valve at the top of the cylinder bore opens and the fuel-air mixture is drawn into the cylinder cavity by the downward motion of the piston. The piston then screams upward during compression, crushing the mixture into 1/8th or 1/10th its original volume, depending on the engine’s compression ratio. Then a spark plug ignites the mixture, beginning the “power” phase. The subsequent explosion drives the piston downward, with the resulting force being distributed to the tires and causing them to rotate.

Now the cylinder bore is filled with noxious fumes, and an exhaust valve opens at the top of the cylinder bore to allow the upward motion of the piston to expel them.

The spectacular synchronization between intake valves, exhaust valves, and pistons is achieved in part by a camshaft. The camshaft is a metal rod with tear-drop shaped lobes attached to it, each one of which connects through a rocker arm to either an intake valve or an exhaust valve. Rocker arms have a long side and a short side, and as the camshaft spins one lobe presses against the short side of a rocker arm which causes the long side to descend and open a valve.

These valves are spring loaded, so when the camshaft rotates and the lobe disengages its rocker arm the valve shuts again. Taken together the camshaft, valves, and rocker arms are called the valvetrain, and are connected to the crankshaft by a timing belt which keeps their motion in tune.

The result can be viewed as an exquisitely timed dance of fire and steel: a piston expels exhaust through an exhaust valve opened by a rocker arm, and is thus ready to begin its cycle anew; the crankshaft rotates, and the piston is drawn downward by its connecting rod; the camshaft rotates, synchronized to the crankshaft by a timing belt, and one of its lobes touches a rocker arm which opens the intake valve for the piston; the fuel-air mixture enters the cylinder bore, sucked in by the piston’s descent; the crankshaft continues to spin, now pushing the piston upward and compressing the mixture into a tiny space; there comes a spark, and an explosion, which fires the piston downward with great violence; the vehicle moves; the camshaft, bound by the timing belt to the crankshaft, now uses a different rocker arm to open the exhaust valve; the crankshaft sends the piston skyward again, and the exhaust is expelled.
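The repeating rhythm described above can be sketched as a toy state machine in a few lines of Python. This is purely an illustration of the stroke sequence, not a physical model; the function and variable names are my own.

```python
# Toy model of the four-stroke cycle: each call to step() advances
# a single piston through one stroke of the repeating sequence.

STROKES = ["intake", "compression", "power", "exhaust"]

def step(stroke_index):
    """Return the current stroke's name and the index of the next stroke."""
    name = STROKES[stroke_index % 4]
    return name, (stroke_index + 1) % 4

# Walk one piston through two full cycles (eight strokes).
index = 0
history = []
for _ in range(8):
    name, index = step(index)
    history.append(name)

print(history)
# The sequence simply repeats: intake, compression, power, exhaust, ...
```

In a real engine, of course, the cylinders are staggered so that at any moment some piston is on its power stroke, which is what keeps the crankshaft turning smoothly.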

This process naturally generates enormous amounts of friction. The engine is able to withstand this because much of its surface area is coated with oil, which in addition to lubrication also serves to marginally cool the engine down. The engine’s oil reservoir is called a sump, and usually sits below the crankshaft. An oil pump sends the oil to an oil filter before it is distributed by oil channels to the crankshaft bearings, cylinder bore, valvetrain, and anywhere else metal is touching metal. After performing its job the oil returns to the sump to be sent through the cycle again. Like everything else, oil breaks down eventually, which is why it must be regularly changed to keep the engine running smoothly.

But the cooling effects of oil are very minor compared to the tremendous amounts of heat created by combustion, which means that engines require an additional dedicated cooling system. While it is possible to air cool an engine, most vehicles rely on liquid cooling.

As the coolant of choice, water sits in a plastic tank waiting to be pumped throughout the engine. Because vehicles are expected to operate year round in a variety of different conditions the coolant must be protected from extreme cold by antifreeze and from extreme heat by being pressurized enough to push its boiling point up into a safe range.

A water pump sends the coolant through a number of hoses which spread like blood vessels through the engine, absorbing heat. When the coolant has absorbed as much heat as it is designed to, it is sent to the radiator. Consisting of many thin, usually horizontal tubes, the radiator is designed to “spread” the coolant out so that it releases the heat it has absorbed into the air, effectively carrying it away from the engine. Proper heat dissipation requires there to be a constant stream of air running over the radiator’s tubing; this is simple when the vehicle is going fast, but small electric fans are required to maintain airflow when the vehicle is going slow or at a stop.

Modern engines make use of ingenious devices to maintain the appropriate coolant pressure and temperature. The radiator pressure cap is built so that when pressure exceeds a certain threshold a small amount of coolant is let out into a reserve tank where it waits until it can be reintroduced into the cooling system. The mechanical thermostat is calibrated to open only when the coolant reaches a certain temperature. If the coolant is still cool it is recirculated through the engine, but if it has gotten hot it is sent to the radiator to cool down.

How then does the engine receive the initial spark it requires to turn over? Usually a lead-acid battery and an induction coil are used together to begin ignition, after which point the engine is somewhat self-sustaining. As the engine runs it spins an alternator, which is just a small generator inside the car that feeds energy back to the battery.

That covers the basics! Obviously there is a vast amount of additional material that could be included here. If time allows I’d like to write a bit about different engine types and engine improvements that could be on the horizon, as well as possibly getting a bit into the history of these remarkable contraptions that have done so much to shrink distances.

As always, thank you for reading.


The STEMpunk Project: As The World Opens

She moved slowly along the length of the motor units, down a narrow passage between the engines and the wall. She felt the immodesty of an intruder, as if she had slipped inside a living creature, under its silver skin, and were watching its life beating in gray metal cylinders, in twisted coils, in sealed tubes, in the convulsive whirl of blades in wire cages. The enormous complexity of the shape above her was drained by invisible channels, and the violence raging within it was led to fragile needles on glass dials, to green and red beads winking on panels, to tall, thin cabinets stenciled “High Voltage.”

Why had she always felt that joyous sense of confidence when looking at machines?—she thought. In these giant shapes, two aspects pertaining to the inhuman were radiantly absent: the causeless and the purposeless. Every part of the motors was an embodied answer to “Why?” and “What for?”—like the steps of a life-course chosen by the sort of mind she worshipped. The motors were a moral code cast in steel.

They are alive, she thought, because they are the physical shape of the action of a living power—of the mind that had been able to grasp the whole of this complexity, to set its purpose, to give it form. For an instant, it seemed to her that the motors were transparent and she was seeing the net of their nervous system. It was a net of connections, more intricate, more crucial than all of their wires and circuits: the rational connections made by that human mind which had fashioned any one part of them for the first time.

They are alive, she thought, but their soul operates them by remote control. Their soul is in every man who has the capacity to equal this achievement. Should the soul vanish from the earth, the motors would stop, because that is the power which keeps them going—not the oil under the floor under her feet, the oil that would then become primeval ooze again—not the steel cylinders that would become stains of rust on the walls of the caves of shivering savages—the power of a living mind —the power of thought and choice and purpose.

Atlas Shrugged, Ayn Rand

In the classic film American Beauty there is a famous scene wherein one character shows another a video of a plastic bag as it’s blown about by the wind. In whispers he describes how beautiful he found the experience of watching it as it danced, and amidst platitudes about “a benevolent force” he notes that this was the day he fully learned that there is a hidden universe behind the objects which most people take for granted.

One of the chief benefits of The STEMpunk Project has been that it has reinforced this experience in me. While I have thoroughly enjoyed gaining practical knowledge of gears, circuits, and CPUs, perhaps the greater joy has come from a heightened awareness of the fact that the world is shot through with veins of ingenuity and depth.

Understanding the genesis of this awareness requires a brief detour into psychology. Many people seem to labor under the impression that perception happens in the sense organs. Light or sound from an object hits someone and that person observes the object. Cognitive science shows definitively that this is not the case. Perception happens in the brain, and sensory data are filtered heavily through the stock of concepts and experiences within the observer. This is why an experienced mechanic can listen to a malfunctioning engine and hear subtle clues which point to one possible underlying cause or another where I only hear a vague rattling noise.

As my conceptual toolkit increases, therefore, I can expect to perceive things that were invisible to me before I had such knowledge. And this has indeed been the case. More than once I have found myself passing some crystallized artifact of thought — like a retaining wall, or an electrical substation — and wondering how it was built. That this question occurs to me at all is one manifestation of a new perspective on the infrastructure of modern life which is by turns fascinating, humbling, and very rewarding.

I have begun to see and appreciate the symmetry of guard rails on a staircase, the system of multicolored pipes carrying electricity and water through a building, the lattice of girders and beams holding up a bridge; each one the mark of a conscious intelligence, each one a frozen set of answers to a long string of “whys” and “hows”.

This notion can be pushed further: someone has to make not just the beams, but also the machinery that helps to make the beams, and the machinery which mines the materials to make the beams, and the machinery which makes the trucks which carry raw materials and finished products to where they are needed, like ripples in a fabric of civilization pulsing across the world [1].

It’s gorgeous.

A corollary to the preceding is an increased confidence in my own ability to understand how things work, and with it a more robust sense of independent agency. For most of my life I have been a very philosophical person: I like symbols and abstractions, math, music, and poetry. But if every nut and bolt in my house was placed there in accordance with the plans of a human mind, then as the possessor of a (reasonably high-functioning) human mind I ought to be able to puzzle out the basic idea.

Don’t misunderstand me: I know very well that poking around in a breaker box without all the appropriate precautions in place could get me killed. I still approach actual physical systems carefully. But I like to sit in an unfinished basement and trace the path from electrical outlet to conduit to box to subpanel to main panel. On occasion I even roll up my sleeves and actually fix things, albeit after doing a lot of research first.

In fact, you can do a similar exercise right now, wherever you are, to experience some of what I’ve been describing without going through the effort of The STEMpunk Project. Chances are if you’re reading this you’re in a room, probably one built with modern techniques by a contractor’s crew.

Set a timer on your phone for five minutes, and simply look around you. Perhaps your computer is sitting on a table or a desk. What kind of wood is the desk made out of? Were the legs and top machine-made or crafted by hand? If it has a rolling top, imagine how difficult it must have been for the person who made the first prototype.

Does the room have carpet or hardwood floors? Have you ever seen the various materials that go under carpets? Could you lay carpet, if you needed to replace a section? Are different materials used beneath carpet and beneath hardwood? If so, why?

You’re probably surrounded by four walls. Look at where they meet the floor. Is there trim at the seam? What purpose does it serve, and how was it installed so tightly? Most people know that behind their walls there are evenly-spaced boards called “studs”. Who figured out the optimum amount of space between studs? How do you locate studs when you want to hang a picture or a poster on your wall? Probably with a stud finder. How did they find studs before the stud finder was invented?

Does the ceiling above you lie flat or rise up to a point? If it’s a point, have you ever wondered how builders get the point of the ceiling directly over the center of the room? Sure, they probably took measurements of the length and width of the room and did some simple division to figure out where the middle lies. But actually cutting boards and rafters and arranging them so that they climb to an apex directly over the room’s midpoint is much harder than it sounds.

If you do this enough you’ll hopefully find that the mundane and quotidian are surprisingly beautiful in their own way. Well-built things, even just dishwashers and ceiling fans, possess an order and exactness to rival that of the greatest symphonies.

I’m glad I learned to see it.


[1] See Leonard Read’s classic essay I, Pencil, for more.

The STEMpunk Project: Fifth Month’s Progress

Because July was mostly spent reading and watching YouTube lectures I’ve opted not to include a picture this month.

During the computing module I was able to maintain fairly sharp boundaries between the three different stages, but this proved much harder in the electronics module. Trying to make sense of even basic circuits required me to spend at least a little time digging into theoretical discussions of voltage, current, power, and resistance. And while I did talk an electrician into letting me work for him, it looks like that won’t happen for a few weeks, so the hands-on stage will have to wait until then.

Nevertheless, I read most of “Basic Electricity”, a dense little manual written by the U.S. Navy, which I complemented with Joe Gryniuk’s informative and entertaining course on beginning electronics. I also watched a number of videos on residential wiring, with a special emphasis on wiring breaker boxes and electrical subpanels (see 1, 2, 3, 4).

Further, I managed to write posts on batteries, transistors, the differences between three common means of storing charge in a circuit, and a basic treatment of the theoretical foundations of electronics (linked above). Though not directly STEMpunk-related, I also explored the existence of a fascinating quiet spot in the electromagnetic spectrum and reviewed James Michener’s excellent novel “Space”.

Not a bad haul!

Profundis: Space

One of mankind’s crowning achievements has been our ascent to the stars. From time immemorial the twinkling lights in the night sky have drawn the attention of the wonder- and wander-hungry among us, who catalogued them, grouped them into shapes, tracked their movements, navigated by them, and wove them into the rich tapestry of the world’s mythical traditions.

In “Space”, James A. Michener deftly explores the magisterial arc of our titanic effort to escape the pull of gravity. He spends much time building rich backstories for his fictionalized characters, with the result being that these men and women seem almost to stand up from the page and assume a life of their own. The tragic deaths of the test pilots who became the first astronauts are genuinely saddening; we sympathize with Stanley Mott as he tackles the sisyphean task of fusing conflicting motives and the tangling whirl between bureaucracy and pure science into an alloy capable of solving the greatest engineering problems in history; whatever disdain we may have for Tucker Thompson, we don’t envy the journalist as he tries to shape public opinion so as to maintain support for the space effort. Even the great fraud Leopold Strabismus is treated with a sensitivity and nuance that makes him borderline likable. It’s hard to believe these people never existed!

But Michener certainly doesn’t shy away from extended discussions of orbital mechanics, planetology, or rocket science, and I found that I learned a lot. With the benefit of hindsight it can be hard to remember that the engineers who dreamed of going to space first had to build knowledge that is now taught in high schools. Why, for example, is the atmosphere structured such that temperature steadily drops with rising altitude before abruptly climbing up to almost 2000 °C and then falling again?  And if heat sinks prove too heavy to shield a craft reentering the atmosphere what kind of material could be used as an ablative that won’t burn away too quickly?

As years became decades and dreams took physical shape these and many other problems were solved, and thus the first unsteady steps of man toward the heavens blossomed into a race toward the furthest reaches of the solar system, and beyond. This is truly the tale of our greatest triumph, told in exquisite detail by one of our ablest scribes.


The STEMpunk Project: Foundations in Electronics Theory


Upon first seeing a circuit diagram like the above, with its dizzying, labyrinthine interconnections and mysterious hieroglyphics, you can be forgiven for believing that electronics might forever be beyond comprehension. And it is true that while the field of electronics has a useful array of water-based metaphors for explaining where electrons are going, there are some strange things happening deep inside the devices that make modern life possible.

All that having been said, understanding circuits boils down to being able to trace the interactions of four basic quantities: voltage, current, resistance, and power.

Voltage, measured in volts, is often analogized as being like water pressure in a hose. For a given hose with a set diameter and length, more water pressure is going to mean more water flow and less water pressure is going to mean less water flow. If two 100-gallon tanks, one empty and one full, are connected by a length of pipe with a shutoff valve at its center, the water in the full tank is going to exert a lot of pressure on the valve because it ‘wants’ to flow into the empty tank.

Voltage is essentially electrical pressure, or, more technically, a difference in electrical potential. The negative terminal of a battery contains many electrons which, because of their like charges, are repelling each other and causing a build up of pressure. Like the water in the 100-gallon tank they ‘want’ to flow through the conductor to the positive terminal.

Current, measured in amps, is the amount of electric charge flowing past a given point each second, not unlike the amount of water flowing through a hose. If more pressure (i.e. ‘voltage’) is applied, then current goes up, and it correspondingly drops if pressure decreases. Returning to our two water tanks, how could we increase water pressure so as to get more water to flow? By replacing the full 100-gallon tank with a full 1000-gallon tank!

But neither the water in the pipe nor the current in the wire flows unimpeded. Both encounter resistance, measured in ohms when in a circuit, in the form of friction from their respective conduits. No matter how many gallons of water we put in the first tank, the pipe connecting them only has so much space through which water can move, and if we increase the pressure too much the pipe will simply burst. But if we increase its diameter, its resistance decreases and more water can flow through it at the same amount of pressure.
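The wire version of this intuition can be checked with the standard formula for the resistance of a round conductor, R = ρL/A. The copper resistivity figure below is the commonly quoted room-temperature value, used here only for illustration; the function name is my own.

```python
import math

# Resistance of a round wire: R = resistivity * length / cross-sectional area.
# Doubling the diameter quadruples the area, cutting resistance to a quarter.

COPPER_RESISTIVITY = 1.68e-8  # ohm-meters (commonly quoted room-temperature value)

def wire_resistance(length_m, diameter_m, resistivity=COPPER_RESISTIVITY):
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * length_m / area

thin = wire_resistance(10.0, 0.001)   # 10 m of 1 mm diameter copper wire
thick = wire_resistance(10.0, 0.002)  # same length, 2 mm diameter

print(round(thin / thick, 2))  # -> 4.0: double the diameter, a quarter the resistance
```

This is the same move as widening the pipe between the tanks: more cross-section, less opposition to flow at the same pressure.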

At this point you may be beginning to sense the basic relationship between voltage, current, and resistance. If we increase voltage we get more current because voltage is like pressure, but this can only be pushed so far because the conductor exhibits resistance to the flow of electricity. Getting a bigger wire means we can get more current at the same voltage, or we can raise the voltage further to push even more current through.

If only there were some simple, concise mathematical representation of all this! There is, and it’s called Ohm’s Law:

E = IR

Here ‘E’ means voltage, ‘I’ means current, and ‘R’ means resistance. This equation says that voltage is directly proportional to the product of current and resistance. Some basic algebraic manipulations yield other useful equations:

I = E/R

R = E/I

From these we can see clearly what before we were only grasping with visual metaphors. Current is directly proportional to voltage: more pressure means more current. It is inversely proportional to resistance: more resistance means less current. Knowing any two of these values allows us to solve for the third.
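A minimal sketch of the three rearrangements in Python (the function names and example values are mine, chosen for readability):

```python
# Ohm's law in its three arrangements: E = I * R, I = E / R, R = E / I.

def voltage(current_a, resistance_ohm):
    return current_a * resistance_ohm

def current(voltage_v, resistance_ohm):
    return voltage_v / resistance_ohm

def resistance(voltage_v, current_a):
    return voltage_v / current_a

# A 12 V supply across a 6-ohm resistor drives 2 A of current:
print(current(12, 6))       # -> 2.0
# Doubling the resistance at the same voltage halves the current:
print(current(12, 12))      # -> 1.0
# Knowing any two values lets us solve for the third:
print(resistance(12, 2.0))  # -> 6.0
```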

The last fundamental quantity we need to understand is power. In physics, power is defined as the rate at which work is done. Pushing a rock up a hill requires a certain amount of power, and pushing a bigger rock up a hill, or the same rock up a steeper hill, requires more power.

For our purposes power, measured in watts, can be represented by this equation:

P = IE

You have a given amount of electrical pressure and a given amount of electrical flow, and together they give you the ability to turn a lightbulb on. As before we can rearrange the terms in this equation to generate other useful insights:

I = P/E

E = P/I

From this we can deduce, for example, that for a 1000 watt appliance increasing the voltage allows us to draw less current. This is very important if you’re trying to do something like build a flower nursery and need to know how many lights will be required, how many watts will be used by each light, and how many amps and volts can be supplied to your building.
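As a quick sanity check of the 1000-watt example, here is the same deduction in a few lines of Python (the grow-light scenario and the specific voltages are hypothetical, picked only to show the trade-off):

```python
# P = I * E, so the current an appliance draws is I = P / E.

def amps_drawn(power_w, voltage_v):
    return power_w / voltage_v

# A hypothetical 1000 W grow light:
print(round(amps_drawn(1000, 120), 2))  # -> 8.33 A on a 120 V circuit
print(round(amps_drawn(1000, 240), 2))  # -> 4.17 A: double the voltage, half the current
```

This is exactly why knowing the supply voltage matters when figuring out how many lights a given circuit can safely carry.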

There you have it! No matter how complicated a power grid or the avionics on a space shuttle might seem, everything boils down to how power, voltage, current, and resistance interact.

The majority of my knowledge on this subject comes from an excellent series of lectures given by a former Navy-trained electrician, Joe Gryniuk. His teaching style is jocular and his practical knowledge vast. Sadly, near video eighteen or so, the audio quality begins to degrade, which makes the lectures significantly less enjoyable. Still highly recommended.

The STEMpunk Project: Inductors, Capacitors, Batteries

When I first began learning about basic electrical components I had a hard time distinguishing between inductors, capacitors, and batteries because they all appear to do the same thing: store energy. As the inner workings of these devices became less opaque, however, myriad differences came into view. To help the beginner avoid some of my initial confusion I sat down to write a brief treatment of all three.

Though inductors, capacitors, and batteries do indeed store energy their means of doing so vary tremendously. This has implications for how quickly they can be charged, how quickly they can discharge, when and where they are most appropriately used, what future developments we can expect, etc.

Inductors store energy electromagnetically. Though there is some controversy over the specific mechanics of energy storage in an inductor, most seem to agree that it relies on the magnetic field that is created when current runs through the inductor wire. As current increases the magnetic field increases, opposing the change in the current and absorbing energy in the process. When current levels off the magnetic field just sits there, holding on to its energy stores and not hassling the electrons as they flow through. But when current begins decreasing the magnetic field begins to collapse, and its energy goes towards keeping the electrons flowing. Thus the energy stored in the initial buildup of current is discharged when current begins to slow down.
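For reference, the energy held in that magnetic field follows the standard textbook formula W = (1/2)LI², where L is inductance in henries and I is current in amps. A quick sketch, with values chosen arbitrarily for illustration:

```python
# Energy stored in an inductor's magnetic field: W = (1/2) * L * I^2.

def inductor_energy_j(inductance_h, current_a):
    return 0.5 * inductance_h * current_a ** 2

# A 10 mH inductor carrying 2 A stores 20 mJ:
print(inductor_energy_j(0.010, 2.0))  # -> 0.02
```

Note that the stored energy depends on the current flowing, which is why it is released as the current winds down.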

Capacitors store energy electrostatically. A basic capacitor is two conductive plates separated by an insulator, like air or mica. When current begins to flow onto one of these plates there is a build-up of electrons and a resulting negative charge. On the other plate, electrons are drawn away both by the repulsive force of the electrons on the first plate and the attractive force of the positive terminal of the voltage source. As this is happening the orbits of the electrons of the atoms in the insulator separating the two plates begin to warp, spending more time near the positively-charged plate. Energy is thus stored in the field between the plates in a way similar to how energy is stored in a compressed spring.
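The capacitor's counterpart formula is W = (1/2)CV², where C is capacitance in farads and V is the voltage across the plates. Again a quick sketch with arbitrary illustrative values:

```python
# Energy stored in the field between a capacitor's plates: W = (1/2) * C * V^2.

def capacitor_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2

# A 100 microfarad capacitor charged to 12 V stores 7.2 mJ:
print(capacitor_energy_j(100e-6, 12.0))  # -> 0.0072
```

Here the stored energy depends on the voltage across the plates rather than the current through the device, one concrete way the two components differ despite both "storing energy."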

Batteries store energy electrochemically. As I’ve written before, the simplest kind of battery consists of two electrodes made of different materials immersed in an electrolyte bath. The electrodes must be made such that one is more likely to give up electrons than the other. When a load is attached to the battery, electrons flow from the ‘negative’ terminal through the load to the ‘positive’ terminal. Unlike inductors and capacitors, batteries bring all their charge to the circuit from the beginning.

The STEMpunk Project: Transistors

After writing my post on basic electrical components I realized that batteries and transistors were going to require a good deal more research to understand adequately. Having completed my post on the former, the time has finally come to elucidate the foundation of modern electronics and computing: the humble transistor.

Transistor Origins

The development of the transistor began out of a need to find a superior means of amplifying telephone signals sent through long-distance wires. Around the turn of the twentieth century American Telephone and Telegraph (AT&T) had begun offering transcontinental telephone service as a way of staying competitive. The signal boost required to allow people to talk to each other over thousands of miles was achieved with triode vacuum tubes based on the design of Lee De Forest, an American inventor. But these vacuum tubes consumed a lot of power, produced a lot of heat, and were unreliable to boot. Mervin Kelly of Bell Labs recognized the need for an alternative and, after WWII, began assembling the team that would eventually succeed.

Credit for pioneering the transistor is typically given to William Shockley, John Bardeen, and Walter Brattain, also of Bell Labs, but they were not the first people to file patents for the basic transistor principle: Julius Lilienfeld filed one for the field-effect transistor in 1925 and Oskar Heil filed one in 1934. Neither man made much of an impact in the growing fields of electronics theory or electronics manufacturing, but there is evidence that William Shockley and Gerald Pearson, a co-worker at Bell Labs, did build a functioning transistor prototype from Lilienfeld’s patents.

Shockley, Brattain, and Bardeen understood that if they could solve certain basic problems they could build a device that would act like a signal amplifier in electronic circuits by exploiting the properties of semiconductors to influence electron flow.

Actually accomplishing this, of course, proved fairly challenging. After many failed attempts and cataloging much anomalous behavior a practical breakthrough was achieved. A strip of an excellent conductor, gold, was attached to a plastic wedge and then sliced with a razor, producing two gold foil leads separated by an extremely small space. This apparatus was then placed in contact with a germanium crystal which had an additional lead attached at its base. The space separating the two pieces of gold foil was just large enough to prevent electron flow. Unless, that is, current were applied to one of the gold-tipped leads, which caused ‘holes’ — i.e. spaces without electrons — to gather on the surface of the crystal. This allowed electron flow to begin between the base lead and the other gold-tipped lead. This device became known as the point-contact transistor, and earned the trio a Nobel Prize.

Though the point-contact transistor showed promise and was integrated with a number of electrical devices it was still fragile and impractical at a larger scale. This began to change when William Shockley, outraged at not receiving the credit he felt he deserved for the invention of this astonishing new device, developed an entirely new kind of transistor based on a ‘sandwich’ design. The result was essentially a precursor to the bipolar junction transistor, which is what almost everyone in the modern era means by the term ‘transistor’.

Under the Hood

In the simplest possible terms a transistor is essentially a valve for controlling the flow of electrons. Valves can be thought of as amplifiers: when you turn a faucet handle, force produced by your hand is amplified to control the flow of thousands of gallons of water, and when you press down on the accelerator in your car, the pressure of your foot is amplified to control the motion of thousands of pounds of fire and steel.

Valves, in other words, allow small forces to control much bigger forces. Transistors work in a similar way.

One common type of modern transistor is the bipolar junction NPN transistor, a cladistic descendant of Shockley’s original design. It is constructed from alternating layers of silicon which are doped with impurities to give them useful characteristics.

In its pure form silicon is a textbook semiconductor. It contains four electrons in its valence shell which causes it to form very tight crystal lattices that typically don’t facilitate the flow of electrons. The N layer is formed by injecting trace amounts of phosphorus, which contains five valence electrons, into this lattice. It requires much less energy to knock this fifth electron loose than it would to knock loose one of the four valence electrons in the silicon crystal, making the N layer semiconductive. Similarly, the P layer is formed by adding boron which, because of the three electrons in its valence shell, leaves holes throughout the silicon into which electrons can flow.

It’s important to bear in mind that neither the P nor the N layers are electrically charged. Both are neutral and both permit greater flow of electrons than pure silicon would. The interface between the N and P layers quickly becomes saturated as electrons from the phosphorus move into the holes in the valence shell of the boron. As this happens it becomes increasingly difficult for electrons to flow between the N and P layers, and eventually a boundary is formed. This is called the ‘depletion layer’.

Now, imagine that there is a ‘collector’ lead attached to the first N layer and another ‘emitter’ lead attached to the other N layer. Current cannot flow between these two leads because the depletion layer at the P-N junction won’t permit it. Between these two layers, however, there is a third lead, called a ‘base’, placed very near the P layer. By making the base positively charged electrons can overcome the P-N junction and begin flowing from the emitter to the collector.

The key here is to realize that the base current required to get current moving is much smaller than the current flowing to the collector, and that the collector current can be increased or decreased by a corresponding change in the current to the base. This is what gives the transistor its amplifier properties.
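The amplifier behavior can be sketched with the idealized textbook model of an NPN in its active region: collector current is roughly the base current multiplied by a gain factor beta (often on the order of 100), up to the point where the transistor saturates. This is my own simplified illustration; the beta and limit values are arbitrary examples.

```python
def collector_current(i_base, beta=100, i_c_max=0.1):
    """Idealized active-region model: Ic = beta * Ib, clipped once the transistor saturates."""
    return min(beta * i_base, i_c_max)

# a tiny 0.1 mA base current controls a collector current 100x larger
print(collector_current(0.0001))  # small base current, amplified
print(collector_current(0.01))    # large base current, clipped at saturation
```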

Transistors and Moore’s Law

Even more useful than this, however, is the ability of a transistor to act as a switch. Nothing about the underlying physics changes here. If current is not flowing in the transistor it is said to be in cutoff, and if current is flowing in the transistor it is said to be in saturation. This binary property of transistors makes them ideally suited for the construction of logic gates, which are the basic components of every computer ever made.
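To give a flavor of how switches become logic, here is a toy model of my own (not from the post): treat each transistor as an ideal switch that conducts when its base is high. Two such switches in series to ground from a pulled-up output behave as a NAND gate, and NAND alone is enough to build every other gate.

```python
def nand(a, b):
    # output is pulled low only when BOTH "transistors" conduct
    return 0 if (a and b) else 1

def inverter(a):
    # NOT is just a NAND with its inputs tied together
    return nand(a, a)

def and_gate(a, b):
    # AND is a NAND followed by an inverter
    return inverter(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> NAND:", nand(a, b), "AND:", and_gate(a, b))
```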

A full discussion of logic gate construction would be well outside the purview of this essay, but it is worth briefly discussing one popular concept which requires a knowledge of transistors in order to be understood.

Named after Intel co-founder Gordon Moore, Moore’s Law is sometimes stated as the rule that computing power will double roughly every two years. The more accurate version is that the number of transistors which can fit in a given unit area will double every two years. These two definitions are fairly similar, but keeping the latter in mind will allow you to better understand the underlying technology and where it might head in the future.
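The doubling rule is easy to play with numerically. A small sketch (the starting count and time span are arbitrary example numbers, not real chip data):

```python
def transistor_density(initial, years, doubling_period=2):
    """Project transistor count per unit area, doubling every doubling_period years."""
    return initial * 2 ** (years / doubling_period)

# starting from a hypothetical 1000 transistors per unit area:
for years in [0, 2, 4, 10]:
    print(f"after {years:>2} years: {transistor_density(1000, years):,.0f}")
```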

Moore’s law has held for as long as it has because manufacturers have been able to make transistors smaller and smaller. Obviously this can’t continue forever, both because at a certain transistor density power consumption and heat dissipation become serious problems, and because at a certain size effects like quantum tunneling prevent the sequestering of electrons.

A number of alternatives to silicon-based chips are being seriously considered as a way of extending Moore’s Law. Because of how extremely thin it can be made, graphene is one such contender. The problem, however, is that the electrophysical properties of graphene are such that building a graphene transistor that can switch on and off is not straightforward. A graphene-based computer, therefore, might well have to develop an entirely different logical architecture to perform the same tasks as modern computers.

Other potentially fruitful avenues are quantum computing, optical computing, and DNA computing, all of which rely on very different architectures than conventional von Neumann computers. As I’m nearing the 1500 word mark I think I’ll end this essay here, but I do hope to return to these advanced computing topics at some point in the future 🙂


The STEMpunk Project: Batteries

In The STEMpunk Project: Basic Electrical Components I wrote about resistors, capacitors, inductors, and diodes, but I had originally wanted to include batteries and transistors as well. As I did research for that post, however, it occurred to me that these latter two devices were very complex and would require their own discussion. In today’s post I cover a remarkable little invention familiar to everyone: batteries.

Battery Basics

The two fundamental components of a battery are electrodes and an electrolyte, which together make up one cell. The electrodes are made of different metals whose respective properties give rise to a difference in electrical potential energy which can be used to induce current flow. These electrodes are then immersed in an electrolyte, which can be made from a sulfuric acid chemical bath, a gel-like paste, or many other materials. When an external conductor is hooked up to each electrode current will flow from one of them (the ‘negative terminal’) to the other (the ‘positive terminal’).

Battery cells can be primary or secondary, and are distinguished by whether or not the chemical reactions happening in the cell cause one of the terminals to erode. The simplest primary cell consists of a zinc electrode as the negative terminal, a carbon electrode as the positive terminal, and sulfuric acid diluted with water as the electrolyte. As current flows zinc molecules combine with sulfuric acid to produce zinc sulfate and hydrogen gas, thus consuming the zinc electrode.
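The discharge reaction described above can be written out explicitly (a standard textbook equation, added here for reference rather than taken from the original post):

Zn + H2SO4 → ZnSO4 + H2

Each zinc atom gives up two electrons as it dissolves into the electrolyte, and it is this steady surrender of electrons that sustains the current.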

But even when not connected to a circuit, impurities in the zinc electrode can cause small amounts of current to flow in the electrode, and correspondingly slow rates of erosion to occur. This is called local action and is the reason why batteries can die even when not used for long periods of time. Of course there exist techniques for combating this, like coating the zinc electrode in mercury to pull out impurities and render them less reactive. None of these work flawlessly, but advances in battery manufacturing have allowed for the creation of long-storage batteries with a sealed electrolyte, released only when the battery is actually used, and of primary cell batteries that can be recharged.

A secondary cell works along the same chemical principles as a primary cell, but the electrodes and electrolyte are composed of materials that don’t dissolve when they react. In order to be classifiable as ‘rechargeable’ it must be possible to safely reverse the chemical reactions inside the cell by means of running a current through it in the reverse direction of how current normally flows out of it. Unlike the zinc-carbon voltaic cell discussed above, for example, in a nickel-cadmium battery the molecules formed during battery discharge are easily reverted to their original state during recharging.

Naturally it is difficult to design and build such a sophisticated electrochemical mechanism, which is why rechargeable batteries are more expensive.

Much more information on the chemistry of primary and secondary cells can be found in this Scientific American article. I also found this article on how batteries work from Save On Energy to be helpful.

Combining Batteries in Series or in Parallel

Like most other electrical components batteries can be hooked up in series, in parallel, or in series-parallel. To illustrate, imagine four batteries lined up in a row, with their positive terminals on the left and their negative terminals on the right. If wired in series, the negative terminal on the rightmost battery would be the negative terminal for the whole apparatus and the positive terminal on the leftmost battery would be the positive terminal for the whole apparatus. In between, the positive terminals of one battery are connected to the negative terminals of the next battery, causing the voltage of the individual batteries to be cumulative. This four-battery setup would generate six volts total (1.5V per battery multiplied by the number of batteries), and the total current of the circuit load (a light bulb, a radio, etc.) is non-cumulative and would flow through each battery.

If wired in parallel, the positive and negative terminals of the rightmost battery would connect to the same terminal on the next battery, and the terminals for the leftmost battery would connect to the external circuit. In this setup it is voltage which is non-cumulative and current which is cumulative. By manipulating and combining these properties of batteries it is possible to supply power to a wide variety of circuit configurations.
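The arithmetic above can be sketched in a few lines of Python (my own illustration; the per-cell voltage and capacity are arbitrary example values, and a real pack would also involve internal resistance, which this ignores):

```python
def series_pack(cell_voltage, cell_capacity, n):
    """In series: voltages add, capacity (amp-hours) stays that of a single cell."""
    return cell_voltage * n, cell_capacity

def parallel_pack(cell_voltage, cell_capacity, n):
    """In parallel: voltage stays that of a single cell, capacities add."""
    return cell_voltage, cell_capacity * n

# four 1.5 V cells, each with a hypothetical 2.0 Ah capacity
print(series_pack(1.5, 2.0, 4))    # -> (6.0, 2.0)
print(parallel_pack(1.5, 2.0, 4))  # -> (1.5, 8.0)
```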

Different Battery Types [1]

Nickel Cadmium: NiCd batteries are a mature technology and thus well-understood. They have a long life but relatively low energy density and are thus suited for applications like biomedical equipment, radios, and power tools. They do contain toxic materials and aren’t eco-friendly.

Nickel-Metal Hydride: NiMH batteries have a shorter life span but a correspondingly higher energy density. Unlike their NiCd cousins, NiMH batteries contain nothing toxic.

Lead Acid: Lead Acid batteries tend to be very heavy and so are most suitable for use in places where weight isn’t a factor, like hospital equipment, emergency lighting, and automobiles.

Absorbent Glass Mat: The AGM is a special kind of lead acid battery in which the sulfuric acid electrolyte is absorbed into a fine fiberglass mesh. This makes the battery spill-proof and capable of being stored for very long periods of time. They are also vibration resistant and have a high power density, all of which combine to make them ideal for high-end motorcycles, NASCAR, and military vehicles.

Lithium Ion: Li-ion is the fastest growing battery technology. Being high-energy and very lightweight makes them ideal for laptops and smartphones.

Lithium Ion Polymer: Li-ion polymer batteries are very similar to plain Li-ion batteries but even smaller.

The Future of Batteries

Batteries have come a very long way since Ewald von Kleist first stored static charge in a Leyden jar in 1745. Lithium Ion seems to be the hot topic of discussion, but there are efforts being made at building aluminum batteries, solid state batteries, and microbatteries, and some experts maintain that the exciting thing to watch out for is advances in battery manufacturing.

Hopefully before long we’ll have batteries which power smart clothing and extend the range of electric vehicles to thousands of miles.


[1] Most of this section is just a summary of the information found here.

The STEMpunk Project: Fourth Month’s Progress

Throughout the STEMpunk Project I’m going to try to take a picture each month of the books I’ve read and projects I’ve completed, as sort of a visual metaphor for my progress.

Here’s what I accomplished in June:


I didn’t add any books to my pile because, as in the first stage of the electronics module, I wanted to spend a lot of time tinkering. This was accomplished with the Elenco Electronics Playground (EEP, big toyish thing in right corner), the Sparkfun Inventor’s Kit and the Sparkfun Inventor’s Kit for Photon (both shown directly behind and above the EEP), and the Elenco soldering practice kit (to the left of the EEP). I also included a soldering iron and wire stripper I used while learning to solder.

The table is starting to look a little busier!