The STEMpunk Project: Foundations in Electronics Theory

[Image: a complex circuit diagram]

Upon first seeing a circuit diagram like the above, with its dizzying, labyrinthine interconnections and mysterious hieroglyphics, you can be forgiven for believing that electronics might forever be beyond comprehension. And it is true that while the field of electronics has a useful array of water-based metaphors for explaining where electrons are going, there are some strange things happening deep inside the devices that make modern life possible.

All that having been said, understanding circuits boils down to being able to trace the interactions of four basic quantities: voltage, current, resistance, and power.

Voltage, measured in volts, is often analogized as being like water pressure in a hose. For a given hose with a set diameter and length, more water pressure is going to mean more water flow and less water pressure is going to mean less water flow. If two 100-gallon tanks, one empty and one full, are connected by a length of pipe with a shutoff valve at its center, the water in the full tank is going to exert a lot of pressure on the valve because it ‘wants’ to flow into the empty tank.

Voltage is essentially electrical pressure, or, more technically, a difference in electrical potential. The negative terminal of a battery contains many electrons which, because of their like charges, are repelling each other and causing a buildup of pressure. Like the water in the 100-gallon tank, they ‘want’ to flow through the conductor to the positive terminal.

Current, measured in amps, is the amount of electric charge flowing past a certain point in one second, not unlike the amount of water flowing through a hose. If more pressure (i.e. ‘voltage’) is applied, then current goes up, and it correspondingly drops if pressure decreases. Returning to our two water tanks, how could we increase water pressure so as to get more water to flow? By replacing the full 100-gallon tank with a full 1000-gallon tank!

But neither the water in the pipe nor the current in the wire flows unimpeded. Both encounter resistance, measured in ohms when in a circuit, in the form of friction from their respective conduits. No matter how many gallons of water we put in the first tank, the pipe connecting them only has so much space through which water can move, and if we increase the pressure too much the pipe will simply burst. But if we increase its diameter, its resistance decreases and more water can flow through it at the same amount of pressure.

At this point you may be beginning to sense the basic relationship between voltage, current, and resistance. If we increase voltage we get more current because voltage is like pressure, but this can only be pushed so far because the conductor exhibits resistance to the flow of electricity. Using a bigger wire means we can get more current at the same voltage, or we can increase the voltage to push even more current through the same wire.

If only there were some simple, concise mathematical representation of all this! There is, and it’s called Ohm’s Law:

E = IR

Here ‘E’ means voltage, ‘I’ means current, and ‘R’ means resistance. This equation says that voltage is directly proportional to the product of current and resistance. Some basic algebraic manipulations yield other useful equations:

I = E/R

R = E/I

From these we can see clearly what before we were only grasping with visual metaphors. Current is directly proportional to voltage: more pressure means more current. It is inversely proportional to resistance: more resistance means less current. Knowing any two of these values allows us to solve for the third.
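
To make these relationships concrete, here is a minimal sketch in Python; the 9 V battery and 450 Ω resistor in the example are made-up values, not figures from the text:

```python
def current(voltage_v: float, resistance_ohms: float) -> float:
    """I = E / R: amps from volts and ohms."""
    return voltage_v / resistance_ohms

def resistance(voltage_v: float, current_a: float) -> float:
    """R = E / I: ohms from volts and amps."""
    return voltage_v / current_a

def voltage(current_a: float, resistance_ohms: float) -> float:
    """E = I * R: volts from amps and ohms."""
    return current_a * resistance_ohms

# Example: a 9 V battery across a 450-ohm resistor drives 0.02 A (20 mA) of current.
print(current(9.0, 450.0))  # 0.02
```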

The last fundamental quantity we need to understand is power. In physics, power is the rate at which work gets done. Pushing a rock up a hill at a steady pace requires a certain amount of power, and pushing a bigger rock up the hill, or the same rock up a steeper hill, at the same pace requires more power.

For our purposes power, measured in watts, can be represented by this equation:

P = IE

You have a given amount of electrical pressure and a given amount of electrical flow, and together they give you the ability to turn a lightbulb on. As before we can rearrange the terms in this equation to generate other useful insights:

I = P/E

E = P/I

From this we can deduce, for example, that for a 1000 watt appliance increasing the voltage allows us to draw less current. This is very important if you’re trying to do something like build a flower nursery and need to know how many lights will be required, how many watts will be used by each light, and how many amps and volts can be supplied to your building.
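
A quick sketch of that deduction; the 120 V and 240 V supply voltages below are illustrative assumptions, not figures from the text:

```python
def power_watts(current_a: float, voltage_v: float) -> float:
    """P = I * E: watts from amps and volts."""
    return current_a * voltage_v

def current_draw(power_w: float, voltage_v: float) -> float:
    """I = P / E: amps drawn by a load of known wattage at a given voltage."""
    return power_w / voltage_v

# The same 1000 W appliance draws less current on a higher-voltage supply.
print(current_draw(1000, 120))  # ~8.33 A
print(current_draw(1000, 240))  # ~4.17 A
```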

There you have it! No matter how complicated a power grid or the avionics on a space shuttle might seem, everything boils down to how power, voltage, current, and resistance interact.

The majority of my knowledge on this subject comes from an excellent series of lectures given by a Navy-trained electrician, Joe Gryniuk. His teaching style is jocular and his practical knowledge vast. Sadly, near video eighteen or so, the audio quality begins to degrade and makes the lectures significantly less enjoyable. Still highly recommended.

The STEMpunk Project: Inductors, Capacitors, Batteries.

When I first began learning about basic electrical components I had a hard time distinguishing between inductors, capacitors, and batteries because they all appear to do the same thing: store energy. As the inner workings of these devices became less opaque, however, myriad differences came into view. To help the beginner avoid some of my initial confusion I sat down to write a brief treatment of all three.

Though inductors, capacitors, and batteries do indeed store energy, their means of doing so vary tremendously. This has implications for how quickly they can be charged, how quickly they can discharge, when and where they are most appropriately used, what future developments we can expect, and so on.

Inductors store energy electromagnetically. Though there is some controversy over the specific mechanics of energy storage in an inductor, most seem to agree that it relies on the magnetic field that is created when current runs through the inductor wire. As current increases the magnetic field increases, opposing the change in the current and absorbing energy in the process. When current levels off the magnetic field just sits there, holding on to its energy stores and not hassling the electrons as they flow through. But when current begins decreasing the magnetic field begins to collapse, and its energy goes towards keeping the electrons flowing. Thus the energy stored in the initial buildup of current is discharged when current begins to slow down.

Capacitors store energy electrostatically. A basic capacitor is two conductive plates separated by an insulator, like air or mica. When current begins to flow onto one of these plates there is a buildup of electrons and a resulting negative charge. On the other plate, electrons are drawn away both by the repulsive force of the electrons on the first plate and the attractive force of the positive terminal of the voltage source. As this happens the orbits of the electrons of the atoms in the insulator separating the two plates begin to warp, spending more time near the positively-charged plate. Energy is thus stored in the field between the plates in a way similar to how energy is stored in a compressed spring.

Batteries store energy electrochemically. As I’ve written before, the simplest kind of battery consists of two electrodes made of different materials immersed in an electrolyte bath. The electrodes must be made such that one is more likely to give up electrons than the other. When a load is attached to the battery, electrons flow from the ‘negative’ terminal through the load to the ‘positive’ terminal. Unlike inductors and capacitors, batteries bring all their charge to the circuit in the beginning.
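
To put rough numbers on how differently these three devices are sized, the standard formulas are E = ½LI² for an inductor and E = ½CV² for a capacitor, while a battery’s capacity is usually quoted in milliamp-hours at a nominal voltage. The component values in this sketch are illustrative guesses, not figures from the text:

```python
def inductor_energy_j(inductance_h: float, current_a: float) -> float:
    """E = 1/2 * L * I^2, in joules."""
    return 0.5 * inductance_h * current_a ** 2

def capacitor_energy_j(capacitance_f: float, voltage_v: float) -> float:
    """E = 1/2 * C * V^2, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

def battery_energy_j(capacity_mah: float, voltage_v: float) -> float:
    """Approximate energy: capacity (in amp-hours) * voltage, converted to joules."""
    return (capacity_mah / 1000.0) * voltage_v * 3600.0

print(inductor_energy_j(0.01, 1.0))      # a 10 mH inductor at 1 A   -> 0.005 J
print(capacitor_energy_j(100e-6, 12.0))  # a 100 uF capacitor at 12 V -> 0.0072 J
print(battery_energy_j(2000.0, 1.5))     # an AA-sized cell           -> 10,800 J
```

The point of the comparison is that the battery holds orders of magnitude more energy, while the inductor and capacitor can charge and dump their much smaller stores almost instantly.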

The STEMpunk Project: Transistors

After writing my post on basic electrical components I realized that batteries and transistors were going to require a good deal more research to understand adequately. Having completed my post on the former, the time has finally come to elucidate the foundation of modern electronics and computing: the humble transistor.

Transistor Origins

The development of the transistor began out of a need to find a superior means of amplifying telephone signals sent through long-distance wires. Around the turn of the twentieth century American Telephone and Telegraph (AT&T) had begun offering transcontinental telephone service as a way of staying competitive. The signal boost required to allow people to talk to each other over thousands of miles was achieved with triode vacuum tubes based on the design of Lee De Forest, an American inventor. But these vacuum tubes consumed a lot of power, produced a lot of heat, and were unreliable to boot. Mervin Kelly of Bell Labs recognized the need for an alternative and, after WWII, began assembling the team that would eventually succeed.

Credit for pioneering the transistor is typically given to William Shockley, John Bardeen, and Walter Brattain, also of Bell Labs, but they were not the first people to file patents for the basic transistor principle: Julius Lilienfeld filed one for the field-effect transistor in 1925 and Oskar Heil filed one in 1934. Neither man made much of an impact on the growing fields of electronics theory or electronics manufacturing, but there is evidence that William Shockley and Gerald Pearson, a co-worker at Bell Labs, did build a functioning transistor prototype from Lilienfeld’s patents.

Shockley, Brattain, and Bardeen understood that if they could solve certain basic problems they could build a device that would act like a signal amplifier in electronic circuits by exploiting the properties of semiconductors to influence electron flow.

Actually accomplishing this, of course, proved fairly challenging. After many failed attempts and much cataloging of anomalous behavior, a practical breakthrough was achieved. A strip of gold, an excellent conductor, was attached to a plastic wedge and then sliced with a razor, producing two gold foil leads separated by an extremely small space. This apparatus was then placed in contact with a germanium crystal which had an additional lead attached at its base. The space separating the two pieces of gold foil was just large enough to prevent electron flow. Unless, that is, current were applied to one of the gold-tipped leads, which caused ‘holes’ — i.e. spaces without electrons — to gather on the surface of the crystal. This allowed electron flow to begin between the base lead and the other gold-tipped lead. This device became known as the point-contact transistor, and it earned the trio a Nobel Prize.

Though the point-contact transistor showed promise and was integrated with a number of electrical devices it was still fragile and impractical at a larger scale. This began to change when William Shockley, outraged at not receiving the credit he felt he deserved for the invention of this astonishing new device, developed an entirely new kind of transistor based on a ‘sandwich’ design. The result was essentially a precursor to the bipolar junction transistor, which is what almost everyone in the modern era means by the term ‘transistor’.

Under the Hood

In the simplest possible terms a transistor is essentially a valve for controlling the flow of electrons. Valves can be thought of as amplifiers: when you turn a faucet handle, force produced by your hand is amplified to control the flow of thousands of gallons of water, and when you press down on the accelerator in your car, the pressure of your foot is amplified to control the motion of thousands of pounds of fire and steel.

Valves, in other words, allow small forces to control much bigger forces. Transistors work in a similar way.

One common type of modern transistor is the bipolar junction NPN transistor, a cladistic descendant of Shockley’s original design. It is constructed from alternating layers of silicon which are doped with impurities to give them useful characteristics.

In its pure form silicon is a textbook semiconductor. It contains four electrons in its valence shell which causes it to form very tight crystal lattices that typically don’t facilitate the flow of electrons. The N layer is formed by injecting trace amounts of phosphorus, which contains five valence electrons, into this lattice. It requires much less energy to knock this fifth electron loose than it would to knock loose one of the four valence electrons in the silicon crystal, making the N layer semiconductive. Similarly, the P layer is formed by adding boron which, because of the three electrons in its valence shell, leaves holes throughout the silicon into which electrons can flow.

It’s important to bear in mind that neither the P nor the N layers are electrically charged. Both are neutral and both permit greater flow of electrons than pure silicon would. The interface between the N and P layers quickly becomes saturated as electrons from the phosphorus move into the holes in the valence shell of the boron. As this happens it becomes increasingly difficult for electrons to flow between the N and P layers, and eventually a boundary is formed. This is called the ‘depletion layer’.

Now, imagine that there is a ‘collector’ lead attached to the first N layer and an ‘emitter’ lead attached to the other N layer. Current cannot flow between these two leads because the depletion layer at the P-N junction won’t permit it. Between these two layers, however, there is a third lead, called a ‘base’, attached to the thin P layer. By making the base positively charged, electrons can overcome the P-N junction and begin flowing from the emitter to the collector.

The key here is to realize that the current into the base required to get electrons moving is much smaller than the resulting current flowing into the collector, and that the collector current can be increased or decreased by a corresponding change in the current into the base. This is what gives the transistor its amplifier properties.
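
A minimal numeric sketch of that amplification, using the common simplified model in which collector current is roughly the base current multiplied by a gain factor β (the β of 100 below is a typical illustrative value, not one from the text):

```python
def collector_current_a(base_current_a: float, beta: float = 100.0) -> float:
    """Simplified BJT model in the active region: Ic ≈ beta * Ib.
    Ignores saturation limits and the base-emitter voltage drop."""
    return beta * base_current_a

# A 50 microamp base current controlling a 5 milliamp collector current.
print(collector_current_a(50e-6))  # 0.005
```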

Transistors and Moore’s Law

Even more useful than this, however, is the ability of a transistor to act as a switch. Nothing about the underlying physics changes here. If current is not flowing in the transistor it is said to be in cutoff, and if current is flowing freely it is said to be in saturation. This binary property of transistors makes them ideally suited for the construction of logic gates, which are the basic components of every computer ever made.
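
To show how that on/off behavior becomes a logic gate, here is a purely logical sketch: each transistor is treated as a switch that conducts only when its base is driven, and a NAND gate falls out of two such switches in series with a pull-up. This models the logic only, not voltages or any real gate circuitry:

```python
def npn_switch(base_driven: bool) -> bool:
    """Idealized transistor-as-switch: conducts (saturation) iff the base is
    driven, otherwise blocks (cutoff)."""
    return base_driven

def nand(a: bool, b: bool) -> bool:
    # Two switches in series can pull the output low only when both conduct;
    # otherwise a pull-up keeps the output high.
    return not (npn_switch(a) and npn_switch(b))

print(nand(True, True))   # False
print(nand(True, False))  # True
```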

A full discussion of logic gate construction would be well outside the purview of this essay, but it is worth briefly discussing one popular concept which requires a knowledge of transistors in order to be understood.

Named after Intel co-founder Gordon Moore, Moore’s Law is sometimes stated as “computing power will double roughly every two years”. The more accurate version is “the number of transistors which can fit in a given unit area will double every two years”. These two definitions are fairly similar, but keeping the latter in mind will allow you to better understand the underlying technology and where it might head in the future.
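
As a quick illustration of what that doubling implies (the starting count here is hypothetical):

```python
def projected_transistor_count(initial_count: float, years: float,
                               doubling_period_years: float = 2.0) -> float:
    """Project transistor count per unit area under Moore's-Law-style doubling."""
    return initial_count * 2 ** (years / doubling_period_years)

# A hypothetical chip area holding 1 billion transistors today would, if the
# trend held, hold roughly 32 billion a decade from now.
print(projected_transistor_count(1e9, 10))  # ~3.2e10
```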

Moore’s law has held for as long as it has because manufacturers have been able to make transistors smaller and smaller. Obviously this can’t continue forever, both because at a certain transistor density power consumption and heat dissipation become serious problems, and because at a certain size effects like quantum tunneling prevent the sequestering of electrons.

A number of alternatives to silicon-based chips are being seriously considered as a way of extending Moore’s Law. Because of how extremely thin it can be made, graphene is one such contender. The problem, however, is that the electrophysical properties of graphene are such that building a graphene transistor that can switch on and off is not straightforward. A graphene-based computer, therefore, might well have to develop an entirely different logical architecture to perform the same tasks as modern computers.

Other potentially fruitful avenues are quantum computing, optical computing, and DNA computing, all of which rely on very different architectures than conventional von Neumann computers. As I’m nearing the 1500 word mark I think I’ll end this essay here, but I do hope to return to these advanced computing topics at some point in the future :)


The STEMpunk Project: Batteries

In The STEMpunk Project: Basic Electrical Components I wrote about resistors, capacitors, inductors, and diodes, but I had originally wanted to include batteries and transistors as well. As I did research for that post, however, it occurred to me that these latter two devices were complex enough to require their own discussions. In today’s post I cover a remarkable little invention familiar to everyone: batteries.

Battery Basics

The two fundamental components of a battery are electrodes and an electrolyte, which together make up one cell. The electrodes are made of different metals whose respective properties give rise to a difference in electrical potential energy which can be used to induce current flow. These electrodes are then immersed in an electrolyte, which can be made from a sulfuric acid chemical bath, a gel-like paste, or many other materials. When an external conductor is hooked up to each electrode current will flow from one of them (the ‘negative terminal’) to the other (the ‘positive terminal’).

Battery cells can be primary or secondary, and are distinguished by whether or not the chemical reactions happening in the cell cause one of the terminals to erode. The simplest primary cell consists of a zinc electrode as the negative terminal, a carbon electrode as the positive terminal, and sulfuric acid diluted with water as the electrolyte. As current flows, zinc atoms combine with the sulfuric acid to produce zinc sulfate and hydrogen gas, thus consuming the zinc electrode.

But even when the battery is not connected to a circuit, impurities in the zinc electrode can cause small amounts of current to flow within the electrode, slowly eroding it. This is called local action and is the reason why batteries can die even when not used for long periods of time. Of course there exist techniques for combating this, like coating the zinc electrode in mercury to pull out impurities and render them less reactive. None of these work flawlessly, but advances in battery manufacturing have allowed for the creation of long-storage batteries with a sealed electrolyte, released only when the battery is actually used, and of primary cell batteries that can be recharged.

A secondary cell works along the same chemical principles as a primary cell, but the electrodes and electrolyte are composed of materials that don’t dissolve when they react. In order to be classifiable as ‘rechargeable’ it must be possible to safely reverse the chemical reactions inside the cell by means of running a current through it in the reverse direction of how current normally flows out of it. Unlike the zinc-carbon voltaic cell discussed above, for example, in a nickel-cadmium battery the molecules formed during battery discharge are easily reverted to their original state during recharging.

Naturally it is difficult to design and build such a sophisticated electrochemical mechanism, which is why rechargeable batteries are more expensive.

Much more information on the chemistry of primary and secondary cells can be found in this Scientific American article.

Combining Batteries in Series or in Parallel

Like most other electrical components, batteries can be hooked up in series, in parallel, or in series-parallel. To illustrate, imagine four batteries lined up in a row, with their positive terminals on the left and their negative terminals on the right. If wired in series, the negative terminal on the rightmost battery would be the negative terminal for the whole apparatus and the positive terminal on the leftmost battery would be the positive terminal for the whole apparatus. In between, the positive terminal of each battery is connected to the negative terminal of the next, causing the voltages of the individual batteries to be cumulative. This four-battery setup would generate six volts total (1.5V per battery multiplied by the number of batteries), while the total current of the circuit load (a light bulb, a radio, etc.) would be non-cumulative and would flow through each battery.

If wired in parallel, the positive and negative terminals of the rightmost battery would connect to the corresponding terminals on the next battery, and so on down the row, with the terminals of the leftmost battery connecting to the external circuit. In this setup it is voltage which is non-cumulative and current which is cumulative. By manipulating and combining these properties of batteries it is possible to supply power to a wide variety of circuit configurations.
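
Here is a minimal sketch of those two rules, assuming identical idealized cells; the 1.5 V and 2000 mAh figures are illustrative, and real packs lose a bit to internal resistance and mismatched cells:

```python
def series_pack(cell_voltage_v: float, cell_capacity_mah: float, n_cells: int):
    """In series: voltages add, capacity stays that of a single cell."""
    return cell_voltage_v * n_cells, cell_capacity_mah

def parallel_pack(cell_voltage_v: float, cell_capacity_mah: float, n_cells: int):
    """In parallel: voltage stays that of a single cell, capacities add."""
    return cell_voltage_v, cell_capacity_mah * n_cells

print(series_pack(1.5, 2000, 4))    # (6.0, 2000) -- the four-battery example above
print(parallel_pack(1.5, 2000, 4))  # (1.5, 8000)
```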

Different Battery Types [1]

Nickel Cadmium: NiCd batteries are a mature technology and thus well-understood. They have a long life but relatively low energy density and are thus suited for applications like biomedical equipment, radios, and power tools. They do contain toxic materials and aren’t eco-friendly.

Nickel-Metal Hydride: NiMH batteries have a shorter life span and correspondingly higher energy density. Unlike their NiCd cousins NiMH batteries contain nothing toxic.

Lead Acid: Lead Acid batteries tend to be very heavy and so are most suitable for use in places where weight isn’t a factor, like hospital equipment, emergency lighting, and automobiles.

Absorbent Glass Mat: The AGM is a special kind of lead acid battery in which the sulfuric acid electrolyte is absorbed into a fine fiberglass mesh. This makes the battery spill proof and capable of being stored for very long periods of time. They are also vibration resistant and have a high power density, all of which combine to make them ideal for high-end motorcycles, NASCAR, and military vehicles.

Lithium Ion: Li-ion is the fastest growing battery technology. Being high-energy and very lightweight makes these batteries ideal for laptops and smartphones.

Lithium Ion Polymer: Li-ion polymer batteries are very similar to plain Li-ion batteries but are even smaller.

The Future of Batteries

Batteries have come a very long way since Ewald Von Kleist first stored static charge in a Leyden jar in 1744. Lithium Ion seems to be the hot topic of discussion, but there are efforts being made at building aluminum batteries, solid state batteries, and microbatteries, and some experts maintain that the exciting thing to watch out for is advances in battery manufacturing.

Hopefully before long we’ll have batteries which power smart clothing and extend the range of electric vehicles to thousands of miles.

***

[1] Most of this section is just a summary of the information found here.

Black Lives *Do* Matter

In the wake of the recent Dallas shootings Facebook is ablaze with memes to this effect:

[Image: Facebook meme]

The rhetoric of the BLM movement and the overall tone of their outrage leads me to believe that they consider white people, and especially white law enforcement, to be one of the single biggest threats to black people.

I don’t buy this, and I want to explain why by bracketing the data on police shootings with more general data on inter- and intra-racial homicide. Unfortunately the most recent (mostly) complete data I could find is from 2013, but I don’t think that’s a serious challenge to my thesis.

How many black people were killed by police in 2013? From killedbypolice.net [1] I count 181 black males and 7 black females killed between the months of May and December. That’s 23.5 per month, so let’s round that up to 24 and add this many deaths for the months of January, February, March, and April. That yields a total of 284 (188 + (4 x 24)).

To be safe let’s round that up to 300. And, for the sole purpose of disadvantaging my argument, I’m going to arbitrarily double that number to 600.

Bear in mind, this assumes each and every black person killed by the police in 2013 was innocent and includes those cases when the officer responsible for killing a black person was also black.

From the Expanded Homicide Data Table 6 from the FBI [2] I count 2,245 intra-racial black homicides and 189 white-on-black homicides. Let’s round the latter figure up to 200 and round the former figure down to 2,200. Again, just to disadvantage my argument.

That makes ((600 blacks killed by cops) + (200 blacks killed by whites)) / (2,200 blacks killed by blacks) = 800/2,200 ≈ 36%.

Even when I have stacked the statistical deck against my argument in every conceivable way the total number of cop-on-black and white-on-black homicides is only a little over a third of the number of black-on-black homicides. If you remove my rounding and doubling, it’s more like 21%.

The conclusion seems inescapable: by far the single biggest threat to black people is black people. Not cops, not whites, not privilege, but other black people.

So far I haven’t found anyone who takes issue with my statistical analysis, but one objection that keeps cropping up is that none of the above is relevant because what BLM supporters are angry with is a pervasive and systemic bias against black people in modern society. When making this claim there is usually the implication, sometimes stated explicitly, that this bias is what is driving higher rates of criminality and even intra-racial homicide.

Now, my degree is in psychology. I took classes on evolutionary psychology, cognitive and behavioral neuroscience, and social cognition, and I have read very widely in these fields in the years since I graduated because I have a deep-seated interest in improving my ability to reason. I am aware of the fact that people have a natural tendency to trust those who look like them, that biases can be unconscious and nearly impossible to perceive, and that, when summed across an entire majority population, such biases can exert major pressure on a minority population.

But I have two problems with this reply. The first is that so far every Leftist with whom I’ve discussed this issue seems comfortable treating ‘systemic bias’ as though it’s a universal and totally unambiguous explanation for every interracial disparity we might observe. Not long ago a Leftist made the claim that despite similar rates in drug usage blacks are more likely than whites to be arrested for drug possession. I noted that patterns of drug use could be different even while rates of drug use could be the same. That is, if black people are more likely to sell and use drugs on a street corner with their friends while whites are more likely to do so in their basement, it isn’t surprising that they’d be arrested more often.

I want to be clear here: I have no idea if this hypothesis is true or not. While I have presented this hypothetical a number of different times I have always made it clear that it is conjecture and nothing more. What disturbs me is simply that I haven’t yet encountered a Leftist who has even momentarily considered this possibility. If they observe an inter-racial difference in arrests for drug possession, white-on-black racism must be the explanation, QED.

My second problem with ‘systemic bias’ as sole explanation for intra-racial violence is that it doesn’t account for why other historically oppressed groups seem to have assimilated into modern society without astonishing amounts of violence in their communities.

The Irish and the Chinese were both exploited and reviled in the early years of their immigration to the United States. The former group had it so bad that even many ex-slaves noted that their lives were much better than those of the Irish, who were used to do work considered too dangerous or difficult for valuable slaves [4]. And Wikipedia notes that [5]:

Chinese immigrants in the 19th century worked as laborers, particularly on the transcontinental railroad, such as the Central Pacific Railroad. They also worked as laborers in the mining industry, and suffered racial discrimination at every level of society. While industrial employers were eager to get this new and cheap labor, the ordinary white public was stirred to anger by the presence of this “yellow peril”. Despite the provisions for equal treatment of Chinese immigrants in the 1868 Burlingame Treaty, political and labor organizations rallied against the immigration of what they regarded as a degraded race and “cheap Chinese labor”. Newspapers condemned the policies of employers, and even church leaders denounced the entrance of these aliens into what was regarded as a land for whites only. So hostile was the opposition that in 1882 the United States Congress eventually passed the Chinese Exclusion Act, which prohibited immigration from China for the next ten years. This law was then extended by the Geary Act in 1892. The Chinese Exclusion Act was the only U.S. law ever to prevent immigration and naturalization on the basis of race. These laws not only prevented new immigration but also brought additional suffering as they prevented the reunion of the families of thousands of Chinese men already living in the United States (that is, men who had left China without their wives and children); anti-miscegenation laws in many states prohibited Chinese men from marrying white women.

And yet both groups have more or less successfully become a part of modern America.

A further instructive example comes from the relationship between the Koreans and the Japanese. Throughout the history of the Pacific Rim the Japanese have repeatedly invaded the Korean peninsula and abused the Korean people. The most recent such episode occurred in the early years of the Twentieth Century, during which Japan occupied Korea, seized control of its government, and committed the usual slew of atrocities that tends to accompany such behavior [3].

This has led to a significant and understandable distrust of the Japanese by the Koreans. But I can attest from personal experience that Korea is an exceptionally nice place to live, filled with courteous and good-natured people who seem not to have taken to killing each other as frequently as American blacks.

Doubtless racism does still exist and is a factor in rates of inter- and intra-racial homicide. But I think the above makes a compelling case that racism and bias simply can’t be the only major factors at work. I truly believe that black lives matter. That’s why I hope BLM will take an honest look at their own communities and search for ways that they can make grassroots improvements there.

***

[1] http://www.killedbypolice.net/kbp2013.html

[2] https://www.fbi.gov/…/expanded_homicide_data_table_6_murder…

[3] https://en.wikipedia.org/wiki/Korea_under_Japanese_rule

[5] https://en.wikipedia.org/wiki/History_of_Chinese_Americans

The STEMpunk Project: Fourth Month’s Progress

Throughout the STEMpunk Project I’m going to try to take a picture each month of the books I’ve read and projects I’ve completed, as sort of a visual metaphor for my progress.

Here’s what I accomplished in June:

[Image: books and project kits from month four]

I didn’t add any books to my pile because, like in the first stage of the computing module, I wanted to spend a lot of time tinkering. This was accomplished with the Elenco Electronics Playground (EEP, the big toyish thing in the right corner), the Sparkfun Inventor’s Kit and the Sparkfun Inventor’s Kit for Photon (both shown directly behind and above the EEP), and the Elenco soldering practice kit (to the left of the EEP). I also included the soldering iron and wire stripper I used while learning to solder.

The table is starting to look a little busier!

The STEMpunk Project: Basic Electrical Components

Circuits can be things of stupefying power and complexity, responsible for everything from changing channels on a TV to controlling spacecraft as they exit the outer boundaries of the solar system.

But for all that, there are a handful of basic components found in very nearly every circuit on the planet. An understanding of these components can go a long way toward making electronics more comprehensible.

Resistors

Resistors have the charming quality of doing exactly what their name implies, i.e. they resist the flow of electrons in a circuit. This is useful for keeping the current through LEDs within acceptable ranges so they light up but don’t blow out, for creating voltage dividers for use with resistive components like photocells or flex sensors, and for incorporating things like buttons into circuits through the use of pull-up resistors.
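
As a quick sketch of the voltage-divider use mentioned above (the supply voltage and resistor values are made up for illustration):

```python
def voltage_divider_out(v_in: float, r1_ohms: float, r2_ohms: float) -> float:
    """Voltage across r2 in a two-resistor divider: Vout = Vin * R2 / (R1 + R2)."""
    return v_in * r2_ohms / (r1_ohms + r2_ohms)

# A 10 kOhm fixed resistor paired with a photocell that reads about 10 kOhm in
# dim light splits a 5 V supply roughly in half; brighter light lowers the
# photocell's resistance and shifts the output voltage.
print(voltage_divider_out(5.0, 10_000, 10_000))  # 2.5
```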

More:

  1. Sparkfun’s resistor tutorial is carefully done and is the source of the examples of resistors cited in the above paragraph.
  2. Resistorguide’s thorough exploration of resistors is notable for its discussion of different kinds of resistors and the pros and cons of using each.
  3. This ScienceOnline tutorial carefully walks through how to interpret the colored bands found on most resistors, and demonstrates the effect on an LED’s brightness of running the same current through different resistors. It also notes that graphite is similar to the material used to make resistors, and does two fascinating little experiments with pencil marks on paper acting as a resistor in a circuit.
  4. GreatScott’s resistor video repeats much of the information in the other videos but succinctly explains what pull-up and pull-down resistors are.

Capacitors

Capacitors come in a wide variety of styles — ceramic disk, polyfilm, electrolytic — but all are designed to exploit properties of electromagnetic fields to store electrical charge. They are built by separating two conductive plates either with empty space or with a nonconducting material called a dielectric. When current is applied to a circuit with a capacitor, negative charge piles up on one plate. The dielectric won’t conduct electricity but it can support an electric field, which gets stronger as electrons accrue on one side of the capacitor. This causes positive charges to gather on the other plate, and the electric field between the positively- and negatively-charged plates stores a proportional amount of energy, which can later be discharged.

More:

  1. Collin Cunningham elucidates capacitors by ripping one apart, delving briefly into their history, and then constructing one from a pill bottle and some aluminum foil.
  2. HumanHardDrive approaches capacitors and capacitance from a theoretical standpoint, delving into the chemistry and math involved.
  3. Eugene Khutoryansky offers an even more granular look at what’s going on inside capacitors.

Inductors

Like capacitors, inductors store electrical energy. A typical inductor is made up of metal wire wrapped around something like an iron bar. When current is applied to an inductor a magnetic field begins to build, and when current is cut off the field begins to collapse. As a rule magnetic fields don’t like changing, so the generated field resists the initial increase in current and the later decrease in current. Once current levels off, however, the inductor will act like a normal wire for as long as the current doesn’t change.

As I wrote about in “The STEMpunk Project: Literally Reducing a (Black) Box”, induction motors exploit these electromagnetic properties to generate torque for applications like spinning fan blades.

More:

  1. Eugene Khutoryansky does another fantastic job in his video on the behavior of inductors in a circuit.
  2. Afrotechmods spends a lot of time demonstrating how current changes in response to different inductance values.

Diodes

Diodes are small semiconductors whose purpose in life is to allow current to flow in one direction only. If a negative voltage is applied to a diode it is reverse-biased (“off”) and no current can flow, but if a positive voltage above the diode’s small forward voltage drop (about 0.7 V for silicon) is applied it is forward-biased (“on”) and current can flow from its anode terminal to its cathode terminal. If a large enough negative voltage is applied to the diode, it is possible for current to begin flowing backwards, from the cathode terminal to the anode terminal.
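
To make that one-way behavior concrete, here is a toy model of an idealized diode; the 0.7 V forward drop is the typical figure for silicon, and the breakdown voltage is an arbitrary stand-in rather than a spec for any real part:

```python
def diode_conducts(applied_voltage_v: float,
                   forward_drop_v: float = 0.7,
                   breakdown_v: float = -50.0) -> bool:
    """Idealized diode: conducts forward once the applied voltage exceeds the
    forward drop, blocks when reverse-biased, and conducts backwards past breakdown."""
    return applied_voltage_v >= forward_drop_v or applied_voltage_v <= breakdown_v

print(diode_conducts(5.0))    # True  (forward-biased, current flows anode -> cathode)
print(diode_conducts(-5.0))   # False (reverse-biased, blocking)
print(diode_conducts(-60.0))  # True  (breakdown, current flows cathode -> anode)
```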

More: 

  1. Sparkfun’s very thorough introduction to diodes.
  2. Collin Cunningham of MAKE magazine returns to explain the basics of diode function.