Profundis: Space

One of mankind’s crowning achievements has been our ascent to the stars. From time immemorial the twinkling lights in the night sky have drawn the attention of the wonder- and wander-hungry among us, who catalogued them, grouped them into shapes, tracked their movements, navigated by them, and wove them into the rich tapestry of the world’s mythical traditions.

In “Space”, James A. Michener deftly explores the magisterial arc of our titanic effort to escape the pull of gravity. He spends much time building rich backstories for his fictionalized characters, with the result that these men and women seem almost to stand up from the page and assume lives of their own. The tragic deaths of the test pilots who became the first astronauts are genuinely saddening; we sympathize with Stanley Mott as he tackles the Sisyphean task of fusing conflicting motives and the tangled whirl of bureaucracy and pure science into an alloy capable of solving the greatest engineering problems in history; whatever disdain we may have for Tucker Thompson, we don’t envy the journalist as he tries to shape public opinion so as to maintain support for the space effort. Even the great fraud Leopold Strabismus is treated with a sensitivity and nuance that make him borderline likable. It’s hard to believe these people never existed!

But Michener certainly doesn’t shy away from extended discussions of orbital mechanics, planetology, or rocket science, and I found that I learned a lot. With the benefit of hindsight it can be hard to remember that the engineers who dreamed of going to space first had to build knowledge that is now taught in high schools. Why, for example, is the atmosphere structured such that temperature steadily drops with rising altitude before abruptly climbing to almost 2000 °C and then falling again? And if heat sinks prove too heavy to shield a craft reentering the atmosphere, what kind of material could be used as an ablative that won’t burn away too quickly?

As years became decades and dreams took physical shape these and many other problems were solved, and thus the first unsteady steps of man toward the heavens blossomed into a race toward the furthest reaches of the solar system, and beyond. This is truly the tale of our greatest triumph, told in exquisite detail by one of our ablest scribes.

 

The STEMpunk Project: Foundations in Electronics Theory

complex_circuitry

Upon first seeing a circuit diagram like the above, with its dizzying, labyrinthine interconnections and mysterious hieroglyphics, you can be forgiven for believing that electronics might forever be beyond comprehension. And it is true that while the field of electronics has a useful array of water-based metaphors for explaining where electrons are going, there are some strange things happening deep inside the devices that make modern life possible.

All that having been said, understanding circuits boils down to being able to trace the interactions of four basic quantities: voltage, current, resistance, and power.

Voltage, measured in volts, is often analogized as being like water pressure in a hose. For a given hose with a set diameter and length, more water pressure is going to mean more water flow and less water pressure is going to mean less water flow. If two 100-gallon tanks, one empty and one full, are connected by a length of pipe with a shutoff valve at its center, the water in the full tank is going to exert a lot of pressure on the valve because it ‘wants’ to flow into the empty tank.

Voltage is essentially electrical pressure, or, more technically, a difference in electrical potential. The negative terminal of a battery contains many electrons which, because of their like charges, are repelling each other and causing a build up of pressure. Like the water in the 100-gallon tank they ‘want’ to flow through the conductor to the positive terminal.

Current, measured in amps, is the amount of electric charge flowing past a certain point in one second, not unlike the amount of water flowing through a hose. If more pressure (i.e. ‘voltage’) is applied then current goes up, and it correspondingly drops if pressure decreases. Returning to our two water tanks, how could we increase water pressure so as to get more water to flow? By replacing the full 100-gallon tank with a full, taller 1000-gallon tank, whose higher water level exerts more pressure on the valve!

But neither the water in the pipe nor the current in the wire flows unimpeded. Both encounter resistance, measured in ohms when in a circuit, in the form of friction from their respective conduits. No matter how many gallons of water we put in the first tank, the pipe connecting them only has so much space through which water can move, and if we increase the pressure too much the pipe will simply burst. But if we increase its diameter, its resistance decreases and more water can flow through it at the same amount of pressure.

At this point you may be beginning to sense the basic relationship between voltage, current, and resistance. If we increase voltage we get more current, because voltage is like pressure, but this can only be pushed so far because the conductor exhibits resistance to the flow of electricity. Using a bigger wire means we can get more current at the same voltage, or we can raise the voltage to push even more current through the same wire.

If only there were some simple, concise mathematical representation of all this! There is, and it’s called Ohm’s Law:

E = IR

Here ‘E’ means voltage, ‘I’ means current, and ‘R’ means resistance. This equation says that voltage is directly proportional to the product of current and resistance. Some basic algebraic manipulations yield other useful equations:

I = E/R

R = E/I

From these we can see clearly what before we were only grasping with visual metaphors. Current is directly proportional to voltage: more pressure means more current. It is inversely proportional to resistance: more resistance means less current. Knowing any two of these values allows us to solve for the third.
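Ohm’s Law is simple enough to capture in a few lines of code. Here is a quick Python sketch (the function name and example values are my own):

```python
# Ohm's law: E = I * R. Given any two of the three values, solve for the third.
def ohms_law(voltage=None, current=None, resistance=None):
    """Fill in whichever of E, I, R was left out, and return all three."""
    if voltage is None:
        voltage = current * resistance
    elif current is None:
        current = voltage / resistance
    elif resistance is None:
        resistance = voltage / current
    return voltage, current, resistance

# A 9-volt battery across a 450-ohm resistor pushes 20 milliamps of current:
v, i, r = ohms_law(voltage=9, resistance=450)
print(i)  # 0.02
```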

The last fundamental quantity we need to understand is power. In physics, power is defined as the rate at which work is done. Pushing a rock up a hill requires a certain amount of power, and pushing a bigger rock up the hill at the same pace, or the same rock up a steeper hill, requires more power.

For our purposes power, measured in watts, can be represented by this equation:

P = IE

You have a given amount of electrical pressure and a given amount of electrical flow, and together they give you the ability to turn a lightbulb on. As before we can rearrange the terms in this equation to generate other useful insights:

I = P/E

E = P/I

From this we can deduce, for example, that for a 1000 watt appliance increasing the voltage allows us to draw less current. This is very important if you’re trying to do something like build a flower nursery and need to know how many lights will be required, how many watts will be used by each light, and how many amps and volts can be supplied to your building.
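To make the 1000-watt example concrete, here is a small Python sketch (the 120 V and 240 V supply voltages are illustrative):

```python
# Power: P = I * E, so for a fixed-wattage appliance, I = P / E.
# The higher the supply voltage, the less current the appliance draws.
def current_draw(power_watts, voltage):
    return power_watts / voltage

print(current_draw(1000, 120))  # ~8.33 amps
print(current_draw(1000, 240))  # ~4.17 amps, half the current at double the voltage
```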

There you have it! No matter how complicated a power grid or the avionics on a space shuttle might seem, everything boils down to how power, voltage, current, and resistance interact.

The majority of my knowledge on this subject comes from an excellent series of lectures given by a former Navy-trained electrician, Joe Gryniuk. His teaching style is jocular and his practical knowledge vast. Sadly, around video eighteen or so the audio quality begins to degrade, making the lectures significantly less enjoyable. Still highly recommended.

The STEMpunk Project: Transistors

After writing my post on basic electrical components I realized that batteries and transistors were going to require a good deal more research to understand adequately. Having completed my post on the former, the time has finally come to elucidate the foundation of modern electronics and computing: the humble transistor.

Transistor Origins

The development of the transistor began out of a need to find a superior means of amplifying telephone signals sent through long-distance wires. Around the turn of the twentieth century American Telephone and Telegraph (AT&T) had begun offering transcontinental telephone service as a way of staying competitive. The signal boost required to allow people to talk to each other over thousands of miles was achieved with triode vacuum tubes based on the design of Lee De Forest, an American inventor. But these vacuum tubes consumed a lot of power, produced a lot of heat, and were unreliable to boot. Mervin Kelly of Bell Labs recognized the need for an alternative and, after WWII, began assembling the team that would eventually succeed.

Credit for pioneering the transistor is typically given to William Shockley, John Bardeen, and Walter Brattain, also of Bell Labs, but they were not the first people to file patents for the basic transistor principle: Julius Lilienfeld filed one for the field-effect transistor in 1925 and Oskar Heil filed one in 1934. Neither man made much of an impact in the growing fields of electronics theory or electronics manufacturing, but there is evidence that William Shockley and Gerald Pearson, a co-worker at Bell Labs, did build a functioning transistor prototype from Lilienfeld’s patents.

Shockley, Brattain, and Bardeen understood that if they could solve certain basic problems they could build a device that would act like a signal amplifier in electronic circuits by exploiting the properties of semiconductors to influence electron flow.

Actually accomplishing this, of course, proved fairly challenging. After many failed attempts and much cataloging of anomalous behavior a practical breakthrough was achieved. A strip of gold, an excellent conductor, was attached to a plastic wedge and then sliced with a razor, producing two gold foil leads separated by an extremely small space. This apparatus was then placed in contact with a germanium crystal which had an additional lead attached at its base. The space separating the two pieces of gold foil was just large enough to prevent electron flow. Unless, that is, current was applied to one of the gold-tipped leads, which caused ‘holes’ — i.e. spaces without electrons — to gather on the surface of the crystal. This allowed electron flow to begin between the base lead and the other gold-tipped lead. This device became known as the point-contact transistor, and earned the trio a Nobel Prize.

Though the point-contact transistor showed promise and was integrated with a number of electrical devices it was still fragile and impractical at a larger scale. This began to change when William Shockley, outraged at not receiving the credit he felt he deserved for the invention of this astonishing new device, developed an entirely new kind of transistor based on a ‘sandwich’ design. The result was essentially a precursor to the bipolar junction transistor, which is what almost everyone in the modern era means by the term ‘transistor’.

Under the Hood

In the simplest possible terms a transistor is essentially a valve for controlling the flow of electrons. Valves can be thought of as amplifiers: when you turn a faucet handle, force produced by your hand is amplified to control the flow of thousands of gallons of water, and when you press down on the accelerator in your car, the pressure of your foot is amplified to control the motion of thousands of pounds of fire and steel.

Valves, in other words, allow small forces to control much bigger forces. Transistors work in a similar way.

One common type of modern transistor is the bipolar junction NPN transistor, a cladistic descendant of Shockley’s original design. It is constructed from alternating layers of silicon which are doped with impurities to give them useful characteristics.

In its pure form silicon is a textbook semiconductor. It contains four electrons in its valence shell, which causes it to form very tight crystal lattices that typically don’t facilitate the flow of electrons. The N layer is formed by injecting trace amounts of phosphorus, which contains five valence electrons, into this lattice. It requires much less energy to knock this fifth electron loose than it would to knock loose one of the four valence electrons in the silicon crystal, making the N layer semiconductive. Similarly, the P layer is formed by adding boron which, because of the three electrons in its valence shell, leaves holes throughout the silicon into which electrons can flow.

It’s important to bear in mind that neither the P nor the N layers are electrically charged. Both are neutral and both permit greater flow of electrons than pure silicon would. The interface between the N and P layers quickly becomes saturated as electrons from the phosphorus move into the holes in the valence shell of the boron. As this happens it becomes increasingly difficult for electrons to flow between the N and P layers, and eventually a boundary is formed. This is called the ‘depletion layer’.

Now, imagine that there is a ‘collector’ lead attached to the first N layer and another ‘emitter’ lead attached to the other N layer. Current cannot flow between these two leads because the depletion layer at the P-N junction won’t permit it. Between these two layers, however, there is a third lead, called a ‘base’, placed very near the P layer. By making the base positively charged electrons can overcome the P-N junction and begin flowing from the emitter to the collector.

The key here is to realize that the current applied to the base to get electrons moving is much smaller than the current flowing to the collector, and that the collector current can be increased or decreased by a corresponding change in the base current. This is what gives the transistor its amplifier properties.
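A crude way to picture this amplifying behavior in code (the gain and saturation values below are illustrative, not taken from any real part):

```python
# Toy model of an NPN transistor as an amplifier: the collector current is
# roughly the base current multiplied by the transistor's gain ("beta"),
# up to some maximum the surrounding circuit can supply.
def collector_current(base_current, beta=100, max_current=0.5):
    return min(base_current * beta, max_current)

# A tiny 1 mA base current controls a 100 mA collector current:
print(collector_current(0.001))  # 0.1
```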

Transistors and Moore’s Law

Even more useful than this, however, is the ability of a transistor to act as a switch. Nothing about the underlying physics changes here. If current is not flowing in the transistor it is said to be in cutoff, and if current is flowing freely it is said to be in saturation. This binary property of transistors makes them ideally suited for the construction of logic gates, which are the basic components of every computer ever made.
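Treating each transistor as an ideal on/off switch makes it easy to sketch how logic gates arise. The NAND-first construction below is a standard textbook exercise, not something specific to any particular chip:

```python
# Two NPN transistors in series pull the output low only when both bases
# are high -- which is exactly a NAND gate.
def nand(a, b):
    return not (a and b)

# From NAND alone, every other basic gate can be built:
def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

def or_gate(a, b):
    return nand(not_gate(a), not_gate(b))

print(and_gate(True, True))   # True
print(or_gate(False, False))  # False
```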

A full discussion of logic gate construction would be well outside the purview of this essay, but it is worth briefly discussing one popular concept which requires a knowledge of transistors in order to be understood.

Named after Intel co-founder Gordon Moore, Moore’s Law is sometimes stated as the rule that computing power will double roughly every two years. The more accurate version is that the number of transistors which can fit in a given unit area will double roughly every two years. These two definitions are fairly similar, but keeping the latter in mind will allow you to better understand the underlying technology and where it might head in the future.
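Stated as code, the doubling rule looks like this (the Intel 4004’s 1971 transistor count of 2,300 is a real figure; everything else is just arithmetic):

```python
# Moore's law: transistor counts double roughly every two years.
def transistor_count(initial_count, years, doubling_period=2):
    return initial_count * 2 ** (years / doubling_period)

# Starting from the Intel 4004's 2,300 transistors in 1971, forty years of
# doubling predicts chips with roughly 2.4 billion transistors by 2011:
print(round(transistor_count(2300, 40)))  # 2411724800
```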

Moore’s law has held for as long as it has because manufacturers have been able to make transistors smaller and smaller. Obviously this can’t continue forever, both because at a certain transistor density power consumption and heat dissipation become serious problems, and because at a certain size effects like quantum tunneling prevent the sequestering of electrons.

A number of alternatives to silicon-based chips are being seriously considered as a way of extending Moore’s Law. Because of how extremely thin it can be made, graphene is one such contender. The problem, however, is that the electrophysical properties of graphene are such that building a graphene transistor that can switch on and off is not straightforward. A graphene-based computer, therefore, might well have to develop an entirely different logical architecture to perform the same tasks as modern computers.

Other potentially fruitful avenues are quantum computing, optical computing, and DNA computing, all of which rely on very different architectures than conventional Von-Neumann computers. As I’m nearing the 1500 word mark I think I’ll end this essay here, but I do hope to return to these advanced computing topics at some point in the future 🙂

***


The STEMpunk Project: Batteries

In The STEMpunk Project: Basic Electrical Components I wrote about resistors, capacitors, inductors, and diodes, but I had originally wanted to include batteries and transistors as well. As I did research for that post, however, it occurred to me that these latter two devices were complex enough to require their own discussions. In today’s post I cover a remarkable little invention familiar to everyone: batteries.

Battery Basics

The two fundamental components of a battery are electrodes and an electrolyte, which together make up one cell. The electrodes are made of different metals whose respective properties give rise to a difference in electrical potential energy which can be used to induce current flow. These electrodes are then immersed in an electrolyte, which can be made from a sulfuric acid chemical bath, a gel-like paste, or many other materials. When an external conductor is hooked up to each electrode, current will flow from one of them (the ‘negative terminal’) to the other (the ‘positive terminal’).

Battery cells can be primary or secondary, and are distinguished by whether or not the chemical reactions happening in the cell cause one of the electrodes to erode. The simplest primary cell consists of a zinc electrode as the negative terminal, a carbon electrode as the positive terminal, and sulfuric acid diluted with water as the electrolyte. As current flows, zinc molecules combine with the sulfuric acid to produce zinc sulfate and hydrogen gas, thus consuming the zinc electrode.

But even when not connected to a circuit, impurities in the zinc electrode can cause small amounts of current to flow within the electrode, slowly eroding it. This is called local action, and it is the reason batteries can die even when left unused for long periods of time. Of course there exist techniques for combating this, like coating the zinc electrode in mercury to pull out impurities and render them less reactive. None of these work flawlessly, but advances in battery manufacturing have allowed for the creation of long-storage batteries with a sealed electrolyte, released only when the battery is actually used, and of primary cell batteries that can be recharged.

A secondary cell works along the same chemical principles as a primary cell, but the electrodes and electrolyte are composed of materials that don’t dissolve when they react. In order to be classifiable as ‘rechargeable’ it must be possible to safely reverse the chemical reactions inside the cell by means of running a current through it in the reverse direction of how current normally flows out of it. Unlike the zinc-carbon voltaic cell discussed above, for example, in a nickel-cadmium battery the molecules formed during battery discharge are easily reverted to their original state during recharging.

Naturally it is difficult to design and build such a sophisticated electrochemical mechanism, which is why rechargeable batteries are more expensive.

Much more information on the chemistry of primary and secondary cells can be found in this Scientific American article. I also found this article on how batteries work from Save On Energy to be helpful.

Combining Batteries in Series or in Parallel

Like most other electrical components batteries can be hooked up in series, in parallel, or in series-parallel. To illustrate, imagine four batteries lined up in a row, with their positive terminals on the left and their negative terminals on the right. If wired in series, the negative terminal on the rightmost battery becomes the negative terminal for the whole apparatus and the positive terminal on the leftmost battery becomes the positive terminal for the whole apparatus. In between, the positive terminal of each battery is connected to the negative terminal of the next, causing the voltages of the individual batteries to be cumulative. This four-battery setup would generate six volts total (1.5 V per battery multiplied by the number of batteries), while the total current of the circuit load (a light bulb, a radio, etc.) is non-cumulative and flows through each battery in turn.

If wired in parallel, the positive and negative terminals of the rightmost battery would each connect to the corresponding terminal on the next battery, and the terminals of the leftmost battery would connect to the external circuit. In this setup it is voltage which is non-cumulative and current which is cumulative. By manipulating and combining these properties of batteries it is possible to supply power to a wide variety of circuit configurations.
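The arithmetic for this four-battery example is easy to check in code (the two amp-hour capacity per cell is an illustrative number):

```python
cell_voltage = 1.5    # volts per battery
cell_capacity = 2.0   # amp-hours per battery (illustrative value)
n = 4

# In series: voltages add, capacity stays that of a single cell.
series_voltage = cell_voltage * n      # 6.0 volts
series_capacity = cell_capacity        # 2.0 amp-hours

# In parallel: voltage stays that of a single cell, capacities add.
parallel_voltage = cell_voltage        # 1.5 volts
parallel_capacity = cell_capacity * n  # 8.0 amp-hours

print(series_voltage, parallel_capacity)  # 6.0 8.0
```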

Different Battery Types [1]

Nickel Cadmium: NiCd batteries are a mature technology and thus well-understood. They have a long life but relatively low energy density and are thus suited for applications like biomedical equipment, radios, and power tools. They do contain toxic materials and aren’t eco-friendly.

Nickel-Metal Hydride: NiMH batteries have a shorter life span and correspondingly higher energy density. Unlike their NiCd cousins NiMH batteries contain nothing toxic.

Lead Acid: Lead Acid batteries tend to be very heavy and so are most suitable for use in places where weight isn’t a factor, like hospital equipment, emergency lighting, and automobiles.

Absorbent Glass Mat: The AGM is a special kind of lead acid battery in which the sulfuric acid electrolyte is absorbed into a fine fiberglass mesh. This makes the battery spill proof and capable of being stored for very long periods of time. They are also vibration resistant and have a high power density, all of which combines to make them ideal for high-end motorcycles, NASCAR, and military vehicles.

Lithium Ion: Li-ion is the fastest growing battery technology. Being high-energy and very lightweight makes these batteries ideal for laptops and smartphones.

Lithium Ion Polymer: Li-ion polymer batteries are very similar to plain Li-ion batteries but even smaller.

The Future of Batteries

Batteries have come a very long way since Ewald von Kleist first stored static charge in a Leyden jar in 1745. Lithium ion seems to be the hot topic of discussion, but there are efforts being made at building aluminum batteries, solid state batteries, and microbatteries, and some experts maintain that the exciting thing to watch out for is advances in battery manufacturing.

Hopefully before long we’ll have batteries which power smart clothing and extend the range of electric vehicles to thousands of miles.

***

[1] Most of this section is just a summary of the information found here.

The STEMpunk Project: Basic Electrical Components

Circuits can be things of stupefying power and complexity, responsible for everything from changing channels on TV to controlling spacecraft as they pass the outer boundaries of the solar system.

But for all that, there are a handful of basic components found in very nearly every circuit on the planet. An understanding of these components can go a long way toward making electronics more comprehensible.

Resistors

Resistors have the charming quality of doing exactly what their name implies, i.e. they resist the flow of electrons in a circuit. This is useful for keeping the current through LEDs within acceptable ranges so they light up but don’t blow out, for creating voltage dividers for use with resistive components like photocells or flex sensors, and for incorporating things like buttons into circuits through the use of pull-up resistors.

More:

  1. Sparkfun’s resistor tutorial is carefully done and is the source of the examples of resistors cited in the above paragraph.
  2. Resistorguide’s thorough exploration of resistors is notable for its discussion of different kinds of resistors and the pros and cons of using each.
  3. This ScienceOnline tutorial carefully walks through how to interpret the colored bands found on most resistors, and demonstrates the effect on an LED’s brightness of running the same current through different resistors. It also notes that graphite is similar to the material used to make resistors, and does two fascinating little experiments with pencil marks on paper acting as a resistor in a circuit.
  4. GreatScott’s resistor video repeats much of the information in the other videos but succinctly explains what pull-up and pull-down resistors are.
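The voltage divider mentioned above is easy to sketch in code (the resistor values are illustrative):

```python
# A voltage divider: two resistors in series split the input voltage in
# proportion to their resistances. This is how resistive sensors like
# photocells and flex sensors are typically read by a microcontroller.
def divider_output(v_in, r_top, r_bottom):
    return v_in * r_bottom / (r_top + r_bottom)

# 5 volts across a 10k fixed resistor on top and a photocell on the bottom:
print(divider_output(5.0, 10_000, 10_000))  # 2.5  (photocell at 10k)
print(divider_output(5.0, 10_000, 30_000))  # 3.75 (photocell at 30k)
```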

Capacitors

Capacitors come in a wide variety of styles — ceramic disk, polyfilm, electrolytic — but all are designed to exploit properties of electromagnetic fields to store electrical charge. They are built by separating two conductive plates either with space or with a nonconducting material called a dielectric. When current is applied to a circuit with a capacitor, negative charge piles up on one plate. The dielectric won’t conduct electricity but it can support an electric field, which gets stronger as electrons accrue on one side of the capacitor. This causes positive charges to gather on the other plate, and the electric field between the positively- and negatively-charged plates stores a proportional amount of energy, which can later be discharged.
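For the curious, the standard formulas for the charge and energy a capacitor holds can be sketched like so (the component values are illustrative):

```python
# Charge stored:  Q = C * V        (coulombs)
# Energy stored:  E = 0.5 * C * V^2  (joules)
def capacitor_charge(capacitance, voltage):
    return capacitance * voltage

def capacitor_energy(capacitance, voltage):
    return 0.5 * capacitance * voltage ** 2

# A 470-microfarad electrolytic capacitor charged to 12 volts:
print(capacitor_charge(470e-6, 12))  # ~0.0056 coulombs
print(capacitor_energy(470e-6, 12))  # ~0.034 joules
```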

More:

  1. Collin Cunningham elucidates capacitors by ripping one apart, delving briefly into their history, and then constructing one from a pill bottle and some aluminum foil.
  2. HumanHardDrive approaches capacitors and capacitance from a theoretical standpoint, delving into the chemistry and math involved.
  3. Eugene Khutoryansky offers an even more granular look at what’s going on inside capacitors.

Inductors

Like capacitors, inductors store electrical energy. A typical inductor is made up of metal wire wrapped around something like an iron bar. When current is applied to an inductor a magnetic field begins to build, and when current is cut off the field begins to collapse. As a rule magnetic fields don’t like changing, so the generated field resists both the initial increase in current and the later decrease. Once current levels off, however, the inductor will act like a normal wire for as long as the current doesn’t change.
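This build-then-level-off behavior follows a standard exponential curve for a resistor-inductor circuit. A sketch with illustrative values:

```python
import math

# When voltage V is applied across a resistor R in series with an inductor L,
# the current rises gradually toward V/R with time constant tau = L/R:
#   i(t) = (V/R) * (1 - e^(-t/tau))
def rl_current(v, r, l, t):
    tau = l / r
    return (v / r) * (1 - math.exp(-t / tau))

# 12 volts across 100 ohms and 0.5 henries: tau = 5 ms. After one time
# constant the current has reached about 63% of its final 0.12 amps:
print(rl_current(12, 100, 0.5, 0.005))  # ~0.0758
```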

As I wrote about in “The STEMpunk Project: Literally Reducing a (Black) Box“, induction motors exploit these electromagnetic properties to generate torque for applications like spinning fan blades.

More:

  1. Eugene Khutoryansky does another fantastic job in his video on the behavior of inductors in a circuit.
  2. Afrotechmods spends a lot of time demonstrating how current changes in response to different inductance values.

Diodes

Diodes are small semiconductors whose purpose in life is to allow current to flow in one direction only. If a negative voltage is applied to a diode it is reverse-biased (“off”) and no current can flow, but if a positive voltage above the diode’s forward threshold (about 0.7 V for silicon) is applied it is forward-biased (“on”) and current can flow from its anode terminal to its cathode terminal. If a large enough negative voltage is applied, the diode breaks down and current begins flowing backwards, from the cathode terminal to the anode terminal.
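A toy piecewise model of this behavior (the threshold and breakdown voltages below are illustrative; real diodes turn on and break down more gradually):

```python
FORWARD_THRESHOLD = 0.7    # volts, typical for a silicon diode
BREAKDOWN_VOLTAGE = -50.0  # illustrative reverse-breakdown voltage

def diode_state(voltage):
    """Classify a diode's behavior at a given applied voltage."""
    if voltage >= FORWARD_THRESHOLD:
        return "forward-biased: conducting anode to cathode"
    if voltage <= BREAKDOWN_VOLTAGE:
        return "breakdown: conducting cathode to anode"
    return "blocking"

print(diode_state(5.0))    # forward-biased: conducting anode to cathode
print(diode_state(-12.0))  # blocking
```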

More: 

  1. Sparkfun’s very thorough introduction to diodes.
  2. Collin Cunningham of MAKE magazine returns to explain the basics of diode function.

The STEMpunk Project: Adventures With The Sparkfun Inventor’s Kit

The electronics module of The STEMpunk Project began early this week when I excitedly tore open my Sparkfun Inventor’s Kit (SIK) and plugged the RedBoard into my computer. The instructions for installing and configuring the SIK have six parts: download the Arduino IDE appropriate to your OS (OS X Yosemite 10.10.5, in my case), connect the RedBoard to the computer via the supplied USB cable, install the Arduino FTDI drivers, uninstall the native FTDI VCP drivers if you’re running OS X 10.9 or later, select the correct board and serial port, and then download the SIK code from the supplied URL.

 

All seemed well when I built out the first project circuit — just a simple blinking LED — but when I tried to upload the code for the second circuit to the RedBoard I discovered that the Arduino IDE I was using simply couldn’t interface with it.

 

There were three places where I ran into trouble. The first and biggest was getting the right Arduino IDE installed. For whatever reason the latest version, v. 1.6.9, simply doesn’t work: it can’t upload code to the RedBoard. I tried uninstalling it and installing 1.6.8, with no better luck. Finally I opted to install 1.6.5, the last known fully-functional version, which worked.

 

During each of these installs I re-ran the FTDI driver install and the uninstall script for the Mac FTDI drivers. It seemed like sometimes when I skipped this step the IDE couldn’t see the SIK code file, though I have no idea why.

 

When I finally installed IDE v. 1.6.5 and re-installed the drivers I ran into an issue with the USB port. Evidently v. 1.6.5 doesn’t recognize the same ports, because I had to go in and select a different one. Further, for some reason the IDE wouldn’t recognize my SIK code file either; when I re-downloaded the exact same file and tried to upload the SIK code for the second circuit, everything worked.

 

I mention all this because throughout the process I repeatedly found myself getting annoyed that I even had to go through this much effort. Hadn’t I paid money for this equipment? Shouldn’t it work out of the box?

 

Then I realized that I was being silly, for two reasons. First, a lot of the software upon which the SIK relies is free, open source, and actually very good. Unless I can produce something better I have no right to complain about the fact that I have to spend a morning or two getting it to work. Second, even with million-dollar programs and state-of-the-art computers, troubleshooting is a fact of life[1]. The STEMpunk Project is about building serious technical skills, and if I loathe the process of tinkering with hardware and software then I stand no chance of succeeding.

 

But once I did get the SIK working I knocked out all sixteen project circuits in about three days! I realize they won’t mean much out of context, but I took some pictures as I went along:

 

This is circuit one, just a simple little LED light:

 

SIK_1

 

Circuit two contains a potentiometer — basically a knob for adjusting voltage — which gave me the power to brighten or dim the LED:

 

SIK_2

 

Circuit eight powered a small servo motor with a propeller attached:

 

SIK_8

 

Circuit fifteen displayed a ‘hello world’ message on the tiny LCD screen:

 

SIK_15

 

The final circuit, number sixteen, coded a simple memory game where I had to use the buttons to replay a pattern produced by the LED lights:

 

SIK_16
***
[1] One of my friends who already has a lot of technical skill told me that my phrasing here wasn’t strong enough. Troubleshooting isn’t just a ‘fact of life’, it’s the core technical skill.

Peripatesis: E-Governance; Lighting Up The Dark; Regulating Superintelligences.

Nestled in the cold reaches of Northern Europe, Estonia is doing some very interesting things with the concept of ‘e-governance’. Their small population, short modern history, and smattering of relatively young government officials make experimenting with sovereignty easier than it would be in, say, the United States. The process of starting a business and paying taxes in Estonia has been streamlined, for example, leading to a predictable influx of ‘e-residents’ wanting to run their internet-based businesses from Estonia.

***

There are some truly fascinating advancements happening at the cutting edge of farming and horticulture. Some enterprising researchers have discovered a way to channel natural light into unlit places, and there are talks of using this technology to set up a public garden in the abandoned Williamsburg Bridge Trolley Terminal beneath New York City. It’s not really clear from the linked article whether all of this light is natural or a mix of natural and artificial light, but it’s still interesting.

I would love to see a variant of this technology utilized far and wide to foster localized farming and the greening of urban centers. Plenty of buildings have rooftop gardens now, but with a means of gathering and arbitrarily distributing sunlight it would be possible to have, say, one floor in ten of a big skyscraper devoted to a small orchard or garden space. Advanced greenhouses could be both heavily insulated and capable of showering their interior with photons, making farming at high altitudes and in colder climates more straightforward.

***

The BBC has a piece on ‘anti-languages’, forms of slang developed by insular communities like thieves or prison inmates to make their communication indecipherable to outsiders. They share the grammar of their parent language but swap in a plethora of new terms for old ones to achieve something akin to encryption.

These new terms — such as ‘bawdy basket’, which meant ‘thief’ in the English anti-language used among Elizabethan criminals — are generated through all sorts of techniques, including things like metaphor and reversing the spelling or meaning of terms from the parent language.
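
Spelling reversal is the easiest of these techniques to mechanize. A quick illustration in Python:

```python
def back_slang(sentence):
    """Encode a sentence by reversing the spelling of each word,
    one of the word-generation tricks anti-languages use."""
    return " ".join(word[::-1] for word in sentence.split())

print(back_slang("meet me at the market"))  # "teem em ta eht tekram"
```

Real anti-languages did exactly this kind of thing: the English word ‘yob’ began as back slang for ‘boy’.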

***

An essay by Marc McAllister at The Babel Singularity argues that laws enforcing human control over superintelligences are tantamount to slavery, and won’t be of much use anyway because these beings will have moral concepts which we baseline humans simply can’t fathom with our outdated brains.

He seems to be missing the point of the arguments made by groups like MIRI and the Future of Life Institute. To the best of my knowledge no one is advocating that humans remain strictly in control of advanced AIs indefinitely. In fact, the opposite is true: the point of building a superintelligence is to eventually put it in charge of solving really hard problems on behalf of humanity. In other words, ceding control to it.

To that end, the efforts made by people who think about these issues professionally seem to be aimed at understanding human values, intelligence, and recursively improving algorithms well enough to: 1) encode those values into an AI; 2) predict with an acceptably strict level of confidence that this human-compatible goal architecture will remain intact as the software rewrites itself; and 3) reason, however dimly, about the resulting superintelligence. These are by no means trivial tasks. Human values are the messy, opaque result of millennia of evolution, and neither intelligence nor recursion is well understood.

But if we succeed in making a “Friendly” AI then control, in a ‘coercive sense’, won’t be necessary because its values will be aligned with our own.

***

Somewhat related: Big Think has published a very brief history of Artificial Intelligence. With the increasing sophistication and visibility of advancements in the field, understanding its roots becomes ever more important.

***

Vector Space Systems is a new player in an arena long dominated by Blue Origin, SpaceX, and Virgin Galactic. Their goal: to be to spaceflight what taxis are to terrestrial modes of transport. According to their website they have been quietly working on a micro satellite launch vehicle designed to carry payloads in the 5 – 50 kg range into orbit.

If they succeed this will allow companies wanting to develop new space technologies to launch more frequently and less expensively, driving faster growth in space commerce, exploration, and tourism.

The Future Postponed

Earlier this year MIT published “The Future Postponed”, a report examining different areas in which basic research could have profound economic significance. The authors postulate that declining investment in basic research could lead to an innovation deficit in the United States, precipitating its decline as one of the world’s biggest economic powerhouses.

Some of these topics are very familiar; few doubt that robotics is going to be a major driver of trends in technology and economics, and that investing in robotics research will be a key maneuver for any country wanting to position itself as a technology leader. The same goes for supercomputing, space exploration, and cybersecurity.

Others are relatively unfamiliar. Research into catalysts — substances which speed up or facilitate other chemical reactions — has the potential to revolutionize whole swathes of the global economy. The right catalysts could foster the development of artificial photosynthesis, better manufacturing of plastics, and processes for converting CO2 into energy, among many, many other advances. But the catalysts used today are crude compared to the ones found in the metabolic processes of living organisms. Some of the biggest efforts to understand these naturally occurring catalysts are being made in China and Germany, not the United States.

Plant science is another arena overlooked even by those with an interest in the future. Simply put, food production and nutrient density need to increase significantly or billions of people are going to starve. This would have already occurred had it not been for the Green Revolution of the mid-20th century, but even deeper advances will be required to meet the demands of a rapidly expanding population. Basic research will hopefully allow for the creation of cereal crops with elevated nutrients like Vitamin A, as well as crops that are resistant to a panoply of diseases.

Funding basic research of this sort can be difficult, in part because it’s usually pretty expensive and because, by its very nature, it isn’t always clear what sort of payoff can be expected. But if the history of science has demonstrated anything, it’s that digging as far down into the bedrock of reality as possible usually proves fruitful in the long run.

The STEMpunk Project: Goals, and How to Achieve Them

My primary focus in 2016 is the STEMpunk project, so I wanted to take a moment to make clear exactly what I meant when I said I wanted to “shore up my techie credentials”.

Before I do, though, I want to point out how important it is to engage in the sort of plan building I’ve done below when trying to accomplish something of any magnitude. Vague slogans and unarticulated ambitions might be enough to get started on a large project, but they aren’t enough to see it through to the end. Actually finishing, or even making enough progress to not feel bad about the time invested, requires careful and consistent forethought.

With that said, here is the broad outline:

COMPUTING (~10 WEEKS)

Funds permitting, I will build my next desktop computer, use my current machine as a Linux box or a media center, build an understanding of networks and computer security (including possibly getting some certifications), and hopefully make an entire virtual computer from NAND gates up.

Stage I: Spend two hours a day or so reading books about building computers and putting a parts list together. Run the list by techier friends and then, if money isn’t tight, order the parts.

Even if I’m not able to actually build the computer until later in the year, going through the process of researching components and how they interrelate will be an invaluable learning experience.

Stage II: Complete as much of the “build a computer from first principles” course as I can, collecting parts while I do so. When all the parts come in, build the computer, or defer the build until later in the year when I have more money to spend on components.
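
The “NAND gates up” idea is the heart of courses like this one: every other logic gate can be wired from NAND alone. As a toy illustration of the very first step (my own sketch in Python, not material from the course):

```python
def nand(a, b):
    """The single primitive gate; every gate below is built from NAND alone."""
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)              # NOT(a) = NAND(a, a)

def and_(a, b):
    return not_(nand(a, b))        # AND = NOT(NAND)

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NAND(NOT a, NOT b)

def xor_(a, b):
    m = nand(a, b)                 # the classic four-NAND XOR construction
    return nand(nand(a, m), nand(b, m))

# Truth table for XOR, derived purely from NANDs:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_(a, b))
```

From gates like these the course builds adders, then an ALU, then memory, and eventually a whole working machine.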

Stage III: Begin with a CompTIA A+ book and work through it, spending about two hours a day. Get a certification if time permits and I think it’s worth having. Repeat the process with CompTIA’s Network+ and Security+ books.

ELECTRONICS (~8 WEEKS)

Stage I: Swallow my pride enough to play with some kids’ toys. The tentative list includes the SparkFun Inventor’s Kit, around 5 or so soldering projects similar to this one, a 200-in-1 electronics project kit (though I’m not committing to doing all 200; I will keep it up as long as I feel I’m learning new things), the SparkFun Photon kit, the SparkFun Car Diagnostics Kit, the SparkFun LabVIEW kit, and the SparkFun Raspberry Pi starter kit. The kits will probably be done in this order, though I’ll make executive decisions about moving things around. There are also some good projects at startingelectronics.com that I might pilfer if this list doesn’t hold me over.

Stage II: Begin the theory stage by reading books, completing tutorials (like this, this, or this), and taking classes like “Circuits and Electronics”, “Introduction to Electronics Signals and Measurement”, “Practical Electronics”, “Advanced Circuit Techniques”, “Power Electronics”, and “Electrical Machines”. This is a paltry sampling, of course, and I will seek out more resources as time goes on.

Stage III: Make an inventory of all the electrical devices and systems in my house. Go through them and see how much my new-found knowledge allows me to understand, cataloging the remaining gaps. Either make a plan to fill those gaps or arrange to have a contractor/electrician come to my house and spend half a day explaining it all to me.

Stage IV: Reach out to an electrician buddy of mine and offer to do some free work for him in exchange for a kind of fast-paced Apprenticeship Blitz lasting a couple of weeks.

MECHANICS (~6 WEEKS)

Stage I: Begin with kids’ toys like the 16-project Erector set I purchased late in 2015, a model V8 engine kit, a model jet engine, models of Da Vinci clocks and catapults, a Stirling external combustion engine, an interactive “how cars work” book, and perhaps others.

Stage II: Move on to theory by reading books like “Basics of Mechanical Engineering”, “Basic Machines and How They Work”, “1800 Mechanical Movements, Devices, and Appliances”, “507 Mechanical Movements”, “How Cars Work”, and “How Machines Work”. Take some classes like “Engineering Mechanics I”, “Engineering Mechanics II”, “How and Why Machines Work”, “Internal Combustion Engines”, and maybe even “Elements of Mechanical Design”.

Stage III: Make an inventory of all the mechanical devices and systems in my house. Go through them and see how much my new-found knowledge allows me to understand, cataloging the remaining gaps. Either make a plan to fill those gaps or arrange to have a contractor/mechanic come to my house and spend half a day explaining it all to me.

Stage IV: Proceed through a series of real-life disassemble/repair/reassemble projects of escalating complexity. I haven’t mapped this part out completely, but I was thinking of something along the lines of a coffee maker, a water pump, a weed eater motor, and a cheap old motorcycle.

ROBOTICS (~10 WEEKS)

I’m actually most excited about this, after computing. The plan here is to use what I’ve learned in electronics, computing, mechanics, and programming to do some basic home automation. I have this vision of myself walking through my living room and casually throwing out commands in a few different foreign languages to my refrigerator, the blender, little robot arms holding up the six computer monitors my custom-built desktop is outfitted with, etc.

At a minimum, maybe I can get the refrigerator door to open on command or something.

Stage I: I need to keep this part lean because stage three will involve a heavy software component with a steep learning curve. I’m thinking just a few projects to get oriented and then going from there: the Electronictechcrafts 14-in-1 solar robot kit, the Monoprice robot kit, and the SparkFun RedBot kit.

Stage II: Take some classes like “Introduction to Robotics”, “Lego Robotics”, “Mechatronics”, and “Design of Electromechanical Robotic Systems”. Read “Robot Building for Beginners”. SparkFun has some tutorials as well.

Stage III: I’m not sure yet. I have ideas for basic home automation but no way of calibrating the difficulty of doing these things. At the end of stages I and II I may be good enough to do several of these projects, or I may barely be good enough to get started on one. I will have to get this far and then plan more deeply.

_______________________________________________

There you have it, guys. This is what I’ll be spending most of 2016 working on!

The STEMpunk Project: Introduction and Motivations

As of yesterday I have officially begun “The STEMpunk project”, in which I will endeavor to shore up my techie credentials by completing a series of projects chunked into four large categories: Computing, Electronics, Mechanics, and Robotics.

I have many reasons for wanting to do this: I love computers and programming; I read an almost unhealthy amount of science fiction, to the exclusion of nearly every other kind of fiction; the idea of investing in tech startups is appealing, and I think I might be good at it; the world still has a dire shortage of people doing long-term, sensible analyses of emerging technologies, and maybe I can help with some small part of that.

Plus, I’ve been fascinated by technology as far back as I can remember, but for various reasons have failed to nurture or explore that fascination.

Well, that changes now.

Thinking Inside the (Black) Box.

Viewed one way, civilization can be thought of as the proliferation of black boxes, i.e. things whose internal workings are more or less a mystery to anyone who isn’t a specialist or hasn’t made a special effort to learn how they work.

Let’s take an example: what is a refrigerator?

Well, it’s a device that keeps food cold. I know that it doesn’t work if I leave the door open, which implies that some amount of sealing is required. I don’t know what freon is or what it does, but I have heard it mentioned in connection with air conditioners and other cooling apparatuses, so I assume it is involved somehow.

An entire segment of the economy exists to manufacture, distribute, repair, and improve upon refrigerators and related technologies, and they get along perfectly well without me. I can cheerfully write computer code without having to also invent refrigeration, and when I get hungry I can just open the refrigerator, pull something out, and eat it without having to go hunting.

If you don’t know any more about refrigerators than what I’ve written above then they qualify as a black box. For the most part the proliferation of black boxes is a good thing, and our ignorance is usually harmless. Still, I don’t think it’s good to have too many of the things I rely on every day be mysterious. As an adult male with a growing degree of responsibility I should probably have some idea of how to do basic car repairs, what an electrical panel is and the rudiments of how to wire one, what a computer is and how to build one, etc.

And besides that, as I grow older I find myself increasingly fascinated by how awe-inspiringly awesome this stuff is.

How many eons did men cower in fear under rock ledges because some vicious electrical storm or forest fire was raging just beyond their shelter? How many gods were invented and placated because those same men not only didn’t know what they were looking at, but hadn’t yet even conceived of a general method for understanding what they were looking at?

These days, however, lightning is channeled through hidden conduits in my walls so that I can keep my living room a comfortable 75 degrees year round, and I use fire to propel a metal cage sitting on four inflated rubber donuts down a ribbon of asphalt at twice the top galloping speed of a horse. These miracles are called electricity and driving, and they’re so common as to be almost boring.

That fact amazes me.

Not Just About the Technology

While this year’s project is about cultivating a richer set of models for understanding mechanical, electrical, and computational systems, on a deeper level it’s about developing two macro-abilities which will allow me to begin playing at the level of the men and women I most admire:

1) Building the strength of focus to make rapid progress and produce large quantities of value; and

2) Conceiving of, planning, and executing large-scale learning projects with many degrees of uncertainty.

To that end I’ll probably spend most of my blogging energies on issues related to motivation, practice, attention, and so on. And I plan on covering the structure of The STEMpunk project, including ways it deviates from similar large-scale undertakings like Scott Young’s “MIT Project”, how to make changes along with an expanding knowledge base, how to iterate between theory and practice when you don’t know much of either, etc.

I’ve been planning this for a while and I’m frankly pretty excited about seeing how far I can get. Stay tuned.