## Understanding Computer Architecture

Have you ever wondered how a computer program manipulates the underlying hardware of a computer system? Or why computers use the binary numbering system? Perhaps you've found yourself pondering what is inside a CPU, or how your voice gets converted into a digital file stored on disk. Many academics have realized in recent years that as computer systems become more complex, the basic details of their operation get buried under many layers of abstraction. Abstraction hides those details and provides a powerful tool for building complex systems from discrete parts. However, it also hides the knowledge the details carry with them, making complex computer systems seem almost magical. New students entering computer science programs today tend to lack knowledge of the basic principles on which computer systems are built, knowledge that was once learned from hardware hacking. Today's personal computers are simply too complex for a user to go mucking around in.

In this column I hope to provide projects that will expose the reader to some of the underlying principles of computer architecture. Over time, I plan to present a set of projects that will provide a holistic view of a complete computer system. We will begin this tour with how transistors work and how they are turned into gates. Then we will move on to how more complex logic functions are built from simple gates. We will eventually see how logic gates become the building blocks of a CPU, as well as how software manipulates CPUs and other hardware systems in the computer. Then we will focus on the software layers that provide the Basic Input/Output System (BIOS) and the Operating System (OS). Next we will look at how application software works with the OS to provide the user experience. I don't expect to cover all the details or to dip too deeply into the electrical engineering portion of computer science.
However, I want to provide enough information in each issue that the reader has sufficient knowledge to learn more elsewhere, and I hope to pique the reader's interest in computer architecture, starting from the basic building blocks of transistors and logic gates and working up through the various software layers until we have examined all aspects of a complete system. We will demonstrate how all of these components work together to provide a complete computer system. In this way I hope to fill the gap between knowing that the hardware works and knowing how it is implemented, and to dispel the mysteries behind computer systems. Each issue will include a further installment meant to build on previous articles, slowly and methodically adding to our understanding of the underlying components, building hardware and software until we achieve a complete system. After that, we will deepen our knowledge by looking at today's more complex systems.

In this first article we begin with the lowly transistor. Then we will move on to simulating hardware using LogiSim, an open source logic simulator. Tools of this type are what hardware designers use to build real hardware, so it seems natural to use them to build a simulated machine. I hope to produce a design that could easily be moved to real hardware if the reader so desires. I realize it's a lot to bite off. With each article I will provide a list of links that can help the reader fill in the blanks I leave due to lack of space and time. Now, on with the show!

### Transistors, What Are They?

Just as cells are the building blocks of life, transistors are the building blocks of modern electronics. I recall a time when transistors didn't exist in the consumer market. My first portable radio had vacuum tubes and large batteries. However, it was quickly replaced with a cool new pocket-sized radio, tiny in comparison. This new transistor radio didn't need time to warm up, and it didn't need large 67.5-volt batteries encased in red and blue Eveready card-stock. It was a huge leap forward in technology, at a time when we were just beginning space exploration. Our world would be a much different place without transistors.

So what are they? Would you believe they are made of sand? OK, it's not exactly sand; it's silicon. In its pure form silicon has a crystalline structure, and it sits in the 14th place on the periodic table, where it is written as Si. Silicon is one of the most widely available elements on Earth and appears most commonly in compounds such as silica (SiO2), where it is combined with oxygen; silica usually takes the form of sand!

Silicon was first isolated in 1824 by the Swedish chemist Jöns Jacob Berzelius.

A transistor is an amplifier: a device used to increase the power of a signal. You may be familiar with audio amplifiers, which take a sound source and increase its power, making it seem louder to the human ear. You can think of a transistor as a sort of water faucet. When you turn on the water in your kitchen sink, you adjust the flow from the faucet by varying the amount of travel of the faucet handle. In a similar fashion, the flow of electricity through a transistor is varied by adjusting the transistor's base current. Transistors most typically have three legs, or terminals (see figure 1). These include the base, which acts as the lever used to control the flow of electricity between the collector and the emitter. Using conventional current flow, the electricity can be thought of as flowing into the collector and out of the emitter on an NPN transistor. The amount of flow is controlled by the amount of current that flows into the base.

Transistors have two basic modes of operation: they can operate as an amplifier or as a switch. Think about what happens if you turn your faucet from off to its full open position quickly. You could think of the faucet as a switch that turns the water off and on. A transistor's gain is the amount of amplification applied to the base current. For example, if 1 milliamp (mA) flows into the base and the transistor has an amplification factor of 300, then the collector current will be 300 mA. A transistor also has a turn-on voltage and current: the transistor remains off until the base current is sufficient to produce about 0.7 volts across the base-emitter junction, at which point the junction begins to conduct. If the current at the base is switched between a low value and a high value, the transistor acts like a switch; it is either fully on or fully off. It is this mode of operation that we use to build logic gates.

Now that we understand that we can use a transistor as a switch, we can start to build logic gates out of them. The logic gates I show here could easily be built on a breadboard using any general-purpose small-signal transistor, such as the 2N3904 and 2N3906, which are widely available, even from Radio Shack. So if you would like to build these simple circuits and cut your teeth on hardware hacking, go right ahead. The name of the game is experimentation! Playing around with transistors and actually building these simple circuits can teach you a lot. I will provide a link to a list of materials and additional documentation on breadboarding simple circuits at the end of the article.
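The gain arithmetic above is easy to sanity-check in software. Here is a minimal Python sketch of the idealized relationship; the function name is mine, and the model deliberately ignores saturation and the 0.7-volt turn-on threshold discussed above:

```python
def collector_current_ma(base_current_ma, gain):
    """Idealized transistor model: collector current equals the base
    current multiplied by the gain (often called hFE).  Real
    transistors saturate; this sketch ignores that."""
    return base_current_ma * gain

# The example from the text: 1 mA into the base of a transistor
# with a gain of 300 gives 300 mA at the collector.
print(collector_current_ma(1, 300))  # → 300
```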

### Digital Signals And Binary Numbers

Before we get too involved in logic gates, it would help to know a bit about digital logic. First, what is a digital system and how does it differ from other electronic systems? In figure 4 you can see a representation of a signal. This signal could be a temperature or an audio signal; it really doesn't matter. What is important is that it is a continuous signal with an infinite set of values used to represent it. Reference point 1 on the graph: the value of the signal here could be 8.70, or 8.7019362, or 8.701936247523… It is difficult to tell, but we know the value could be any of an infinite set of possible values between 8 and 9. We can always dig a little deeper and get a more accurate value for this or any other point on the signal curve, but we would need to write a number of ever-increasing length. We humans usually round values like this to something reasonable for the application: we don't want to write long numbers, and we usually don't need accuracy beyond a certain point. If we want more accuracy, we can always zoom in a little closer and find finer detail to measure.

This, however, causes problems for computers. If we needed to store an exact value of the signal at any given point, we would need an infinite amount of storage to hold a number with an infinite number of digits. To solve this issue we digitize signals. We define a certain number of levels, and anything that falls between two levels must be rounded: to the nearest lower value, to the nearest higher value, or to the nearest predetermined value. Looking at figure 5, the red arrows represent the stored digital value of the signal. Here the system is rounding the actual value to the nearest threshold point. As you can see, we lose accuracy and miss any fluctuation of the signal between samples. The analog signal is continuous, while our digital signal is discrete; it is limited in the number of values it can represent. All digital signals are subject to this loss of fidelity. What if we need more fidelity? Well, we simply divide the steps into smaller steps, and we can continue dividing until we reach the needed level of accuracy. However, each time we do so we increase the number of digits needed to store the discrete values, so system designers must make trade-offs between fidelity and storage.
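To make the rounding trade-off concrete, here is a small Python sketch. The `quantize` helper is my own illustration, not part of any signal-processing library, and the sample value echoes the 8.7019362… example above:

```python
def quantize(value, step):
    """Round a continuous value to the nearest discrete level,
    where levels are spaced `step` apart."""
    return round(value / step) * step

signal = 8.7019362
# Smaller steps give more fidelity, but the stored values need
# more digits -- the designer's trade-off described in the text.
print(quantize(signal, 1))     # → 9
print(quantize(signal, 0.5))   # → 8.5
print(quantize(signal, 0.25))  # → 8.75
```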

Computer systems turn these steps into binary values. Binary is a numbering system of base 2. Having a base of two means the numbering system can use only two symbols, and one of those symbols must be able to represent the absence of a value, so binary systems typically use the digits 0 and 1. Inside the computer these values are mapped to a transistor being turned off (binary 0) or turned on (binary 1). You can now see why we use transistors in switching mode in digital computers.

But what if you need a number bigger than 1? Do you recall when you were learning how to write numbers? We were all taught a decimal system, most likely because early man used his fingers to count. Even today, I'm told, there are some remote tribes that use a base 20 system, counting on both their hands and feet. In our base ten decimal system we use ten symbols, or digits, to represent our numbers: 0-9. Once we get to 9 we can't count any higher because we've used all our symbols. So we devised a place-value system in which not just the digit but also its placement in the number determines its value, and we write the number ten as 10. Each place further to the left of the radix point (the decimal point in base ten) is raised to another power of the base. For example, a 9 in the ones column is simply 9 times 10 to the zero power, and anything to the zero power is one, so it is 9 x 1. The second place to the left is the tens column, and a digit in this column is multiplied by 10 to the 1st power to get its real value; 20 is two times ten. The next column is the hundreds column, and it is multiplied by 10 to the 2nd power, or 100, and so on. The binary system is no different. Once we have used up both our symbols we simply add a new column to the left of the radix point (the binary point in base two). So our first column is multiplied by 2 to the power of zero, which is one; the second column is multiplied by 2 to the power of 1, which is two; the third column is multiplied by 2 to the power of 2, which is four; and so on. Figure 6 shows how the placement of a binary digit affects its value.

Since computers must work with humans, we constantly need to convert values between the binary system the computer understands and the decimal system we humans prefer. To do this we can simply write the power of each column over each bit (BInary digiT). Then we add up the column values for every column with a 1 in it; we can ignore the columns with zeros. Reference figure 6.
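The column-summing procedure can be sketched in a few lines of Python (the function name is my own; it simply adds 2 raised to the column's position for every column holding a 1):

```python
def binary_to_decimal(bits):
    """Convert a binary string to decimal by summing the
    power-of-two value of every column that holds a 1."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == '1':             # columns holding 0 contribute nothing
            total += 2 ** position
    return total

print(binary_to_decimal('1011'))   # → 11  (8 + 0 + 2 + 1)
```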

Binary numbers can be added, subtracted, multiplied, and divided just like any other numbers. When you perform arithmetic on binary numbers you must line up the radix points, just as you do in decimal arithmetic. To add two binary numbers you simply add the values in the respective columns. If a column's sum is greater than 1, you write down the low digit and carry a 1 to the column to the left, then continue adding. Subtraction works the same way, except that when you need to borrow, you borrow from the column to the left, just as in decimal. The rules for binary addition are simple:

- 0₂ + 0₂ = 0₂
- 0₂ + 1₂ = 1₂
- 1₂ + 1₂ = 10₂

And for Subtraction the rules are:

- 0₂ − 0₂ = 0₂
- 1₂ − 0₂ = 1₂
- 1₂ − 1₂ = 0₂
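These single-column rules, plus the carry, are all we need to add binary numbers of any width. Here is a minimal Python sketch of the column-by-column procedure (the function name is mine):

```python
def add_binary(a, b):
    """Add two binary strings column by column, carrying a 1 to the
    column on the left whenever a column sums to two or three."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # line up the columns
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        column = int(bit_a) + int(bit_b) + carry
        result.append(str(column % 2))     # digit left in this column
        carry = column // 2                # 1 carried to the next column
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('1', '1'))     # → 10  (the 1+1 rule above)
print(add_binary('101', '11'))  # → 1000  (5 + 3 = 8 in decimal)
```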

So now let's move on to Boolean logic.

### Boolean Algebra

Boolean algebra was developed by George Boole in 1854, in a paper titled "An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities". However, his paper went almost unnoticed until after his death. Boolean algebra is based on a simple concept: everything must be reduced to True and False. Note that this is a binary system, just like our binary numbers and our transistor switch. True and False can be represented by 1s and 0s, or On and Off; any two values or symbols would work. Just as regular algebra has the operators add, subtract, multiply, and divide, Boolean logic has its own set of operators: AND, OR, and NOT. To perform a logic operation on a binary value we simply follow the rules of binary logic, and these are all very simple. Take the NOT operator: NOT 1₂ = 0₂. Very simple, isn't it? Notice that NOT is a unary operator, meaning it works on only one value at a time. (The decimal system also has a unary operator: a minus sign to the left of a number negates the number.) The other two operators are just as simple. For AND, 1₂ AND 1₂ = 1₂, and anything else is 0₂. Finally, for the OR operator, 0₂ OR 0₂ = 0₂, and anything else is 1₂. Here are the complete rules:

Table 1: Logical NOT

| Input | Out |
|---|---|
| 0 | 1 |
| 1 | 0 |

Table 2: Logical AND

| A | B | Out |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

Table 3: Logical OR

| A | B | Out |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
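Tables 1 through 3 are easy to reproduce in code. Here is a small Python sketch using the integers 0 and 1 to stand for False and True (the function names are mine, chosen to match the operator names in the text):

```python
def NOT(a):
    return 1 - a      # 0 -> 1, 1 -> 0 (Table 1)

def AND(a, b):
    return a & b      # 1 only when both inputs are 1 (Table 2)

def OR(a, b):
    return a | b      # 1 when either input is 1 (Table 3)

# Rebuild the truth tables by trying every input combination.
print("A B | AND OR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b))
```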

There are other operators as well but they are made from combining these three basic logic operations. They include NAND, NOR, and XOR. Each of these is a binary operator like AND and OR.

Table 4: Logical NAND

| A | B | Out |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

Table 5: Logical NOR

| A | B | Out |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 0 |

Table 6: Logical XOR

| A | B | Out |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

These tables are referred to as truth tables. They show all possible states of the input(s) and the resulting output. Notice that the NOT operator always produces the opposite, or complement, of its input, while the AND operator requires all its inputs to be True for its output to be True. With the OR operator, if any input is True, the output will be True. The NAND operator is simply an AND followed by a NOT, so it inverts the AND's output, giving a False output if all its inputs are True. The NOR is likewise just an OR operator followed by a NOT operator; it outputs True only if all its inputs are False. The XOR (Exclusive OR) operator is special: it outputs True if and only if exactly one of its inputs is True, with all other inputs False. If two or more inputs are True, the output will be False. You could also follow this operator with a NOT operator to get an XNOR operator (not shown here).

When writing Boolean logic equations, the plus symbol "+" is used to specify an OR operation, and the AND operation is specified using the dot notation used for multiplication in mathematics. If you think about it for a moment you'll realize this choice of symbols makes perfect sense. Consider the OR operator: 1 + 0 = 1, and 1 OR 0 = 1. Likewise, 1 AND 0 = 0, and 1 · 0 = 0. The NOT operator is denoted by an overbar over the term, or by a slash or tilde as in "/A" or "~A", a leading minus "-", the word "NOT", a leading "N", or a leading "!" as in "!A". Perfectly logical! Computer systems use the electrical equivalent of these operators, called gates. Figure 8 shows the various logic gates and their schematic symbols. You will notice this figure includes the algebraic expression of each gate's function. Note the bar over some of the terms; this bar indicates the term is inverted, or NOT-ed. Also note the small circle on some of the gates' outputs, which symbolizes an inversion, or NOT function, on the output. All of these gates except the NOT gate are shown with two inputs; however, they may have many inputs, but only one output.
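Because NAND, NOR, and XOR are built by combining the three basic operators, we can express that composition directly in code. Here is a Python sketch, again using 0 and 1 for False and True; the decomposition of XOR as (A OR B) AND NOT(A AND B) is one standard choice among several:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

# Each derived gate is a composition of the three basics:
def NAND(a, b):
    return NOT(AND(a, b))             # AND followed by NOT (Table 4)

def NOR(a, b):
    return NOT(OR(a, b))              # OR followed by NOT (Table 5)

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))  # 1 iff exactly one input is 1 (Table 6)

print("A B | NAND NOR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", NAND(a, b), NOR(a, b), XOR(a, b))
```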

### Simple Transistor Logic

OK, so far we’ve look at transistors and we’ve introduce Boolean logic and the binary system. So how do we turn transistors into logic gates? Lets start with the simplest logic gate we have. The NOT gate. Referring to figure 9 you can see a simple transistor inverter circuit. When a positive voltage is applied to the base through the input, the transistor will turn on and conduct current. The resistor on the collector limits the amount of current that can flow into the collector leg. Since the transistor is capable of sinking more current than the resistor allows, the voltage on the collector fall to near zero, giving a zero or false output signal for the positive input. Now however, if the there is no positive voltage on the input, no current flows into the base of the transistor and the transistor remains in the off state. The current flowing through the resistor on the collector is then free to provide current to the output.

Next let us look at the transistor AND gate. Figure 10 shows a simple transistor AND gate. When a positive voltage is applied to both of the inputs, transistors Q4 and Q6 conduct current through resistor R11. This pulls the low end of R11 toward ground, reducing the base current of Q5 below the turn-on threshold. With Q5 turned off, R8 is free to provide current to the output. If the positive voltage is removed from either input, the associated transistor turns off, blocking the flow of current from R11 to ground. This allows R11 to supply current to the base of Q5, turning it on and pulling the output toward ground, resulting in a False output.

Next, referring to figure 11, we see the implementation of a simple transistor OR gate. When a positive voltage is applied to either or both of the inputs, the associated transistor turns on, pulling the bottom end of R11 toward ground and reducing the current to the base of Q5. This turns Q5 off and allows resistor R8 to provide current to the output, producing a True, or positive, output signal. If neither input is given a positive voltage, both input transistors remain off, allowing R11 to supply current to the base of Q5. This turns Q5 on and pulls the bottom end of R8, and the output, toward ground, resulting in a False, or logic 0, output.

### A Very Simple Computing Circuit

You may be wondering how all this logic and binary algebra translates into computing. Let's take a look at a simple adder circuit. Consider the logic needed to add two binary numbers together. To keep things as simple as possible, we will limit the two numbers to a single bit each. Presented in Table 7 is the truth table for a single-bit half adder:

Table 7: 1-Bit Half Adder

| A | B | Sum | Carry |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |

Consider how we might implement this adder using our digital logic gates. We can see from Table 7 that the sum should output a true, or 1, value if one and only one of the inputs is true. Looking back at Table 6, we can see that the XOR gate does just that; therefore, the sum may be implemented using an XOR gate. When adding two binary digits, each of which can only have a value of 0 or 1, there are four possible outcomes. Recall that in binary 0₂ + 0₂ = 0₂, 0₂ + 1₂ = 1₂, 1₂ + 0₂ = 1₂, and 1₂ + 1₂ = 10₂. The last result requires a carry, so we need to implement a carry-out function to indicate when we have reached a binary value of two. Notice that the carry out should be set to True, or 1, only when both inputs are True. Referring to Table 2, we can see that the AND gate provides this function. Now all we need to do is wire these two gates together as shown in figure 12. Note that if both inputs are zero, the sum and carry are both zero (figure 12). If only one of the two inputs is 1, the sum is 1 and the carry is 0 (figures 13, 14). If both inputs are 1, the carry is 1 and the sum is 0 (figure 15).
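Before wiring this up in a simulator, the XOR-for-sum, AND-for-carry design can be checked in a few lines of Python. The `half_adder` function is my own sketch, using Python's bitwise operators to stand in for the gates:

```python
def half_adder(a, b):
    """One-bit half adder: the sum is XOR of the inputs and the
    carry is AND of the inputs, matching Table 7."""
    return a ^ b, a & b   # (sum, carry)

# Verify every row of the truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s} carry={c}")
```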

So there you have it: a simple calculator capable of calculating the addition of two single-bit binary numbers. This may seem very simple; however, it is exactly how your central processor works, as we will soon see. I used LogiSim, an open source logic simulator, to model this circuit. If you're running Linux, it is worth installing it from your system's package manager and reading the tutorial on the Help menu. Then, using the gates, recreate the adder presented here and experiment with it a bit. We have covered a lot of ground here, and still I have left much out of the text. However, I have provided a reference section below that includes further reading and some videos that will help fill in the holes I have left. If you've found this article interesting and plan to continue reading this column, I suggest you use those links while you wait for the next installment. We haven't even scratched the surface yet; there is so much more to discover. Our next installment will focus on building various mid-size logic circuits using LogiSim. We will then use these circuits to build an Arithmetic Logic Unit (ALU). The ALU will form the core of our CPU and will provide all of its arithmetic and logic functions.