Sunday, November 30, 2008
Parts of a Motherboard
1. Power supply connector for the motherboard
2. CPU socket
3. AGP slot
4. PCI slot
5. RAM slot
6. FDD connector (for the floppy disk drive)
7. IDE connectors for primary and secondary devices
8. PS/2 port
9. Serial and parallel ports
10. USB and LAN ports
11. CMOS battery
Monday, November 17, 2008
History of Computer Hardware
Computer hardware has transformed over the last few decades as computers evolved from bulky, beige monsters into sleek and sexy machines. The dictionary defines ‘computer’ as any programmable electronic device that can store, retrieve, and process data. Computer hardware evolved as data storage, calculation and data processing became important elements of work and life. In fact, the earliest computer hardware is thought to be record-keeping aids such as clay shapes that represented items in the real world, the early tools of merchants and accountants of the past. From the abacus and the slide rule came analogue and, later, the electronic computer hardware known today.

A timeline of the history of computer hardware:

1623: The first mechanical calculator was built by Wilhelm Schickard. It used cogs and gears and became the predecessor of later computer hardware.
1703: Leibniz described the binary numeral system, which would become a central component of digital computer hardware.
1801: Punched card technology began, and by 1890 sorting machines were handling data. The first computer installations used punched cards right up until the 1970s.
1820: Charles Xavier Thomas created the first mass-produced calculator.
1835: Charles Babbage described his Analytical Engine, the layout of a general-purpose programmable computer.
1909: Percy Ludgate designed a programmable mechanical computer.
1930s: Desktop mechanical calculators, cash registers and accounting machines were introduced.
1960s: Calculators advanced with integrated circuits and microprocessors, and digital computer hardware replaced analogue designs.

Digital computer hardware
The era of the computer as we know it today began with developments during the Second World War, as researchers and scientists were spurred on by the military.

The 1960s and beyond
‘Third generation’ computer hardware took off after 1960 thanks to the invention of the integrated circuit, or microchip. This led to the microprocessor, which in turn led to the microcomputer: computer hardware that could be owned by individuals and small businesses. Steve Wozniak co-founded Apple Computer and is credited with developing the first mass-market computer, although the KIM-1 and Altair 8800 came first.

Evolution in computer hardware
After the 1970s the personal computer, and computer hardware in general, exploded across the western world. Microsoft, Apple and many other PC companies fuelled the market, and today these companies are still striving to reduce the size and price of computer hardware while improving its capacity.
Monday, November 10, 2008
History of CPU/Processor
The microprocessor, or CPU, as some people call it, is the brains of our personal computer. I’m getting into this history lesson not because I’m a history buff (though computers do have a wonderfully interesting past), but to go through the development step-by-step to explain how they work.
Well, not everything about how they work, but enough to understand the importance of the latest features and what they do for you. It’s going to take more than one article to dig into the inner secrets of microprocessors. I hope it’s an interesting read for you and helps you recognize computer buzzwords when you’re making your next computer purchase.
1. Where Did CPUs Come From?
When the 1970s dawned, computers were still monster machines hidden in air-conditioned rooms and attended to by technicians in white lab coats. One component of a mainframe computer, as they were known, was the CPU, or Central Processing Unit. This was a steel cabinet bigger than a refrigerator, full of circuit boards crowded with transistors.
Computers had only recently been converted from vacuum tubes to transistors, and only the very latest machines used primitive integrated circuits, in which a few transistors were gathered into one package. That meant the CPU was a big pile of equipment. The thought that the CPU could be reduced to a chip of silicon the size of your fingernail was the stuff of science fiction.
2. How Does a CPU Work?
In the '40s, mathematicians John von Neumann, J. Presper Eckert and John Mauchly came up with the concept of the stored-instruction digital computer. Before then, computers were programmed by rewiring their circuits to perform a certain calculation over and over. By having a memory and storing a set of instructions that can be performed over and over, as well as logic to vary the path of instruction execution, programmable computers became possible.
The component of the computer that fetches the instructions and data from memory and carries out the instructions, in the form of data manipulation and numerical calculations, is called the CPU. It's central because all the memory and the input/output devices must connect to the CPU, so it's only natural, to keep the cables short, to put the CPU in the middle. It does all the instruction execution and number calculation, so it's called the Processing Unit.
The CPU has a program counter that points to the next instruction to be executed. It goes through a cycle where it retrieves from memory the instruction the program counter points to. It then retrieves the required data from memory, performs the calculation indicated by the instruction and stores the result. The program counter is incremented to point to the next instruction and the cycle starts all over.
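To make that fetch-execute cycle concrete, here is a minimal sketch of a toy CPU in Python. The three-instruction set, the accumulator and the little program are all invented for illustration; a real CPU encodes instructions as binary patterns, not named tuples.

```python
# A toy CPU with a program counter, a memory, and a fetch-execute loop.
# The three-instruction set (LOAD, ADD, STORE) is made up for illustration.

memory = {
    0: ("LOAD", 100),    # load the value at address 100 into the accumulator
    1: ("ADD", 101),     # add the value at address 101 to the accumulator
    2: ("STORE", 102),   # store the accumulator back to address 102
    3: ("HALT", None),
    100: 7,              # data
    101: 35,
    102: 0,
}

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator: where calculations happen

while True:
    opcode, operand = memory[pc]   # fetch the instruction the PC points at
    pc += 1                        # point at the next instruction
    if opcode == "LOAD":
        acc = memory[operand]      # retrieve data from memory
    elif opcode == "ADD":
        acc += memory[operand]     # perform the calculation
    elif opcode == "STORE":
        memory[operand] = acc      # store the result
    elif opcode == "HALT":
        break

print(memory[102])  # prints 42
```

Every pass through the loop is exactly the cycle described above: fetch the instruction at the program counter, bump the counter, execute, and go around again.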
3. The First Microprocessor
In 1971, when the heavy iron mainframe computers still ruled, a small Silicon Valley company was contracted by Busicom to design an integrated circuit for a business calculator. Instead of hardwired calculations like other calculator chips of the day, this one was designed as a tiny CPU that could be programmed to perform almost any calculation.
The expensive and time-consuming work of designing a custom-wired chip was replaced by the flexible 4004 microprocessor and the instructions stored in a separate ROM (Read Only Memory) chip. A new calculator with entirely new features could be created simply by programming a new ROM chip. The company that started this revolution was Intel Corporation. The concept of a general-purpose CPU chip grew up to be the microprocessor that is the heart of your powerful PC.
4. Four Bits Isn't Enough
The original 4004 microprocessor chip handled data in four-bit chunks. Four bits gives you sixteen possible numbers, enough to handle standard decimal arithmetic for a calculator. If it were only the size of the numbers we calculate with that mattered, we might still be using four-bit microprocessors.
The problem is that there is another kind of calculation a stored-instruction computer needs to do: it has to figure out where in memory its instructions and data are. In other words, it has to calculate memory locations to process program branch instructions or to index into tables of data.
Like I said, four bits only gets you sixteen possibilities, and even the little 4004 needed to address 640 bytes of memory to handle its calculator functions. Modern microprocessor chips have to address memory measured in gigabytes, so wider and wider chunks were inevitable.
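To put a number on that, here is a tiny sketch of how many distinct values a given bit width can represent; nothing in it is specific to any real chip.

```python
# n bits can represent 2 ** n distinct values.
for bits in (4, 8):
    print(f"{bits} bits -> {2 ** bits} distinct values")

# Four bits give only 16 values: fine for one decimal digit, hopeless as a
# memory address. Even the 4004's 640 bytes of memory need a 10-bit address,
# since 2 ** 9 = 512 is too small and 2 ** 10 = 1024 covers it.
```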
5. The First Step Up, 8 Bits
With a total memory address space of 640 bytes, the Intel 4004 chip was never going to be the starting point for a personal computer. In 1972, Intel delivered the 8008, a scaled-up 4004. The 8008 was the first of many 8-bit microprocessors to fuel the home computer revolution. It was limited to only 16 kilobytes of address space, but in those days no one could afford that much RAM anyway.
Two years later, Intel introduced the 8080 microprocessor with 64 kilobytes of memory space and a rate of execution ten times that of the 8008. About this time, Motorola brought out the 6800 with similar performance. The 8080 became the core of serious microcomputers that led to the Intel 8088 used in the IBM PC, while the 6800 family headed in the direction of the Apple II personal computer.
6. 16 Bits Enables the IBM PC
By the late '70s, the personal computer was bursting at the seams of 8-bit microprocessor performance. In 1979, Intel delivered the 8088, and IBM engineers used it for the first PC. The combination of the new 16-bit microprocessor and the IBM name shifted the personal computer from a techie toy in the garage to a mainstream business tool.
The major advantage of the 8086 family was its ability to address up to 1 Megabyte of memory. Now, large spreadsheets or long documents could be read in from disk and held in RAM for fast access and manipulation. These days, it's not uncommon to have a thousand times more than that in an ordinary desktop PC.
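As a rough sketch of where that 1 Megabyte comes from: a processor can reach 2 to the power of its address-bus width bytes of memory. The 16-bit and 20-bit widths below are the widely documented figures for the 8080 and the 8086/8088.

```python
# A CPU with an n-bit address bus can reach 2 ** n bytes of memory.
chips = {
    "Intel 8080 (16-bit address bus)": 16,
    "Intel 8086/8088 (20-bit address bus)": 20,
}
for name, width in chips.items():
    reach = 2 ** width
    print(f"{name}: {reach:,} bytes ({reach // 1024} KB)")

# 16 bits -> 65,536 bytes (64 KB); 20 bits -> 1,048,576 bytes (1,024 KB = 1 MB).
```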
7. Cache RAM, Catching Up With the CPU
We'll have to continue the march through the lineup of microprocessors in the next installment, to make way for the first of the enhancements that you should understand. With memory space expanding and microprocessor cores running ever faster, memory had a problem keeping up.
Large, low-power memories cannot run as fast as smaller, higher-power RAM chips. To keep the fastest CPUs running at full speed, microprocessor engineers started inserting a small amount of the fast, small memory between the large main RAM and the microprocessor. The purpose of this smaller memory is to hold instructions that get executed repeatedly, or data that is accessed often.
This smaller memory is called cache RAM, and it allows the microprocessor to execute at full speed. Naturally, the larger the cache RAM, the higher the percentage of cache hits, and the more of the time the microprocessor can keep running at full speed. When program execution leads to instructions that are not in the cache, they have to be fetched from main memory and the microprocessor has to stop and wait.
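Here is a minimal sketch of that hit-or-miss behaviour: a toy direct-mapped cache sitting in front of a slow main memory. The cache size and the cycle counts are invented purely to show the idea.

```python
# A toy direct-mapped cache: 8 lines, each holding one word from main memory.
# Latencies are made-up numbers; only their ratio matters for the illustration.

CACHE_LINES = 8
CACHE_HIT_COST = 1    # cycles (invented)
MEMORY_COST = 20      # cycles (invented)

main_memory = {addr: addr * 2 for addr in range(256)}   # fake data
cache = {}            # line index -> (address, value)
total_cycles = 0

def read(addr):
    """Return the value at addr, going to main memory only on a cache miss."""
    global total_cycles
    line = addr % CACHE_LINES                # which cache line this address maps to
    entry = cache.get(line)
    if entry is not None and entry[0] == addr:
        total_cycles += CACHE_HIT_COST       # hit: fast
        return entry[1]
    total_cycles += MEMORY_COST              # miss: stall and fetch from RAM
    value = main_memory[addr]
    cache[line] = (addr, value)
    return value

# A loop that re-reads the same few addresses hits the cache almost every time.
for _ in range(100):
    for addr in (0, 1, 2, 3):
        read(addr)

print("total cycles:", total_cycles)   # far fewer than 400 * 20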
8. Cache Grows Up
The idea of cache RAM has grown along with the size and complexity of microprocessor chips. A Pentium 4, for example, carries hundreds of kilobytes or more of cache RAM right on the chip, which is more memory than many early personal computers had in total.
9. Cache Splits Up
As I mentioned above, smaller memories can be accessed faster; even the physical size of a large memory slows it down. So microprocessor engineers decided to give the cache memory a cache of its own. Now we have what is known as L1 and L2 cache, for level one and level two. The larger and slower of the two is the L2 cache, and it is the size usually quoted in specifications for cache capacity. A few really high-end chips, like the Intel Itanium 2, have three levels of cache RAM.
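As a back-of-the-envelope sketch of why the extra level helps, the average access time is just each level's latency weighted by how often an access ends up there. Every latency and hit rate below is invented for illustration.

```python
# Average memory access time for a two-level cache hierarchy.
# All numbers here are invented; only the structure of the calculation matters.

L1_LATENCY = 2       # cycles
L2_LATENCY = 10      # cycles
RAM_LATENCY = 100    # cycles

L1_HIT_RATE = 0.90   # fraction of accesses satisfied by L1
L2_HIT_RATE = 0.95   # fraction of L1 misses satisfied by L2

average = (L1_HIT_RATE * L1_LATENCY
           + (1 - L1_HIT_RATE) * L2_HIT_RATE * (L1_LATENCY + L2_LATENCY)
           + (1 - L1_HIT_RATE) * (1 - L2_HIT_RATE) * (L1_LATENCY + L2_LATENCY + RAM_LATENCY))

print(f"average access time: {average:.1f} cycles")   # about 3.5 cycles instead of 100
```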
Beware that the sheer size of the cache RAM, or the number of levels, is not a good indication of cache performance. Different microprocessor architectures make it especially hard to compare cache specifications between Intel and AMD. Just as Intel's super-high clock rates don't translate into proportionately more performance, doubling the cache size certainly doesn't double the performance of a microprocessor. Benchmark tests are not perfect, but they are a better indicator of microprocessor speed than clock rate or cache size specifications.
Final Words
I hope you enjoyed this first installment of the history of microprocessors. It’s nice to know the humble beginnings and compare them to how far we have come in the computing capability of a CPU. Understanding the basics of how a microprocessor works gives you a leg-up on grokking the more advanced features of today’s Mega-microprocessors.
In future installments, we are going to dig into such microprocessor enhancements as super-scalar, hyper-threading and dual core. The concepts aren’t that hard and in the end you can boast about the latest features of your new computer with confidence.