Tuesday, January 13, 2009

front and back of system unit



Front:
1. Restart button
2. Power button
3. Optical media drive
4. Status indicator
5. Auxiliary port
6. USB port
Back:
1. Power supply unit
2. PS/2 ports
3. USB port
4. Parallel port
5. Serial port
6. AGP port
7. LAN port
8. Expansion slot

Sunday, January 11, 2009

The Hard Disks

A Hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive, is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.
HDDs (introduced in 1956 as data storage for an IBM accounting computer) were originally developed for use with general purpose computers. In the 21st century, applications for HDDs have expanded to include digital video recorders, digital audio players, personal digital assistants, digital cameras and video game consoles. In 2005 the first mobile phones to include HDDs were introduced by Samsung and Nokia. The need for large-scale, reliable storage, independent of a particular device, led to the introduction of systems such as RAID arrays, network attached storage (NAS) systems and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data.


Platters
The platters are the actual storage media in the different types of disks. In a hard drive the platter has a core of glass or aluminium covered with a thin layer of ferric oxide or a cobalt alloy (Co-Ni, Co-Cr, Co-Ni-W). This layer is protected by a layer of a very hard material (the overcoat) and a thin layer of lubricant. A CD, by contrast, is a plastic disc into which the data is impressed; it has a metallic, reflective backside.



Spindle
The platters have a hole cut in the center and are stacked on the spindle, which rotates them at high speed.


Slider
The sliders carry the read/write heads, the special electromagnetic devices that read and write the data, and fly just above the platter surfaces.

Actuator Arm
The hard drive's electronics control the movement of the actuator and the rotation of the disk, and perform reads and writes on demand from the disk controller. Feedback to the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil of the actuator arm. The spinning of the disk also uses a servo motor. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed.

Actuator
An actuator is a mechanical device for moving or controlling a mechanism or system.



Wednesday, December 10, 2008

buses




Data bus. In computer architecture, a bus is a subsystem that transfers data between computer components inside a computer or between computers. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together. Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrically parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.


A control bus is (part of) a computer bus, used by CPUs for communicating with other devices within the computer. While the address bus carries the information on which device the CPU is communicating with and the data bus carries the actual data being processed, the control bus carries commands from the CPU and returns status signals from the devices. For example, if data is being read from or written to a device, the appropriate line (read or write) will be active (logic zero).
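As a rough illustration of the active-low read/write lines described above, here is a small Python sketch. The line names and two-bit encoding are invented for illustration only; they are not tied to any particular bus standard.

```python
# Active-low control lines: logic 0 means the signal is asserted,
# matching the read/write convention described above.
READ_N = 0b01   # bit position of the (hypothetical) read line
WRITE_N = 0b10  # bit position of the (hypothetical) write line

def decode_control(lines):
    """Return which operation the control lines currently request."""
    read_active = (lines & READ_N) == 0    # active low: 0 = asserted
    write_active = (lines & WRITE_N) == 0
    if read_active and not write_active:
        return "read"
    if write_active and not read_active:
        return "write"
    return "idle"

print(decode_control(0b10))  # read line pulled low  -> "read"
print(decode_control(0b01))  # write line pulled low -> "write"
print(decode_control(0b11))  # both lines high       -> "idle"
```

The point of the sketch is only the active-low convention: a device checks for a zero, not a one, to know it is being addressed for a read or a write.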

An address bus is a computer bus, controlled by CPUs or DMA-capable peripherals, for specifying the physical addresses of computer memory elements that the requesting unit wants to access (read or write). The width of an address bus, along with the size of addressable memory elements, generally determines how much memory can be directly accessed. For example, a 16-bit wide address bus (commonly used in the 8-bit processors of the 1970s and early 1980s) reaches across 2^16 (65,536) memory locations, whereas a 32-bit address bus, common in PC processors as of 2004, can address 2^32 (4,294,967,296) locations. Some microprocessors, such as the DEC Alpha 21264 and Alpha 21364, have an address bus that is narrower than the amount of memory they can address; in these designs the address bus is clocked faster than the system or memory bus, so the narrower bus can transfer a full address in the same amount of time as a full-width bus would.

In most microcomputers such addressable "locations" are 8-bit bytes, conceptually at least. In that case the examples above translate to 64 kilobytes (KB) and 4 gigabytes (GB) respectively. However, accessing an individual byte frequently requires reading or writing the full bus width, a word, at once. In these cases the least significant bits of the address bus may not even be implemented; it is instead the responsibility of the controlling device to isolate the required byte from the complete word transmitted. This is the case, for instance, with the VESA Local Bus, which lacks the two least significant bits, limiting it to aligned 32-bit transfers. Historically, there were also some computers which were only able to address larger words, such as 36 or 48 bits long.
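The relationship between address-bus width and addressable memory above is simple arithmetic, which a short Python sketch makes concrete (the function name is mine; the bus widths are the ones from the text):

```python
def addressable_bytes(bus_width_bits, location_size_bytes=1):
    """Directly addressable bytes for a given address-bus width.

    Assumes each addressable location holds `location_size_bytes`
    (8-bit bytes in most microcomputers, as noted above).
    """
    return (2 ** bus_width_bits) * location_size_bytes

# 16-bit address bus of the 1970s 8-bit processors: 64 KB
print(addressable_bytes(16))   # 65536
# 32-bit address bus of 2004-era PC processors: 4 GB
print(addressable_bytes(32))   # 4294967296
```

Changing `location_size_bytes` shows how word-addressed machines (36- or 48-bit words, as mentioned above) reach more bytes with the same bus width.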

Sunday, November 30, 2008

Parts of a Motherboard


















1. Power supply connector for motherboard
2. CPU slot
3. AGP slot
4. PCI slot
5. RAM slot
6. FDD connector
7. IDE connectors for primary and secondary devices
8. PS/2 ports
9. Serial and parallel ports
10. USB and LAN ports
11. CMOS battery

Monday, November 17, 2008

history of computer hardware

Computer hardware has transformed in the last few decades as computers evolved from bulky, beige monsters to sleek and sexy machines. The dictionary defines ‘computer’ as any programmable electronic device that can store, retrieve, and process data. Computer hardware evolved as data storage, calculation and data processing became important elements in work and life. In fact, the earliest computer hardware is thought to be record-keeping aids such as clay shapes that represented items in the real world – the early mechanics of merchants and accountants of the past. From the abacus and the slide rule came analogue and, later, electronic computer hardware as we know it today.

A timeline of the history of computer hardware:

1623: the first mechanical calculator was built by Wilhelm Schickard. It used cogs and gears and became the predecessor of later computer hardware.
1703: a central component of computer hardware – the binary numeral system – was described by Leibniz.
1801: punched card technology began, and by 1890 sorting machines were handling data; computer installations used punched cards until the 1970s.
1820: Charles Xavier Thomas created the first mass-produced calculator.
1835: Charles Babbage described his analytical engine, the layout of a general-purpose programmable computer.
1909: Percy Ludgate designed a programmable mechanical computer.
1930s: desktop mechanical calculators, cash registers and accounting machines were introduced. By the 1960s, calculators advanced with integrated circuits and microprocessors, and digital computer hardware replaced analogue.

Digital computer hardware
The era of the computer as we know it today began with developments during the Second World War, as researchers and scientists were spurred on by the military.

1960s and beyond
‘Third generation’ computer hardware took off after 1960 thanks to the invention of the integrated circuit, or microchip. This led to the microprocessor, which in turn led to the microcomputer – computer hardware that could be owned by individuals and small businesses. Steve Wozniak co-founded Apple Computer and is credited with developing the first mass-market computer, although the KIM-1 and Altair 8800 came first.

Evolution in computer hardware
After the 1970s, the personal computer and evolution in computer hardware exploded across the western world. Microsoft, Apple and many other PC companies fuelled the market, and today these companies are still striving to reduce the size and price of computer hardware while improving its capacity.

Monday, November 10, 2008

History of CPU/Processor


The microprocessor, or CPU, as some people call it, is the brains of our personal computer. I’m getting into this history lesson not because I’m a history buff (though computers do have a wonderfully interesting past), but to go through the development step-by-step to explain how they work.
Well, not everything about how they work, but enough to understand the importance of the latest features and what they do for you. It’s going to take more than one article to dig into the inner secrets of microprocessors. I hope it’s an interesting read for you and helps you recognize computer buzzwords when you’re making your next computer purchase.

1. Where Did CPUs Come From?
When the 1970s dawned, computers were still monster machines hidden in air-conditioned rooms and attended to by technicians in white lab coats. One component of a mainframe computer, as they were known, was the CPU, or Central Processing Unit. This was a steel cabinet bigger than a refrigerator full of circuit boards crowded with transistors.
Computers had only recently been converted from vacuum tubes to transistors and only the very latest machines used primitive integrated circuits where a few transistors were gathered in one package. That means the CPU was a big pile of equipment. The thought that the CPU could be reduced to a chip of silicon the size of your fingernail was the stuff of science fiction.

2. How Does a CPU Work?
In the '40s, mathematicians John von Neumann, J. Presper Eckert and John Mauchly came up with the concept of the stored-instruction digital computer. Before then, computers were programmed by rewiring their circuits to perform a certain calculation over and over. By having a memory and storing a set of instructions that can be performed over and over, as well as logic to vary the path of instruction execution, programmable computers were possible.
The component of the computer that fetches the instructions and data from the memory and carries out the instructions in the form of data manipulation and numerical calculations is called the CPU. It’s central because all the memory and the input/output devices must connect to the CPU, so it’s only natural to keep the cables short to put the CPU in the middle. It does all the instruction execution and number calculations so it’s called the Processing Unit.
The CPU has a program counter that points to the next instruction to be executed. It goes through a cycle where it retrieves, from memory, the instructions in the program counter. It then retrieves the required data from memory, performs the calculation indicated by the instruction and stores the result. The program counter is incremented to point to the next instruction and the cycle starts all over.
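The fetch-execute cycle described above can be sketched as a toy interpreter. The three-instruction machine below is entirely made up for illustration; real CPUs encode instructions as numbers in the same memory as the data, but the loop has exactly this shape:

```python
# Toy stored-program machine: memory holds instructions and data,
# and a program counter steps through them as described above.
LOAD, ADD, HALT = 0, 1, 2   # hypothetical opcodes, invented for this sketch

def run(memory):
    pc = 0    # program counter: points at the next instruction
    acc = 0   # accumulator holding the current result
    while True:
        opcode, operand = memory[pc]      # fetch the instruction at pc
        pc += 1                           # increment to the next instruction
        if opcode == LOAD:
            acc = memory[operand][1]      # retrieve data from memory
        elif opcode == ADD:
            acc += memory[operand][1]     # perform the calculation
        elif opcode == HALT:
            return acc                    # stop and hand back the result

# Program: load the value at address 3, add the value at address 4, halt.
program = [(LOAD, 3), (ADD, 4), (HALT, 0), (None, 40), (None, 2)]
print(run(program))  # 42
```

The `while` loop is the cycle from the paragraph above: fetch, increment the program counter, execute, repeat.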

3. The First Microprocessor
In 1971 when the heavy iron mainframe computers still ruled, a small Silicon Valley company was contracted to design an integrated circuit for a business calculator for Busicom. Instead of hardwired calculations like other calculator chips of the day, this one was designed as a tiny CPU that could be programmed to perform almost any calculation.
The expensive and time-consuming work of designing a custom-wired chip was replaced by the flexible 4004 microprocessor and the instructions stored in a separate ROM (Read Only Memory) chip. A new calculator with entirely new features could be created simply by programming a new ROM chip. The company that started this revolution was Intel Corporation. The concept of a general-purpose CPU chip grew up to be the microprocessor that is the heart of your powerful PC.

4. Four Bits Isn’t Enough
The original 4004 microprocessor chip handled data in four-bit chunks. Four bits gives you sixteen possible values, enough to handle standard decimal arithmetic for a calculator. If it were only the size of the numbers we calculate with that mattered, we might still be using four-bit microprocessors.
The problem is that there is another kind of calculation a stored-instruction computer needs to do: it has to figure out where in memory instructions are. In other words, it has to calculate memory locations to process program branch instructions or to index into tables of data.
Like I said, four bits only gets you sixteen possibilities, and even the 4004 needed to address 640 bytes of memory to handle calculator functions. Modern microprocessor chips like the Intel Pentium 4 can address up to 18,446,744,073,709,551,616 (2^64) bytes of memory, though the motherboard supports far less than this total. This led to the push for more bits in our microprocessors. We are now on the fence between 32-bit microprocessors and 64-bit monsters from AMD.

5. The First Step Up, 8 Bits
With a total memory address space of only 640 bytes, the Intel 4004 chip was never going to be the starting point for a personal computer. In 1972, Intel delivered the 8008, a scaled-up 4004. The 8008 was the first of many 8-bit microprocessors to fuel the home computer revolution. It was limited to only 16 kilobytes of address space, but in those days no one could afford that much RAM anyway.
Two years later, Intel introduced the 8080 microprocessor with 64 Kilobytes of memory space and increased the rate of execution by a factor of ten over the 8008. About this time, Motorola brought out the 6800 with similar performance. The 8080 became the core of serious microcomputers that led to the Intel 8088 used in the IBM PC, while the 6800 family headed in the direction of the Apple II personal computer.

6. 16 Bits Enables the IBM PC
By the late '70s, the personal computer was bursting at the seams of 8-bit microprocessor performance. In 1979, Intel delivered the 8088 and IBM engineers used it for the first PC. The combination of the new 16-bit microprocessor and the name IBM shifted the personal computer from a techie toy in the garage to a mainstream business tool.
The major advantage of the 8086 family was up to 1 Megabyte of memory addressing. Now, large spreadsheets or large documents could be read in from the disk and held in RAM for fast access and manipulation. These days, it’s not uncommon to have a thousand times more than that in a single 1 GB module, but back then it put the IBM PC in the same league with minicomputers the size of a refrigerator.

7. Cache RAM, Catching Up With the CPU
We’ll have to continue the march through the lineup of microprocessors in the next installment to make way for the first of the enhancements that you should understand. With memory space expanding and the speed of microprocessor cores going ever faster, there was a problem of the memory keeping up.
Large low-powered memories cannot go as fast as smaller higher power RAM chips. To keep the fastest CPUs running full speed, microprocessor engineers started inserting a few of the fast and small memories between the main large RAM and the microprocessor. The purpose of this smaller memory is to hold instructions that get repeatedly executed or data that is accessed often.
This smaller memory is called cache RAM and allows the microprocessor to execute at full speed. Naturally, the larger the cache RAM, the higher the percentage of cache hits, and the longer the microprocessor can keep running at full speed. When program execution leads to instructions that are not in the cache, the instructions need to be fetched from the main memory and the microprocessor has to stop and wait.
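The hit-or-wait behavior just described can be sketched in a few lines of Python. The direct-mapped policy, the four-slot size, and the class name are all illustrative assumptions, not a model of any real chip:

```python
class CachedMemory:
    """Sketch of a small, fast cache in front of a large, slow main RAM."""

    def __init__(self, main_memory, cache_slots=4):
        self.main = main_memory
        self.slots = [None] * cache_slots    # each slot: (address, value)
        self.hits = self.misses = 0

    def read(self, address):
        slot = address % len(self.slots)     # which slot this address maps to
        entry = self.slots[slot]
        if entry is not None and entry[0] == address:
            self.hits += 1                   # cache hit: full-speed access
            return entry[1]
        self.misses += 1                     # miss: the CPU stops and waits
        value = self.main[address]           # fetch from slow main memory
        self.slots[slot] = (address, value)  # keep it for next time
        return value

ram = list(range(100, 200))
mem = CachedMemory(ram)
for _ in range(3):            # a loop re-reading the same three addresses
    for addr in (0, 1, 2):
        mem.read(addr)
print(mem.hits, mem.misses)   # 6 3: only the first pass misses
```

Repeatedly executed instructions behave like the loop here: after the first pass everything is a hit, which is exactly why caches pay off.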

8. Cache Grows Up
The idea of cache RAM has grown along with the size and complexity of microprocessor chips. A Pentium 4 has up to 2 Megabytes of cache RAM built into the chip. That’s twice the entire memory address space of the original 8088 chip used in the first PC and clones. Putting the cache right on the microprocessor itself removes the slowdown of the wires between chips. You know you are going fast when the speed of light over a few inches makes a difference!

9. Cache Splits Up
As I mentioned above, smaller memories can be addressed faster. Even the physical size of a large memory can slow it down. Microprocessor engineers decided to give the cache memory a cache. Now we have what is known as L1 and L2 cache, for level one and level two. The larger and slower cache is L2 and is the usual size quoted in specifications for cache capacity. A few really high-end chips like the Intel Itanium 2 had three levels of cache RAM.
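The payoff of layered caches can be put in numbers with the standard average-memory-access-time calculation. The hit rates and latencies below are made-up illustrative figures, not measurements of any real chip:

```python
def average_access_time(l1_hit, l1_time, l2_hit, l2_time, ram_time):
    """Average memory access time with two cache levels.

    l1_hit and l2_hit are hit rates (0..1); times are in nanoseconds.
    L1 misses fall through to L2, and L2 misses go all the way to RAM.
    """
    return (l1_hit * l1_time
            + (1 - l1_hit) * (l2_hit * l2_time
                              + (1 - l2_hit) * ram_time))

# Illustrative numbers only: tiny fast L1, bigger slower L2, slow main RAM.
print(average_access_time(0.90, 1.0, 0.80, 5.0, 60.0))  # about 2.5 ns
```

Even with a 60 ns main memory, the two cache levels pull the average down to a few nanoseconds, which is why adding a level can matter more than raw size.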
Beware that neither the sheer size of cache RAM nor the number of levels is a good indication of cache performance. Different microprocessor architectures between Intel and AMD make it especially hard to compare their cache specifications. Just like Intel’s super-high clock rates don’t translate into proportionately more performance, doubling the cache size certainly doesn’t double the performance of a microprocessor. Benchmark tests are not perfect, but they are a better indicator of microprocessor speed than clock rate or cache size specifications.
Final Words
I hope you enjoyed this first installment of the history of microprocessors. It’s nice to know the humble beginnings and compare them to how far we have come in the computing capability of a CPU. Understanding the basics of how a microprocessor works gives you a leg-up on grokking the more advanced features of today’s Mega-microprocessors.
In future installments, we are going to dig into such microprocessor enhancements as super-scalar, hyper-threading and dual core. The concepts aren’t that hard and in the end you can boast about the latest features of your new computer with confidence.