Description

This blog was created for the Computer System Architecture (DCS5048) assignment of group C9-C10. It was created by Information Technology students from Multimedia University Cyberjaya. This blog will discuss the main topics of the subject. By the end of this subject we should be able to explain the definition of Computer System Architecture and describe the function of each component in a computer. Enjoy reading, and thank you.

Muhammad Afiq Akmal B Azaman      1111112781
Muhammad Aiman B Adnan            1111111646
Nur Fathiha Bt Mohamed Zafrullah  1081102061
Mathirubiny a/p Maniraja          1111114218

Computer System Architecture

It defines the interface between the hardware and the lowest-level software.
It sets the standard for all devices that connect to it and all the software that runs on it.
It is based on the type of programs that will run and the number of programs that run concurrently.


Content

  • Data Representation.
  • Instruction Set Architecture.
  • Memory System Architecture.
  • Buses.
  • Input / Output Interface.
  • Pipelining & RISC.

Data Representation



Aim of the chapter



This chapter explains how data can be represented in binary form so it can 
be stored in and processed by a computer.


Bits and bytes

At the most fundamental (and slightly simplified) level, today’s computers only recognise two
states: either there is an electric current running through a circuit or not. As a consequence, all information that is represented within a computer needs to be represented with the help of
these two states. Rather than referring to a state as ‘running current’ or ‘no running current’, computer science uses a 1 and a 0 for the two different possibilities. Hence the most basic unit of information is the binary digit (referred to as a bit of information). A bit of information can contain either a 1 or a 0. A collection of eight bits of information is generally referred to as one byte, and memory size is generally measured in the number of bytes of information a computer can store. The memory size of our early computers was measured in multiples of 2¹⁰ bytes.
Since 2¹⁰ = 1024, which is close to 1000, 1024 bytes were (not quite correctly) referred to as one kilobyte. Accordingly, a machine with a capacity of
16 kilobytes had 16,384 bytes. We now talk about memory size in terms of:

• Megabyte: 1 megabyte = 2²⁰ bytes = 1,048,576 bytes
• Gigabyte: 1 gigabyte = 2³⁰ bytes = 1,073,741,824 bytes.
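
These powers of two are easy to check yourself; the following short Python sketch (illustrative only) prints the exact byte counts.

# Illustrative check of the powers of two used for memory sizes.
for name, power in [("kilobyte", 10), ("megabyte", 20), ("gigabyte", 30)]:
    print(f"1 {name} = 2^{power} bytes = {2 ** power:,} bytes")

# A machine with 16 "kilobytes" of memory therefore holds:
print(f"16 kilobytes = {16 * 2 ** 10:,} bytes")   # 16,384 bytes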

Now that we know that all information that we feed into a computer needs 
to be translated into a code of 0s and 1s, i.e. into binary digits (bits), the 
following sections explain how text and numbers can be encoded in this 
way.

Representing text

A common way to represent text is to agree on a unique code for every symbol (e.g. for every letter, punctuation mark or number) that needs to be represented. Each code consists of a fixed-length sequence of bits (but obviously the sequence is different for each code). A word can then
be ‘written’ by determining the code for each letter and stringing the codes together. The most well-known code that is used in this way is ASCII (the American Standard Code for Information Interchange).
It uses 8-bit strings to represent the English alphabet. You can find an overview of the ASCII code in Appendix A of Brookshear (2009) or you can look it up on the internet.
You should then be able to encode simple text, as in the following example.
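
Here is a minimal Python sketch of the idea (illustrative only; Python’s built-in ord and chr functions return and accept the same code values as the ASCII table for these characters):

# Encode the word "Hi" into 8-bit ASCII codes, then decode it again.
word = "Hi"
codes = [format(ord(ch), "08b") for ch in word]   # ['01001000', '01101001']
print(codes)

decoded = "".join(chr(int(bits, 2)) for bits in codes)
print(decoded)                                    # Hi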


The files we refer to as text files generally contain long sequences of code (mostly ASCII) as described above. A simple text editor can translate the code and make it readable to us (and the other way around). The programming language Java, which you will be introduced to in
Part 2 of this guide, uses the Unicode standard for the representation of characters. Unicode provides 16 bits for the representation of a character (rather than 8), and therefore the Unicode character set is much larger than the ASCII set. The Unicode character set, therefore, also includes characters from alphabets other than English (e.g. Chinese and Greek).
Note that word-processor files on the other hand are more complex. 
Encoding schemes such as ASCII and Unicode only address the encoding 
of the text. They are not suitable for the formatting information a word 
processor provides.
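
As a small, hedged illustration (Python 3 strings are Unicode, so ord returns the code point directly), characters outside the English alphabet simply receive larger code values:

# Code points for an ASCII letter, a Greek letter and a Chinese character.
for ch in "A", "Ω", "中":
    print(ch, ord(ch), format(ord(ch), "016b"))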


Representing numbers – the binary system

When you look at an ASCII (or Unicode) table, you will find that it also includes codes for the representation of numbers. However, this is not an ideal way of representing numbers if they are used within a calculation. Within ASCII we would be using 8 bits to represent a single-digit number, and the largest number we could store would be 9. However, if we represent all numbers as binary numbers (rather than decimal) we can actually represent the numbers 0 to 255 with 8 bits. We also have the advantage that the number is translated into a different numeric system (i.e. the binary system) where it remains a number and where mathematical calculations can therefore still be carried out (in ASCII a number is merely a symbol in the same way as a letter is a symbol).
Binary numbers are represented using bit sequences. The representation follows the same simple principle as the decimal system. So you need to remind yourself of these underlying principles: 
The decimal system uses 10 as a base, and the 10 digits available are 0, 1, 2, 3, …, 9. Depending on its position in the whole number, each digit represents the amount of 1s, 10s, 100s, 1000s, etc. (i.e. 10⁰s, 10¹s, 10²s, 10³s, etc.).
For example, the decimal number 9348 represents 9×10³ + 3×10² + 4×10¹ + 8×10⁰.
The binary system uses 2 as its base with the digits 0 and 1. Let us look at 110100 as an example of a binary number (in order to avoid confusion, the base of a number may be indicated as a subscript, e.g. 110100₂ and 52₁₀):

110100₂ = 0×2⁰ + 0×2¹ + 1×2² + 0×2³ + 1×2⁴ + 1×2⁵ = 0 + 0 + 4 + 0 + 16 + 32 = 52₁₀

In summary, what you need to be able to understand is that the value that is represented by a digit depends on the digit’s position within the number and the base of the numeric system used. With every move of a digit to the left the value represented increases by a power of the base. For the decimal system this means digits – from right to left – need to be multiplied by 1, 10, 100, 1000, etc. For the binary system, digits – again from right (the least significant bit) to left (the most significant bit) – need to be multiplied by 1, 2, 4, 8, 16, etc. to calculate their value.
As part of the explanation above you have been shown how a binary number can be converted into a decimal number. You also need to be able to convert a decimal number into a binary number. This can be done by:

• dividing the decimal number by 2 and recording the quotient and the remainder
• continuing to do this with every resultant quotient until the quotient is 0.

The backwards sequence of the recorded remainders is then the binary representation of the original decimal number.


Let us illustrate this with our example of 52:

52 ÷ 2 = 26 remainder 0
26 ÷ 2 = 13 remainder 0
13 ÷ 2 = 6 remainder 1
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
Hence 52₁₀ = 110100₂
You should now have an understanding of the underlying ideas of the 
binary system and how to convert from decimal into binary and vice versa.
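
The repeated-division procedure, and the positional method for converting back, can be written down directly as a short Python sketch. This is illustrative only; Python’s built-in bin() and int(..., 2) already do the same job.

def to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to binary by repeatedly
    dividing by 2 and collecting the remainders backwards."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)
        remainders.append(str(r))
    return "".join(reversed(remainders))

def to_decimal(bits: str) -> int:
    """Convert a binary string back to decimal using positional weights."""
    value = 0
    for bit in bits:                 # most significant bit first
        value = value * 2 + int(bit)
    return value

print(to_binary(52))         # 110100
print(to_decimal("110100"))  # 52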

Binary addition

The next step is to learn how simple mathematical calculations are carried 
out within the binary system. We will start with the addition of two 
positive binary numbers, and again it is a matter of mapping the procedure 
we are familiar with within the decimal system onto the binary system. An 
example is probably the best way to explain how it works. Let us add 52 
and 49.
In the decimal system we would calculate by adding the units, noting the
overflow (carry) under the 10s, then adding the 10s and noting the overflow
under the 100s, and finally adding the 100s (which in this case only
consist of the overflow).

            100s  10s  1s
                    5   2
+                   4   9
Overflow:     1     1
Result:       1     0   1

We will now do the same addition in the binary system. We already know
that 52₁₀ = 110100₂, and you should be able to work out now (try it!)
that 49₁₀ = 110001₂. We will follow the same approach as in the decimal
system above. All you need to think about is what causes an overflow
when adding binary digits:

0+0 = 0
1+0 = 0+1 = 1
1+1 = 10, i.e. there is an overflow of 1
1+1+1 = 11, i.e. there is an overflow of 1

            64s  32s  16s  8s  4s  2s  1s
                  1    1   0   1   0   0
+                 1    1   0   0   0   1
Overflow:    1    1
Result:      1    1    0   0   1   0   1

If you now check our result of 1100101₂ by converting it into a decimal number, you should find that 101₁₀ = 1100101₂
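
The column-by-column procedure, including the overflow rule above, can be sketched in Python as follows (illustrative only; ordinary integer addition would of course do this for us):

def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, right to left, with a carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry   # 0, 1, 2 or 3
        result.append(str(total % 2))             # bit written in this column
        carry = total // 2                        # overflow into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("110100", "110001"))   # 1100101
print(int("1100101", 2))                # 101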


Binary subtraction

In order to subtract one binary number from another we can again use the 
procedures we are familiar with from the decimal system. For example: 
10011₂ − 1111₂:

            16s  8s  4s  2s  1s
             1   0   0   1   1
−                1   1   1   1
Borrowed:    1   1
Result:      0   0   1   0   0

So we start from the least significant bits on the right where we deduct 1 
from 1 and get a result of 0. The next column is also straightforward. In 
the third and fourth columns we need to borrow 1 from the column on the 
left.
However, it is quite difficult for a computer to carry out subtraction in
this way, familiar though it is to us. A more computer-friendly approach
relies on the idea of turning the number to be subtracted into its
negative counterpart (referred to as its two’s complement) and then
adding this negative number using ordinary addition. (Please note that although
you should be aware of the concept of representing the two’s complement
of a binary number, a detailed knowledge of how the two’s complement
can be determined is beyond the material covered in this course.)
A further well-known method that can be used to represent a negative 
binary number is the sign and magnitude method. This method 
follows the concept used within mathematics where negative numbers are 
represented by using the minus sign as a prefix. As the binary numbers 
used with computers do not have any extra symbols, an easy way to 
represent a negative number is to agree that the left-most bit is just the 
equivalent of a +/− sign, and the remainder of the number indicates the 
value, i.e. the magnitude of the number:
• 0 indicates that the number is positive
• 1 indicates that the number is negative.
For example, in an 8-bit representation of binary numbers, 00001100₂
would represent 12₁₀ and 10001100₂ would be −12₁₀.
Although the sign and magnitude method has been used in early computers and may still be used within the representation of floating-point numbers, the advantage of the two’s complement is evident: The two’s complement representation allows us (or more importantly the computer) to carry out subtraction through a combination of negation and addition. This means that a computer’s electronic circuits that are designed to carry out addition can also be used for subtraction. You will learn more about these circuits in the next section.
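
A short Python sketch may make the two representations concrete. An 8-bit width is assumed purely for illustration, and the helper names are our own:

BITS = 8

def sign_and_magnitude(n: int) -> str:
    """Left-most bit is the sign (0 = +, 1 = -), remaining bits the magnitude."""
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), f"0{BITS - 1}b")

def twos_complement(n: int) -> str:
    """Two's complement representation of n in BITS bits."""
    return format(n & (2 ** BITS - 1), f"0{BITS}b")

print(sign_and_magnitude(12), sign_and_magnitude(-12))   # 00001100 10001100
print(twos_complement(12), twos_complement(-12))         # 00001100 11110100

# Subtraction as negation plus addition: 52 - 49 == 52 + (-49)
result = (int(twos_complement(52), 2) + int(twos_complement(-49), 2)) % 2 ** BITS
print(result)                                            # 3

The last line shows the point of the two’s complement: the difference 52 − 49 is obtained purely by negation and addition, so the same adder circuit can be reused for subtraction.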
Instruction Set

A collection of commands that the processor can understand and execute. It can be in the form of binary code or assembly code.

Elements of a machine instruction:

Opcode (Operation Code): specifies the operation that needs to be performed, e.g. ADD, SUB, MUL, DIV.

Source Operand Reference: any operation will need one or more operands (variables/inputs). For example, in Y = B - C, B and C are operands. Operands can come from an I/O device (such as a keyboard), from a register, or from main memory.

Result Operand Reference: where to store the result of the operation.

Next Instruction Reference: tells the processor where to fetch the next instruction from once the current instruction is completed.
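
Purely as an illustration (this is not any real machine’s format, and the field names are our own), the four elements can be written down as plain data in Python:

from collections import namedtuple

# The four elements of a machine instruction, as plain data (illustrative).
Instruction = namedtuple("Instruction", "opcode sources result next_ref")

# Y = B - C : subtract the value at C from the value at B and store it in Y.
instr = Instruction(opcode="SUB", sources=("B", "C"), result="Y",
                    next_ref="implicit (next sequential address)")
print(instr)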


Instruction Cycle State diagram





INSTRUCTION REPRESENTATION

·         Instructions are represented by a sequence of bits.
·         These bits are then divided into different fields/sections.



·         Opcodes are represented using abbreviations called mnemonics.
·         Example: ADD, SUB, LOAD.
·         ADD A, B : add the value in address B to the value in address A and store the result in address A.

INSTRUCTION TYPES
·         A typical instruction such as X = X + Y instructs the processor to add the value in address Y to the value stored in address X and keep the result in address X.
·         So basically the steps are (a small simulation of these steps follows the list below):
1.    Load the value from memory location X into a register.
2.    Add the content of address Y to that register.
3.    Store the content of that register back to memory location X.
·         Based on the steps above, we can state the types of instruction required for a general instruction set:
1.    Data processing
§  To do arithmetic and logic operations.
2.    Data storage
§  Take data from memory and store it in registers, and vice versa.
§  Take data from I/O devices and store it in memory or registers, and vice versa.
3.    Data movement
§  Move data from devices to memory and vice versa.
4.    Control
§  Testing data and branching of instructions.
§  Example: JUMP, LOOP
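
Here is a minimal sketch (hypothetical mnemonics, with a dictionary standing in for memory) of the three steps for X = X + Y using a single accumulator register:

# A minimal sketch (not a real instruction set) of X = X + Y.
memory = {"X": 5, "Y": 7}

program = [
    ("LOAD", "X"),    # 1. load the value at memory location X into the register
    ("ADD", "Y"),     # 2. add the content of memory location Y to that register
    ("STORE", "X"),   # 3. store the register back to memory location X
]

accumulator = 0
for opcode, address in program:
    if opcode == "LOAD":
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator

print(memory["X"])   # 12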




NUMBER OF ADDRESSES IN AN INSTRUCTION


·   One way of describing a processor’s architecture is in terms of the number of addresses contained in each instruction.
·   What is the maximum number of addresses needed for an instruction? It depends on the operation that needs to be done; different processors use different numbers of addresses for different reasons.
·   Usually arithmetic and logic operations require the most operands. For a simple calculation like X = A + B, 2 addresses are needed for the operands and 1 address is required to keep the result. Another address could be used to hold the address of the next instruction, so in this case a 4-address instruction would be needed.
·   But again, it all depends on the architecture of the processor.
·   Most architectures use 1, 2 or 3 addresses, which is sufficient to do all sorts of operations.
·   What is the implication of having fewer or more addresses in an instruction? (A small comparison sketch follows this list.)
o   Fewer addresses:
§  Fewer registers are needed.
§  Instructions look simpler and less complex.
§  More lines of instruction are needed for a complete program, making the program longer.
§  Each instruction is faster to fetch and execute.
o   More addresses:
§  More registers are needed.
§  Instructions become more complex and longer.
§  Fewer lines of instruction are required, making the program shorter.
§  Each instruction takes longer to fetch and execute.
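
To make the trade-off concrete, the sketch below (hypothetical mnemonics, illustrative only) writes the same calculation X = A + B first with 3-address instructions and then with 1-address (accumulator) instructions; fewer addresses per instruction means more instructions overall:

# Hypothetical instruction sequences for X = A + B (illustrative only).

three_address = [
    ("ADD", "X", "A", "B"),   # X <- A + B : one instruction does everything
]

one_address = [               # an implicit accumulator holds intermediate results
    ("LOAD", "A"),            # AC <- A
    ("ADD", "B"),             # AC <- AC + B
    ("STORE", "X"),           # X  <- AC
]

print(len(three_address), "instruction(s) with 3 addresses each")
print(len(one_address), "instruction(s) with 1 address each")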







Utilization of Instruction Addresses (Nonbranching Instructions)




AC = Accumulator
T = Top of stack
(T-1) = Second element of stack
A, B, C = Memory or register locations

INSTRUCTION SET DESIGN
·         The design of the instruction set is important and complex because it affects many parts of the computer system.
·         Programmers use this instruction set to control the processor (to tell the processor what to do).
·         So the instruction set must be designed in such a way that it can be used easily to write commands.
·         Design issues to consider:
1.    Operation repertoire
§  The list of operations that can be done.
§  The list of opcodes and what each opcode can do.
§  How complex are the operations?
2.    Data types
§  The types of data that can be operated on.
§  Integers, floating-point numbers, characters, Boolean values.
3.    Instruction format
§  Length of the instruction: how many bits?
§  How many addresses are required? 0? 1? 2? 3?
§  How many bits does each section/field of the instruction require?
4.    Registers
§  The number of registers available for use.
§  Which operations can be performed on which registers?
5.    Addressing modes
§  Direct addressing? Immediate addressing?

TYPES OF OPERANDS
·         Machine instructions operate on data.
·         General categories of data:
1.    Addresses
§  Considered as unsigned integers.
2.    Numbers
§  Integers, binary, floating-point, hexadecimal, octal.
3.    Characters
§  Strings/text and character values are easily understood by humans, but not by a machine, so reference codes like ASCII are used to represent characters and symbols.
4.    Logical
§  Values of 1 or 0 that denote true or false.
§  A common naming convention is a flag.
TYPES OF OPERATIONS
·         Different machines use different opcodes, but there are many general ones that are used by the majority of processors.
·         General categories of operations:
·         Data transfer
§  To move data from one location to another.
§  Example: Move, Store, Load, Set, Push, Pop
·         Arithmetic
§  To do calculations.
§  Example: Add, Sub, Mul, Div, Increment, Decrement
·         Logical (see the sketch after this list)
§  Example: And, Or, Not, Shift, Rotate
·         Transfer of control
§  Example: when an interrupt happens, the program jumps to another section to handle the interrupt.
§  Jump, Return, Skip, Wait
·         Input/Output
§  Getting input from the keyboard or displaying results on the monitor.
§  Input (read), Output (write)
·         Conversion
§  Example: to change the format of data, e.g. hex to binary.
·         System control
§  Instructions reserved for the OS to use.
§  For example, to set registers to read-only mode, or to allow only certain operations to access certain registers.
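
The logical group can be tried out directly in Python; an 8-bit width is assumed for NOT, Shift and Rotate, since Python integers are not fixed-width, and the rotate helper is our own (Python has no built-in rotate):

BITS = 8
MASK = 2 ** BITS - 1   # 0b11111111, keeps results within 8 bits

a, b = 0b11001010, 0b10110011

print(format(a & b, "08b"))            # AND
print(format(a | b, "08b"))            # OR
print(format(~a & MASK, "08b"))        # NOT (masked to 8 bits)
print(format((a << 1) & MASK, "08b"))  # logical shift left by 1

def rotate_left(value: int, amount: int) -> int:
    """Rotate an 8-bit value left: bits shifted out on the left re-enter on the right."""
    amount %= BITS
    return ((value << amount) | (value >> (BITS - amount))) & MASK

print(format(rotate_left(a, 3), "08b"))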





INSTRUCTION FORMATS
·         The instruction format defines the layout of the bits in an instruction in terms of its basic fields.
·         An instruction format must include an opcode, implicitly or explicitly, together with zero or more operands, each of which is referenced using an addressing mode.
·         The addressing mode for each operand must be clearly indicated.
·         Most instruction sets employ more than one instruction format.
·         Important design issues:
1.    Instruction length:
§  It affects and is affected by:
·         Memory size
o   Programmers want to write shorter programs. This calls for instructions with more opcodes, more operands and more addressing modes, all of which require more memory space.
·         Memory organization
o   How addresses and data are organized in memory; usually memory is organized in terms of bytes.
o   Data are stored in multiples such as 8 bits, 16 bits or 32 bits.
·         Bus structure
o   The instruction length should match the memory transfer length, which is based on the bus structure. Otherwise an instruction may have to be broken down, and in certain cases this can lead to data loss.
·         CPU complexity and speed
o   The instruction length should be designed so that the processor can fetch and execute instructions without delay. It is a problem if the processor is slow in executing an instruction but fast in fetching it.
o   This can be mitigated by the many memory management options available, such as caches.

2.    Allocation of bits:
Another important issue is how to allocate the bits within a desired instruction format (a small decoding sketch follows this list). Factors to consider when determining the use of addressing bits:
§  Number of addressing modes
·         Different addressing modes require different numbers of bits.
·         Some addressing modes can be implied by the opcode (implicit/hidden).
·         Others need extra bits to indicate which addressing mode is required.
§  Number of operands
·         Including more operands in an instruction makes the instruction longer, which means more bits are required.
§  Register versus memory
·         If registers are used to reference operands, fewer bits are required, because the set of registers is far smaller than the memory address space.
·         But this depends on how many usable registers are available in the processor.
§  Number of register sets
·         Most machines have one set of general-purpose registers, typically containing 32 or more registers, which can be used to store data and addresses.
·         Some architectures have more than one set of general-purpose registers.
·         More (specialized) sets can mean shorter register fields, and so fewer bits are required.
§  Address range
·         Different addressing modes require different numbers of address bits.
·         Example: direct addressing can only reach a limited address space, while displacement addressing requires two address references (a register plus a displacement).
§  Address granularity
·         Memory can be addressed by bytes or by words. Byte addressing is more convenient for the programmer because characters are represented in bytes, but for a memory of a given size it requires more address bits.
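
As a closing illustration, here is a hedged sketch of decoding a hypothetical 16-bit instruction format with a 4-bit opcode, a 2-bit addressing-mode field and a 10-bit address field; the field widths are invented for the example and are not taken from any real machine:

def decode(instruction: int) -> dict:
    """Split a hypothetical 16-bit instruction into opcode / mode / address fields."""
    return {
        "opcode":  (instruction >> 12) & 0b1111,   # bits 15..12
        "mode":    (instruction >> 10) & 0b11,     # bits 11..10
        "address": instruction & 0b1111111111,     # bits 9..0
    }

# Example: opcode 3, mode 1 (say, direct addressing), address 200.
word = (3 << 12) | (1 << 10) | 200
print(format(word, "016b"))
print(decode(word))   # {'opcode': 3, 'mode': 1, 'address': 200}

With such a (hypothetical) format, 4 opcode bits allow 16 different operations and 10 address bits can reach 2¹⁰ = 1024 memory locations, which is exactly the kind of trade-off between fields described in this section.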