Section I: Key Terms and Foundational Concepts
Define Binary.
A base-2 number system that uses only the two digits 0 and 1.
Define a Bit and explain why the Binary system is ideal for computers.
A Bit is the abbreviation for binary digit. It is the smallest unit of digital information. The binary system is ideal because its two digits, 0 and 1, correspond directly to the simple ON/OFF switch mechanism that logic gates operate by.
What is a Nibble and a Byte?
4 bits are referred to as a Nibble. 8 bits are referred to as a Byte.
Define Hexadecimal.
A base 16 number system, which uses the denary digits 0 to 9 and the letters A to F.
Why/where is Hexadecimal used in computing?
Hexadecimal is a shorthand for binary that is easier for humans to read and write, since one hex digit represents exactly four bits (one nibble). It is used in memory dumps, MAC addresses, HTML colour codes, error codes, and when displaying machine code.
Define Memory Dump.
Contents of computer memory output to a screen or printer.
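A minimal Python sketch of why memory dumps use hexadecimal: each byte (8 bits) maps to exactly two hex digits, so hex is a compact, readable stand-in for binary.

```python
# Each byte (8 bits) maps to exactly two hexadecimal digits,
# which is why memory contents are dumped in hex rather than binary.
data = bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F])  # the ASCII bytes for "Hello"

hex_pairs = " ".join(f"{b:02X}" for b in data)
print(hex_pairs)  # 48 65 6C 6C 6F

# The same byte in binary takes four times as many digits:
print(f"{0xFF:08b} -> {0xFF:02X}")  # 11111111 -> FF
```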
What is a Character Set? How are strings converted using the character set?
Define character set: A list of all of the characters that can be used/represented by the computer hardware and software. Each character has a unique binary number.
How a string is converted using a character set: each character in the string is replaced by its code from the character set, and the string is stored as that sequence of binary values. [A full answer includes the definition above plus this.]
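The conversion above can be sketched in Python, using `ord` to look up each character's code and `format` to show it as an 8-bit binary value:

```python
# Converting a string with a character set: each character is looked up
# and its code is stored in sequence as a binary value.
text = "Hi"
codes = [ord(ch) for ch in text]            # character -> code number
binary = [format(c, "08b") for c in codes]  # code -> 8-bit binary
print(codes)   # [72, 105]
print(binary)  # ['01001000', '01101001']
```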
Define ASCII Code and specify its bit usage and character capacity.
ASCII is a character set for all characters on an English keyboard and control codes. It uses 7 bits, allowing for 128 different codes/characters.
What is Extended ASCII’s bit usage and character capacity?
Extended ASCII uses 8 bits, allowing for 256 different codes/characters.
Define Unicode and specify its standard bit usage and capacity.
Unicode is a character set which represents the languages of the world. It can use up to 4 bytes (32 bits) per character, giving a theoretical capacity of 2^32 unique codes (in practice the standard defines just over 1.1 million code points).
What is a key design feature of Unicode that correlates to ASCII?
The first 128 characters of Unicode are the same as ASCII.
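This overlap can be checked directly in Python, whose strings use Unicode code points:

```python
# The first 128 Unicode code points are identical to ASCII,
# so 'A' has code 65 in both character sets.
print(ord("A"))  # 65
print(chr(65))   # A

# Characters beyond ASCII need larger Unicode code points:
print(ord("€"))  # 8364, well outside the 0-127 ASCII range
```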
Compare and contrast between ASCII character set and Unicode character sets.
Similarities: both are character sets that assign a unique code to each character, and the first 128 codes of Unicode are identical to ASCII. Differences: ASCII uses 7 bits (Extended ASCII 8 bits) and only represents English characters, giving at most 128 (or 256) characters; Unicode uses up to 32 bits per character, so it can represent the characters of all languages, but each character may take more storage.
Section II: Number Systems, Conversion, and Arithmetic
List the first 12 Binary Weightings.
1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048
Describe the procedure and give an example of converting Binary to Denary.
Successive addition.
Each ‘1’ in a column adds the column value (weighting) to the total.
Example: 01101011 = 64 + 32 + 8 + 2 + 1 = 107 (in Denary).
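The successive-addition procedure can be sketched in Python, walking the bits from right to left and doubling the column weighting at each step:

```python
# Successive addition: each '1' bit adds its column weighting to the total.
def binary_to_denary(bits: str) -> int:
    total = 0
    weighting = 1  # rightmost column is worth 1
    for bit in reversed(bits):
        if bit == "1":
            total += weighting
        weighting *= 2  # each column to the left doubles
    return total

print(binary_to_denary("01101011"))  # 107 (64 + 32 + 8 + 2 + 1)
```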
Describe the procedure and give an example of converting Denary to Binary.
Successive subtraction.
By subtracting the binary weightings, from the largest that fits down to the smallest, until we reach 0, and writing a 1 in each column whose weighting was subtracted.
Example: To convert 58: 58 - 32 - 16 - 8 - 2 = 0, resulting in binary: 00111010. [If the number is odd, the rightmost bit is 1.]
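The successive-subtraction procedure can be sketched in Python, testing each weighting from largest to smallest:

```python
# Successive subtraction: take each weighting from largest to smallest;
# write 1 if it can be subtracted from what remains, otherwise 0.
def denary_to_binary(n: int, bits: int = 8) -> str:
    result = ""
    for weighting in [2 ** p for p in range(bits - 1, -1, -1)]:
        if n >= weighting:
            result += "1"
            n -= weighting
        else:
            result += "0"
    return result

print(denary_to_binary(58))  # 00111010
```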
Define Sign and Magnitude representation.
The left-most bit (MSB) is used to represent the sign (0 = positive, 1 = negative); the remaining bits represent the magnitude (the binary value).
Define One’s Complement.
Each binary digit in a number is reversed (flipped) to allow both negative and positive numbers to be represented.
Define Two’s Complement, and state when it’s positive/negative.
Two’s Complement is the one’s complement of a binary number plus 1 added to the rightmost bit. It is used to represent negative numbers: a two’s complement number is negative when its most significant bit is 1 and positive (or zero) when it is 0, because the MSB carries a negative weighting (e.g. -128 in an 8-bit number).
List the 3 steps to convert a positive denary number (N) into its negative Two’s Complement equivalent (-N), using -13 as an example.
1. Write N in binary: 13 = 00001101.
2. Invert every bit (the one’s complement): 11110010.
3. Add 1 to the result: 11110011, which is -13 in two’s complement.
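Assuming the standard procedure (write N in binary, invert every bit, add 1), the conversion can be sketched in Python:

```python
# Two's complement of a positive number N, in a fixed number of bits:
# step 1: write N in binary; step 2: invert the bits; step 3: add 1.
def twos_complement(n: int, bits: int = 8) -> str:
    binary = format(n, f"0{bits}b")                               # step 1
    flipped = "".join("1" if b == "0" else "0" for b in binary)   # step 2
    # step 3: add 1; the modulo keeps the result within `bits` bits
    return format((int(flipped, 2) + 1) % (2 ** bits), f"0{bits}b")

print(twos_complement(13))  # 11110011, i.e. -13
```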
What is the easiest procedure for Binary Subtraction?
Convert the second number (the subtrahend) into its 2’s complement, and then add the two numbers together. [Both numbers must be in two’s complement, don’t include overflow bits in the final answer].
Demonstrate the subtraction of 50 (00110010) from 103 (01100111) using 2’s Complement.
103 + (-50). 103 is 01100111.
The 2’s complement of 50 (00110010) is 11001110.
Adding them gives 1 00110101; discarding the overflow (carry) bit leaves 00110101 (53 in Denary).
[Both numbers must be in two’s complement; don’t include overflow bits in the final answer.]
What is Overflow in binary arithmetic?
It occurs when an addition in two’s complement (for example, of two large positive numbers) produces a result that is too large to be represented in the given number of bits. The carry spills into the sign bit, so the result is erroneously read as a negative number.
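Overflow can be demonstrated with a small Python sketch: 100 + 100 = 200 does not fit in the 8-bit signed range (-128 to 127), so the pattern is misread as a negative number.

```python
# Interpreting an 8-bit pattern as a signed two's complement value:
# if the sign bit (128) is set, the value is read as negative.
def as_signed_8bit(value: int) -> int:
    value &= 0xFF  # keep only 8 bits
    return value - 256 if value >= 128 else value

# 100 + 100 = 200 overflows the signed range -128..127:
print(as_signed_8bit(100 + 100))  # -56: the carry set the sign bit
```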