Test your basic knowledge |
Computer Architecture And Design
Subject: engineering
Instructions:
Answer 38 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions. So you might at times find an answer obvious, but you will see that it reinforces your understanding as you take the test each time.
1. What are the base units of GHz?
10^9 cycles per sec
The performance enhancement possible with a given improvement is limited by the amount that the improvement feature is used.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
DRAM, RAM, and cache are examples of this type of memory.
2. What is data-level parallelism?
Algorithm, programming language, compiler, instruction set architecture
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
3. What is secondary memory?
Non-volatile memory used to store programs and data between executions.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
1. response time, 2. throughput. Response time and throughput are interrelated only, not directly proportional.
4. Amdahl's Law
The number of tasks completed per unit of time.
The performance enhancement possible with a given improvement is limited by the amount that the improvement feature is used.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
Points to the next instruction to be executed.
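The limit Amdahl's Law describes can be made concrete with a small calculation. A minimal sketch in Python; the fraction and speedup values below are invented purely for illustration:

```python
# Amdahl's Law: overall speedup is limited by how much of the task
# the improvement actually touches.
#   speedup_overall = 1 / ((1 - f) + f / s)
# where f = fraction of execution time that is improved,
#       s = speedup factor of that fraction.

def amdahl_speedup(f: float, s: float) -> float:
    """Overall speedup when a fraction f of the work is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Speeding up 80% of a task by 10x gives only ~3.57x overall.
print(amdahl_speedup(0.8, 10))

# Even an infinite speedup of that 80% caps the overall speedup at 5x.
print(1.0 / (1.0 - 0.8))
```

Note how the untouched 20% of the task dominates once the improved fraction becomes cheap, which is exactly the "limited by the amount that the improvement feature is used" statement above.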
5. What does jal <proc> do?
DRAM, RAM, and cache are examples of this type of memory.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
10^9 cycles per sec
6. What are the industry standard benchmarks to measure performance (e.g. - with different vendor chips)?
Dedicated argument registers to reduce stack usage during procedure calls, consistently sized opcodes, separate instructions for store and load, improved linkage (jal and jr save $ra without using the stack)
Storage that retains data even in the absence of a power source.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
1. response time, 2. throughput. Response time and throughput are interrelated only, not directly proportional.
7. An example of an improvement that would impact throughput (but not response time).
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
Add memory or additional processors to handle more tasks in a given time.
(1) pipelining (2) multiple instruction issue
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
8. What does hardware refer to?
10^9 cycles per sec
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
Instructions and data are stored in memory as numbers
9. What are the five classic components of a computer?
Non-volatile memory used to store programs and data between executions.
Input, output, memory, datapath, control
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
10. What are embedded computers?
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
Computer speeds double every 18-24 months
10^9 cycles per sec
Computers that are lodged in other devices where their presence is not immediately obvious.
11. What is the $pc register used for?
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
Points to the next instruction to be executed.
Input, output, memory, datapath, control
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
12. What are the classes of computing applications (five)?
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
When a segment of the application has an absolute maximum execution time.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing
Non-volatile memory used to store programs and data between executions.
13. What is a real-time performance requirement?
When a segment of the application has an absolute maximum execution time.
Memory used to hold programs while they are executing.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
14. What is non-volatile memory?
Storage that retains data even in the absence of a power source.
Non-volatile memory used to store programs and data between executions.
Magnetic disk and flash memory are examples of this type of memory.
Points to the next instruction to be executed.
15. What is soft real-time?
Computers that are lodged in other devices where their presence is not immediately obvious.
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
There does not exist the case of negative zero. Can perform a-b as a+(-b) without adjustments inside the CPU.
Algorithm, programming language, compiler, instruction set architecture
16. What is instruction-level parallelism?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
17. What is a supercomputer?
1. response time, 2. throughput. Response time and throughput are interrelated only, not directly proportional.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
18. What is the $sp register used for?
Points to the current top of the stack
An abstract interface between the hardware and the lowest-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, etc.
Computers that are lodged in other devices where their presence is not immediately obvious.
Instructions and data are stored in memory as numbers
19. Moore's Law
Non-volatile memory used to store programs and data between executions.
A faster processor to complete the task sooner, or a better algorithm to complete the program/task sooner.
The number of tasks completed per unit of time.
Computer speeds double every 18-24 months
20. An example of something typically associated with RISC architecture that is not typical in CISC architecture.
Input, output, memory, datapath, control
Using fixed-length or variable-length encoding.
Dedicated argument registers to reduce stack usage during procedure calls, consistently sized opcodes, separate instructions for store and load, improved linkage (jal and jr save $ra without using the stack)
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
21. What are two examples of instruction-level parallelism?
The number of tasks completed per unit of time.
(1) pipelining (2) multiple instruction issue
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Points to the current top of the stack
22. What is volatile memory?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
Storage that retains data only if it is receiving power.
The number of tasks completed per unit of time.
Points to the next instruction to be executed.
23. What is main/primary memory?
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
Instructions and data are stored in memory as numbers
Memory used to hold programs while they are executing.
(1) pipelining (2) multiple instruction issue
24. What is throughput?
An abstract interface between the hardware and the lowest-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, etc.
The number of tasks completed per unit of time.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
25. What are the hardware/software components affecting program performance?
(1) pipelining (2) multiple instruction issue
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing
Algorithm, programming language, compiler, instruction set architecture
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
26. How can you encode an ISA?
Storage that retains data even in the absence of a power source.
Using fixed-length or variable-length encoding.
Points to the next instruction to be executed.
1. response time, 2. throughput. Response time and throughput are interrelated only, not directly proportional.
27. What is price performance?
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
10^9 cycles per sec
There does not exist the case of negative zero. Can perform a-b as a+(-b) without adjustments inside the CPU.
28. What is response time?
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
Memory used to hold programs while they are executing.
29. An example of volatile memory
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
Computer speeds double every 18-24 months
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
DRAM, RAM, and cache are examples of this type of memory.
30. One reason why two's complement is used as opposed to signed magnitude or one's complement?
Points to the current top of the stack
There does not exist the case of negative zero. Can perform a-b as a+(-b) without adjustments inside the CPU.
10^9 cycles per sec
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing
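Both two's-complement properties named above (no negative zero, and a-b computable as a+(-b)) can be checked directly. A minimal sketch in Python; the 8-bit width is an arbitrary choice for illustration:

```python
# Two's complement, illustrated with 8-bit values.
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF

def neg(x: int) -> int:
    """Two's-complement negation: invert the bits and add one."""
    return ((x ^ MASK) + 1) & MASK

def sub(a: int, b: int) -> int:
    """Compute a - b as a + (-b), with no special cases in the adder."""
    return (a + neg(b)) & MASK

# No negative zero: negating 0 yields 0 again (unlike sign-magnitude,
# where 0b00000000 and 0b10000000 encode two distinct zeros).
print(neg(0))      # 0
print(sub(7, 5))   # 2
print(sub(5, 7))   # 254, the two's-complement bit pattern for -2
```

The same adder hardware handles both additions and subtractions, which is the "without adjustments inside the CPU" point.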
31. What is included in the term organization?
32. An example of an improvement that would impact response time (but not throughput).
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
A faster processor to complete the task sooner, or a better algorithm to complete the program/task sooner.
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time
When it is possible to occasionally miss the time constraint on an event - as long as not too many are missed.
33. Stored Program Concept
Instructions and data are stored in memory as numbers
10^9 cycles per sec
There does not exist the case of negative zero. Can perform a-b as a+(-b) without adjustments inside the CPU.
Storage that retains data even in the absence of a power source.
34. An example of non - volatile memory
There does not exist the case of negative zero. Can perform a-b as a+(-b) without adjustments inside the CPU.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing
Magnetic disk and flash memory are examples of this type of memory.
35. What is the $epc register used for?
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time
Points to the address of an instruction that caused an exception
Using fixed-length or variable-length encoding.
Dedicated argument registers to reduce stack usage during procedure calls, consistently sized opcodes, separate instructions for store and load, improved linkage (jal and jr save $ra without using the stack)
36. How is CPU performance measured?
Computer speeds double every 18-24 months
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time
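The relation performance = 1/execution time can be sketched in a few lines of Python; the execution times below are made-up values, used only to show how the reciprocal turns "smaller time" into "larger performance":

```python
# CPU performance as the reciprocal of execution time: a machine that
# finishes the same task in less time is "faster" by the ratio of the
# two execution times.

def performance(execution_time_s: float) -> float:
    """Performance of a machine that completes the task in the given time."""
    return 1.0 / execution_time_s

# Hypothetical measurements for the same task on two machines.
time_a, time_b = 10.0, 15.0  # seconds

speedup = performance(time_a) / performance(time_b)
print(speedup)  # 1.5: machine A is 1.5x faster than machine B
```

Note that the performance ratio equals the inverse ratio of execution times (15/10), so either quantity can be used to compare machines.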
37. What is thread-level parallelism?
Magnetic disk and flash memory are examples of this type of memory.
The number of tasks completed per unit of time.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Algorithm, programming language, compiler, instruction set architecture
38. What is an Instruction Set Architecture (ISA)?
An abstract interface between the hardware and the lowest-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, etc.
DRAM, RAM, and cache are examples of this type of memory.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing