Test your basic knowledge | Computer Architecture And Design
Subject: engineering
Instructions:
Answer 38 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions. You may therefore find some answers obvious at times, but repeating the test reinforces your understanding each time you take it.
1. What is instruction-level parallelism?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4.
There does not exist the case of negative zero. Can perform a - b as a + (-b) without adjustments inside the CPU.
The number of tasks completed per unit of time.
2. What is throughput?
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
The number of tasks completed per unit of time.
3. What is the $sp register used for?
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
Points to the current top of the stack.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
4. What is main/primary memory?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
Memory used to hold programs while they are executing.
The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.
5. What is an Instruction Set Architecture (ISA)?
An abstract interface between the hardware and the lowest-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, etc.
Magnetic disk and flash memory are examples of this type of memory.
Storage that retains data only if it is receiving power.
10^9 cycles per second.
6. What is included in the term organization?
7. One reason why two's complement is used as opposed to signed magnitude or one's complement?
There does not exist the case of negative zero. Can perform a - b as a + (-b) without adjustments inside the CPU.
Instructions and data are stored in memory as numbers.
Points to the current top of the stack.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
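As an illustration of the a + (-b) point above, here is a minimal MIPS sketch (the registers used and the example values 7 and 3 are hypothetical choices, not part of the question): negating b is just complement-and-add-one, so subtraction can reuse the ordinary adder.
li    $t0, 7              # a = 7
li    $t1, 3              # b = 3
nor   $t2, $t1, $zero     # $t2 = ~b (bitwise complement of b)
addiu $t2, $t2, 1         # $t2 = ~b + 1 = -b (two's complement negation)
addu  $t3, $t0, $t2       # $t3 = a + (-b) = 4, the same result as subu $t3, $t0, $t1
Because two's complement has a single representation of zero, no special case is needed when the result is 0.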
8. An example of volatile memory
DRAM, RAM, and cache are examples of this type of memory.
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
Algorithm, programming language, compiler, instruction set architecture.
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
9. What are two examples of instruction-level parallelism?
When a segment of the application has an absolute maximum execution time.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
(1) Pipelining; (2) multiple instruction issue.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
10. An example of non-volatile memory
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Storage that retains data only if it is receiving power.
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time.
Magnetic disk and flash memory are examples of this type of memory.
11. An example of an improvement that would impact throughput (but not response time).
Add memory or additional processors to handle more tasks in a given time.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing.
The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.
Instructions and data are stored in memory as numbers.
12. An example of something typically associated with RISC architecture that is not typical in CISC architecture.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Dedicated argument registers to reduce stack usage during procedure calls, consistently sized opcodes, separate instructions for store and load, improved linkage (jal and jr save $ra without using the stack).
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
13. What are the classes of computing applications (five)?
(1) Response time; (2) throughput. Are response time and throughput directly proportional or only interrelated? Interrelated only.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing.
10^9 cycles per second.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
14. What is secondary memory?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
Non-volatile memory used to store programs and data between executions.
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
15. What does jal <proc> do?
10^9 cycles per second.
Points to the address of an instruction that caused an exception.
Non-volatile memory used to store programs and data between executions.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4.
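To make the jal/$ra/stack interplay concrete, here is a minimal MIPS sketch; the labels (main, double, done) and the doubling body are hypothetical, chosen only to show the calling pattern the answer above describes.
main:   li    $a0, 5            # hypothetical argument
        jal   double            # jal copies the return address into $ra and jumps to double
        move  $s0, $v0          # execution resumes here after the call; result is in $v0
        j     done
double: subu  $sp, $sp, 4       # push $ra on the stack (pointed to by $sp)
        sw    $ra, ($sp)
        addu  $v0, $a0, $a0     # body: return 2 * $a0
        lw    $ra, ($sp)        # pop $ra
        addu  $sp, $sp, 4
        jr    $ra               # return to the instruction after the jal
done:
Pushing $ra matters if the procedure makes further calls of its own, since a nested jal would overwrite it; the push/pop of $t0 in the answer follows the same subu/sw and lw/addu pattern.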
16. What is the $epc register used for?
Using fixed- or variable-length encoding.
Points to the address of an instruction that caused an exception.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Non-volatile memory used to store programs and data between executions.
17. An example of an improvement that would impact response time (but not throughput).
There does not exist the case of negative zero. Can perform a - b as a + (-b) without adjustments inside the CPU.
DRAM, RAM, and cache are examples of this type of memory.
Non-volatile memory used to store programs and data between executions.
A faster processor to complete the task sooner, or a better algorithm to complete the program/task sooner.
18. What is a real-time performance requirement?
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
10^9 cycles per second.
When a segment of the application has an absolute maximum execution time.
Non-volatile memory used to store programs and data between executions.
19. What does hardware refer to?
Instructions and data are stored in memory as numbers.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4.
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
20. Amdahl's Law
A faster processor to complete the task sooner, or a better algorithm to complete the program/task sooner.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Memory used to hold programs while they are executing.
The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.
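Written as a formula (a standard statement of the law; the symbols f and s are chosen here for illustration): if a fraction f of the execution time benefits from an enhancement that speeds that part up by a factor s, then
\[ \text{Speedup}_{\text{overall}} = \frac{1}{(1 - f) + f/s} \le \frac{1}{1 - f} \]
For example, with f = 0.5 and s = 10 the overall speedup is only 1 / (0.5 + 0.05) ≈ 1.8, which is why the unimproved fraction limits the benefit.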
21. What is non-volatile memory?
Storage that retains data even in the absence of a power source.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
Input, output, memory, datapath, control.
22. What are the base units of GHz?
Input, output, memory, datapath, control.
10^9 cycles per second.
Instructions and data are stored in memory as numbers.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
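A quick worked example (the 2 GHz figure is an arbitrary choice): because 1 GHz is 10^9 cycles per second, the clock cycle time is the reciprocal of the clock rate.
\[ \text{Cycle time} = \frac{1}{\text{Clock rate}}, \qquad \frac{1}{2 \times 10^{9}\ \text{cycles/s}} = 0.5\ \text{ns per cycle} \]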
23. What are the hardware/software components affecting program performance?
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Algorithm, programming language, compiler, instruction set architecture.
Memory used to hold programs while they are executing.
24. What is data-level parallelism?
Magnetic disk and flash memory are examples of this type of memory.
Points to the next instruction to be executed.
High-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit).
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
25. What is volatile memory?
Magnetic disk and flash memory are examples of this type of memory.
DRAM, RAM, and cache are examples of this type of memory.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Storage that retains data only if it is receiving power.
26. What are the industry standard benchmarks to measure performance (e.g., with different vendor chips)?
Computer speeds double every 18-24 months.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
(1) Response time; (2) throughput. Are response time and throughput directly proportional or only interrelated? Interrelated only.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4.
27. What is the $pc register used for?
Points to the next instruction to be executed.
Procedure call. Copies PC to $ra. # push $t0: subu $sp, $sp, 4; sw $t0, ($sp). # pop $t0: lw $t0, ($sp); addu $sp, $sp, 4.
When a segment of the application has an absolute maximum execution time.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing.
28. What is response time?
Magnetic disk and flash memory are examples of this type of memory.
The total time required for the computer to complete a task. (Includes disk accesses, memory accesses, I/O activities, OS overhead, and CPU execution time.)
Points to the next instruction to be executed.
Storage that retains data even in the absence of a power source.
29. Moore's Law
Computer speeds double every 18-24 months.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
10^9 cycles per second.
An abstract interface between the hardware and the lowest-level software that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, etc.
30. How is CPU performance measured?
Points to the next instruction to be executed.
Computers that are lodged in other devices where their presence is not immediately obvious.
(1) Pipelining; (2) multiple instruction issue.
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time.
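Spelling out the 1/execution-time relationship in the answer above (the machines X and Y are placeholders): higher performance means lower execution time, and relative performance is the ratio of execution times.
\[ \text{Performance}_X = \frac{1}{\text{Execution time}_X}, \qquad \frac{\text{Performance}_X}{\text{Performance}_Y} = \frac{\text{Execution time}_Y}{\text{Execution time}_X} \]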
31. What is thread-level parallelism?
Input, output, memory, datapath, control.
Also called TLP. A form of parallelization of computer code across multiple processors in parallel computing environments, which focuses on distributing execution processes (threads) across different parallel computing nodes.
Computer speeds double every 18-24 months.
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
32. Stored Program Concept
Computers that are lodged in other devices where their presence is not immediately obvious.
Instructions and data are stored in memory as numbers.
(1) Response time; (2) throughput. Are response time and throughput directly proportional or only interrelated? Interrelated only.
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
33. What is a supercomputer?
(1) Response time; (2) throughput. Are response time and throughput directly proportional or only interrelated? Interrelated only.
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Also called DLP. A form of parallelization of computing across multiple processors in parallel computing environments, which focuses on distributing the data across different parallel computing nodes.
34. What are the five classic components of a computer?
(1) Response time; (2) throughput. Are response time and throughput directly proportional or only interrelated? Interrelated only.
Input, output, memory, datapath, control.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Add memory or additional processors to handle more tasks in a given time.
35. What is price-performance?
Computer speeds double every 18-24 months.
Also called ILP. This is the potential overlap among instructions. There are two approaches: (1) hardware and (2) software.
The combination of performance (measured primarily in terms of compute performance and graphics performance) and the price of a system.
Instructions/unit time (e.g., instructions/sec), equal to 1/execution time.
36. What are embedded computers?
Points to the address of an instruction that caused an exception.
The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.
The specifics of a computer, including the detailed logic design and the packaging technology of the computer.
Computers that are lodged in other devices where their presence is not immediately obvious.
37. What is soft real-time?
Input, output, memory, datapath, control.
When it is possible to occasionally miss the time constraint on an event, as long as not too many are missed.
Points to the current top of the stack.
The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used.
38. How can you encode an ISA?
The most expensive computers, costing tens of millions of dollars. They emphasize floating-point performance.
10^9 cycles per second.
Using fixed- or variable-length encoding.
Desktop computer/laptop computer, server, supercomputer, embedded computer, mobile computing.