Book Introduction

Computer Systems: An Integrated Approach (English Edition) 2025 | PDF | Epub | mobi | Kindle ebook editions | Baidu Cloud download

Computer Systems: An Integrated Approach (English Edition)
  • By Umakishore Ramachandran et al. (USA)
  • Publisher: China Machine Press, Beijing
  • ISBN: 9787111319559
  • Publication year: 2011
  • Listed page count: 741 pages
  • File size: 126 MB
  • File page count: 770 pages
  • Subject: Computer architecture (in English)

PDF Download


Click here for the online PDF ebook download [recommended: cloud extraction, quick and convenient]. Direct PDF download; works on both mobile and PC.
Torrent download [fast via BT]. Note: please use the BT download client FDM; see the software download page. Direct-link download [convenient but slower]. [Read the book online] [Get the extraction password online]

Download Instructions

Computer Systems: An Integrated Approach (English Edition), PDF ebook download

The downloaded file is a RAR archive. Use extraction software to unpack it and obtain the PDF.

We recommend downloading with Free Download Manager (FDM), a free, ad-free, cross-platform BT download tool. All resources on this site are packaged as BT torrents, so a dedicated BT client is required, such as BitComet, qBittorrent, or uTorrent. Because this title is not currently a popular resource, Thunder (Xunlei) is not recommended; once the resource becomes popular, Thunder will also work.

(The file page count should be greater than the listed page count, except for multi-volume ebooks.)

Note: all archives on this site require an extraction password. Click to download the archive extraction tool.
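
For readers who prefer to unpack the archive from a script rather than a GUI tool, the following is a minimal sketch, not part of the site's instructions: it assumes the downloaded archive is named book.rar (a placeholder), that the output goes to a folder named extracted, and that the third-party Python package rarfile plus an unrar backend are installed.

    # Minimal sketch: unpack the downloaded RAR archive to obtain the PDF.
    # Assumptions: the archive is named "book.rar" (placeholder name), the
    # third-party "rarfile" package is installed, and an unrar backend is
    # available on the system.
    import rarfile

    archive = rarfile.RarFile("book.rar")
    # If the site supplies an extraction password, pass it via pwd="..." here.
    archive.extractall(path="extracted")
    archive.close()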

Table of Contents

Chapter 1 Introduction
1.1 What Is Inside a Box?
1.2 Levels of Abstraction in a Computer System
1.3 The Role of the Operating System
1.4 What Is Happening Inside the Box?
1.4.1 Launching an Application on the Computer
1.5 Evolution of Computer Hardware
1.6 Evolution of Operating Systems
1.7 Roadmap of the Rest of the Book
Exercises
Bibliographic Notes and Further Reading

Chapter 2 Processor Architecture
2.1 What Is Involved in Processor Design?
2.2 How Do We Design an Instruction Set?
2.3 A Common High-Level Language Feature Set
2.4 Expressions and Assignment Statements
2.4.1 Where To Keep the Operands?
2.4.2 How Do We Specify a Memory Address in an Instruction?
2.4.3 How Wide Should Each Operand Be?
2.4.4 Endianness
2.4.5 Packing of Operands and Alignment of Word Operands
2.5 High-Level Data Abstractions
2.5.1 Structures
2.5.2 Arrays
2.6 Conditional Statements and Loops
2.6.1 If-Then-Else Statement
2.6.2 Switch Statement
2.6.3 Loop Statement
2.7 Checkpoint
2.8 Compiling Function Calls
2.8.1 State of the Caller
2.8.2 Remaining Chores with Procedure Calling
2.8.3 Software Convention
2.8.4 Activation Record
2.8.5 Recursion
2.8.6 Frame Pointer
2.9 Instruction-Set Architectural Choices
2.9.1 Additional Instructions
2.9.2 Additional Addressing Modes
2.9.3 Architecture Styles
2.9.4 Instruction Format
2.10 LC-2200 Instruction Set
2.10.1 Instruction Format
2.10.2 LC-2200 Register Set
2.11 Issues Influencing Processor Design
2.11.1 Instruction Set
2.11.2 Influence of Applications on Instruction Set Design
2.11.3 Other Issues Driving Processor Design
Summary
Exercises
Bibliographic Notes and Further Reading

Chapter 3 Processor Implementation
3.1 Architecture versus Implementation
3.2 What Is Involved in Processor Implementation?
3.3 Key Hardware Concepts
3.3.1 Circuits
3.3.2 Hardware Resources of the Datapath
3.3.3 Edge-Triggered Logic
3.3.4 Connecting the Datapath Elements
3.3.5 Toward Bus-Based Design
3.3.6 Finite State Machine (FSM)
3.4 Datapath Design
3.4.1 ISA and Datapath Width
3.4.2 Width of the Clock Pulse
3.4.3 Checkpoint
3.5 Control Unit Design
3.5.1 ROM Plus State Register
3.5.2 FETCH Macro State
3.5.3 DECODE Macro State
3.5.4 EXECUTE Macro State: ADD Instruction (Part of R-Type)
3.5.5 EXECUTE Macro State: NAND Instruction (Part of R-Type)
3.5.6 EXECUTE Macro State: JALR Instruction (Part of J-Type)
3.5.7 EXECUTE Macro State: LW Instruction (Part of I-Type)
3.5.8 EXECUTE Macro State: SW and ADDI Instructions (Part of I-Type)
3.5.9 EXECUTE Macro State: BEQ Instruction (Part of I-Type)
3.5.10 Engineering a Conditional Branch in the Microprogram
3.5.11 DECODE Macro State Revisited
3.6 Alternative Style of Control Unit Design
3.6.1 Microprogrammed Control
3.6.2 Hardwired Control
3.6.3 Choosing Between the Two Control Design Styles
Summary
Historical Perspective
Exercises
Bibliographic Notes and Further Reading

Chapter 4 Interrupts, Traps, and Exceptions
4.1 Discontinuities in Program Execution
4.2 Dealing with Program Discontinuities
4.3 Architectural Enhancements to Handle Program Discontinuities
4.3.1 Modifications to FSM
4.3.2 A Simple Interrupt Handler
4.3.3 Handling Cascaded Interrupts
4.3.4 Returning from the Handler
4.3.5 Checkpoint
4.4 Hardware Details for Handling Program Discontinuities
4.4.1 Datapath Details for Interrupts
4.4.2 Details of Receiving the Address of the Handler
4.4.3 Stack for Saving/Restoring
4.5 Putting It All Together
4.5.1 Summary of Architectural/Hardware Enhancements
4.5.2 Interrupt Mechanism at Work
Summary
Exercises
Bibliographic Notes and Further Reading

Chapter 5 Processor Performance and Pipelined Processor Design
5.1 Space and Time Metrics
5.2 Instruction Frequency
5.3 Benchmarks
5.4 Increasing the Processor Performance
5.5 Speedup
5.6 Increasing the Throughput of the Processor
5.7 Introduction to Pipelining
5.8 Toward an Instruction-Processing Assembly Line
5.9 Problems with a Simple-Minded Instruction Pipeline
5.10 Fixing the Problems with the Instruction Pipeline
5.11 Datapath Elements for the Instruction Pipeline
5.12 Pipeline-Conscious Architecture and Implementation
5.12.1 Anatomy of an Instruction Passage Through the Pipeline
5.12.2 Design of the Pipeline Registers
5.12.3 Implementation of the Stages
5.13 Hazards
5.13.1 Structural Hazard
5.13.2 Data Hazard
5.13.3 Control Hazard
5.13.4 Summary of Hazards
5.14 Dealing with Program Discontinuities in a Pipelined Processor
5.15 Advanced Topics in Processor Design
5.15.1 Instruction-Level Parallelism
5.15.2 Deeper Pipelines
5.15.3 Revisiting Program Discontinuities in the Presence of Out-Of-Order Processing
5.15.4 Managing Shared Resources
5.15.5 Power Consumption
5.15.6 Multicore Processor Design
5.15.7 Intel Core Microarchitecture: An Example Pipeline
Summary
Historical Perspective
Exercises
Bibliographic Notes and Further Reading

Chapter 6 Processor Scheduling
6.1 Introduction
6.2 Programs and Processes
6.3 Scheduling Environments
6.4 Scheduling Basics
6.5 Performance Metrics
6.6 Nonpreemptive Scheduling Algorithms
6.6.1 First-Come First-Served (FCFS)
6.6.2 Shortest Job First (SJF)
6.6.3 Priority
6.7 Preemptive Scheduling Algorithms
6.7.1 Round Robin Scheduler
6.8 Combining Priority and Preemption
6.9 Meta-Schedulers
6.10 Evaluation
6.11 Impact of Scheduling on Processor Architecture
Summary and a Look Ahead
Linux Scheduler: A Case Study
Historical Perspective
Exercises
Bibliographic Notes and Further Reading

Chapter 7 Memory Management Techniques
7.1 Functionalities Provided by a Memory Manager
7.2 Simple Schemes for Memory Management
7.3 Memory Allocation Schemes
7.3.1 Fixed-Size Partitions
7.3.2 Variable-Size Partitions
7.3.3 Compaction
7.4 Paged Virtual Memory
7.4.1 Page Table
7.4.2 Hardware for Paging
7.4.3 Page Table Setup
7.4.4 Relative Sizes of Virtual and Physical Memories
7.5 Segmented Virtual Memory
7.5.1 Hardware for Segmentation
7.6 Paging versus Segmentation
7.6.1 Interpreting the CPU-Generated Address
Summary
Historical Perspective
MULTICS
Intel's Memory Architecture
Exercises
Bibliographic Notes and Further Reading

Chapter 8 Details of Page-Based Memory Management
8.1 Demand Paging
8.1.1 Hardware for Demand Paging
8.1.2 Page Fault Handler
8.1.3 Data Structures for Demand-Paged Memory Management
8.1.4 Anatomy of a Page Fault
8.2 Interaction Between the Process Scheduler and Memory Manager
8.3 Page Replacement Policies
8.3.1 Belady's Min
8.3.2 Random Replacement
8.3.3 First In First Out (FIFO)
8.3.4 Least Recently Used (LRU)
8.3.5 Second Chance Page Replacement Algorithm
8.3.6 Review of Page Replacement Algorithms
8.4 Optimizing Memory Management
8.4.1 Pool of Free Page Frames
8.4.2 Thrashing
8.4.3 Working Set
8.4.4 Controlling Thrashing
8.5 Other Considerations
8.6 Translation Lookaside Buffer (TLB)
8.6.1 Address Translation with TLB
8.7 Advanced Topics in Memory Management
8.7.1 Multi-Level Page Tables
8.7.2 Access Rights As Part of the Page Table Entry
8.7.3 Inverted Page Tables
Summary
Exercises
Bibliographic Notes and Further Reading

Chapter 9 Memory Hierarchy
9.1 The Concept of a Cache
9.2 Principle of Locality
9.3 Basic Terminologies
9.4 Multilevel Memory Hierarchy
9.5 Cache Organization
9.6 Direct-Mapped Cache Organization
9.6.1 Cache Lookup
9.6.2 Fields of a Cache Entry
9.6.3 Hardware for a Direct-Mapped Cache
9.7 Repercussion on Pipelined Processor Design
9.8 Cache Read/Write Algorithms
9.8.1 Read Access to the Cache from the CPU
9.8.2 Write Access to the Cache from the CPU
9.9 Dealing with Cache Misses in the Processor Pipeline
9.9.1 Effect of Memory Stalls Due to Cache Misses on Pipeline Performance
9.10 Exploiting Spatial Locality to Improve Cache Performance
9.10.1 Performance Implications of Increased Block Size
9.11 Flexible Placement
9.11.1 Fully Associative Cache
9.11.2 Set Associative Cache
9.11.3 Extremes of Set Associativity
9.12 Instruction and Data Caches
9.13 Reducing Miss Penalty
9.14 Cache Replacement Policy
9.15 Recapping Types of Misses
9.16 Integrating TLB and Caches
9.17 Cache Controller
9.18 Virtually Indexed Physically Tagged Cache
9.19 Recap of Cache Design Considerations
9.20 Main Memory Design Considerations
9.20.1 Simple Main Memory
9.20.2 Main Memory and Bus to Match Cache Block Size
9.20.3 Interleaved Memory
9.21 Elements of Modern Main Memory Systems
9.21.1 Page Mode DRAM
9.22 Performance Implications of Memory Hierarchy
Summary
Memory Hierarchy of Modern Processors: An Example
Exercises
Bibliographic Notes and Further Reading

Chapter 10 Input/Output and Stable Storage
10.1 Communication Between the CPU and the I/O Devices
10.1.1 Device Controller
10.1.2 Memory Mapped I/O
10.2 Programmed I/O
10.3 DMA
10.4 Buses
10.5 I/O Processor
10.6 Device Driver
10.6.1 An Example
10.7 Peripheral Devices
10.8 Disk Storage
10.8.1 The Saga of Disk Technology
10.9 Disk Scheduling Algorithms
10.9.1 First-Come-First-Served (FCFS)
10.9.2 Shortest Seek Time First (SSTF)
10.9.3 SCAN (Elevator Algorithm)
10.9.4 C-SCAN (Circular Scan)
10.9.5 LOOK and C-LOOK
10.9.6 Disk Scheduling Summary
10.9.7 Comparison of the Algorithms
10.10 Solid State Drive
10.11 Evolution of I/O Buses and Device Drivers
10.11.1 Dynamic Loading of Device Drivers
10.11.2 Putting It All Together
Summary
Exercises
Bibliographic Notes and Further Reading

Chapter 11 File System
11.1 Attributes
11.2 Design Choices in Implementing a File System on a Disk Subsystem
11.2.1 Contiguous Allocation
11.2.2 Contiguous Allocation with Overflow Area
11.2.3 Linked Allocation
11.2.4 File Allocation Table (FAT)
11.2.5 Indexed Allocation
11.2.6 Multilevel Indexed Allocation
11.2.7 Hybrid Indexed Allocation
11.2.8 Comparison of the Allocation Strategies
11.3 Putting It All Together
11.3.1 i-node
11.4 Components of the File System
11.4.1 Anatomy of Creating and Writing Files
11.5 Interaction Among the Various Subsystems
11.6 Layout of the File System on the Physical Media
11.6.1 In-Memory Data Structures
11.7 Dealing with System Crashes
11.8 File Systems for Other Physical Media
11.9 A Glimpse of Modern File Systems
11.9.1 Linux
11.9.2 Microsoft Windows
Summary
Exercises
Bibliographic Notes and Further Reading

Chapter 12 Multithreaded Programming and Multiprocessors
12.1 Why Multithreading?
12.2 Programming Support for Threads
12.2.1 Thread Creation and Termination
12.2.2 Communication Among Threads
12.2.3 Read-Write Conflict, Race Condition, and Nondeterminism
12.2.4 Synchronization Among Threads
12.2.5 Internal Representation of Data Types Provided by the Threads Library
12.2.6 Simple Programming Examples
12.2.7 Deadlocks and Livelocks
12.2.8 Condition Variables
12.2.9 A Complete Solution for the Video Processing Example
12.2.10 Discussion of the Solution
12.2.11 Rechecking the Predicate
12.3 Summary of Thread Function Calls and Threaded Programming Concepts
12.4 Points to Remember in Programming with Threads
12.5 Using Threads as Software Structuring Abstraction
12.6 POSIX pthreads Library Calls Summary
12.7 OS Support for Threads
12.7.1 User Level Threads
12.7.2 Kernel-Level Threads
12.7.3 Solaris Threads: An Example of Kernel-Level Threads
12.7.4 Threads and Libraries
12.8 Hardware Support for Multithreading in a Uniprocessor
12.8.1 Thread Creation, Termination, and Communication Among Threads
12.8.2 Inter-Thread Synchronization
12.8.3 An Atomic Test-and-Set Instruction
12.8.4 Lock Algorithm with Test-and-Set Instruction
12.9 Multiprocessors
12.9.1 Page Tables
12.9.2 Memory Hierarchy
12.9.3 Ensuring Atomicity
12.10 Advanced Topics
12.10.1 OS Topics
12.10.2 Architecture Topics
12.10.3 The Road Ahead: Multicore and Many-Core Architectures
Summary
Historical Perspective
Exercises
Bibliographic Notes and Further Reading

Chapter 13 Fundamentals of Networking and Network Protocols
13.1 Preliminaries
13.2 Basic Terminologies
13.3 Networking Software
13.4 Protocol Stack
13.4.1 Internet Protocol Stack
13.4.2 OSI Model
13.4.3 Practical Issues with Layering
13.5 Application Layer
13.6 Transport Layer
13.6.1 Stop-and-Wait Protocols
13.6.2 Pipelined Protocols
13.6.3 Reliable Pipelined Protocol
13.6.4 Dealing with Transmission Errors
13.6.5 Transport Protocols on the Internet
13.6.6 Transport Layer Summary
13.7 Network Layer
13.7.1 Routing Algorithms
13.7.2 Internet Addressing
13.7.3 Network Service Model
13.7.4 Network Routing versus Forwarding
13.7.5 Network Layer Summary
13.8 Link Layer and Local Area Networks
13.8.1 Ethernet
13.8.2 CSMA/CD
13.8.3 IEEE 802.3
13.8.4 Wireless LAN and IEEE 802.11
13.8.5 Token Ring
13.8.6 Other Link-Layer Protocols
13.9 Networking Hardware
13.10 Relationship Between the Layers of the Protocol Stack
13.11 Data Structures for Packet Transmission
13.11.1 TCP/IP Header
13.12 Message Transmission Time
13.13 Summary of Protocol-Layer Functionalities
13.14 Networking Software and the Operating System
13.14.1 Socket Library
13.14.2 Implementation of the Protocol Stack in the Operating System
13.14.3 Network Device Driver
13.15 Network Programming Using UNIX Sockets
13.16 Network Services and Higher-Level Protocols
Summary
Historical Perspective
From Telephony to Computer Networking
Evolution of the Internet
PC and the Arrival of LAN
Evolution of LAN
Exercises
Bibliographic Notes and Further Reading

Chapter 14 Epilogue: A Look Back at the Journey
14.1 Processor Design
14.2 Process
14.3 Virtual Memory System and Memory Management
14.4 Memory Hierarchy
14.5 Parallel System
14.6 Input/Output Systems
14.7 Persistent Storage
14.8 Network
Concluding Remarks
Appendix: Network Programming with UNIX Sockets
Bibliography
