Intel® Hyper-Threading Technology

Source: Intel SDM Vol. 1, page 50 (253665-sdm-vol-1.pdf)

2.2.8 Intel® Hyper-Threading Technology
Intel Hyper-Threading Technology (Intel HT Technology) was developed to improve the performance of IA-32 processors when executing multi-threaded operating system and application code or single-threaded applications under multi-tasking environments. The technology enables a single physical processor to execute two or more separate code streams (threads) concurrently using shared execution resources.

Intel HT Technology is one form of hardware multi-threading capability in IA-32 processor families. It differs from multi-processor capability using separate physically distinct packages, with each physical processor package mated with a physical socket. Intel HT Technology provides hardware multi-threading capability with a single physical package by using shared execution resources in a processor core.
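
Software can query this capability through the CPUID instruction: leaf 1 reports the HTT feature flag in EDX bit 28 and the maximum number of addressable logical-processor IDs per package in EBX bits 23:16. Below is a minimal sketch, assuming a GCC or Clang toolchain on an x86 system; the `__get_cpuid` helper comes from the compiler's `<cpuid.h>` header, not from the SDM itself.

```c
#include <stdio.h>
#include <cpuid.h>   /* __get_cpuid: GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not available\n");
        return 1;
    }

    /* EDX bit 28 (HTT): the package can contain more than one logical processor. */
    int htt = (edx >> 28) & 1;
    /* EBX bits 23:16: maximum addressable logical-processor IDs (valid when HTT = 1). */
    int max_ids = (ebx >> 16) & 0xFF;

    printf("HTT flag: %d, max addressable logical-processor IDs: %d\n", htt, max_ids);
    return 0;
}
```

Note that the HTT flag only says the package may hold more than one logical processor; the actual count of enabled logical processors has to be read from the topology leaves or from the operating system.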
Architecturally, an IA-32 processor that supports Intel HT Technology consists of two or more logical processors, each of which has its own IA-32 architectural state. Each logical processor consists of a full set of IA-32 data registers, segment registers, control registers, debug registers, and most of the MSRs. Each also has its own advanced programmable interrupt controller (APIC).
Figure 2-5 shows a comparison of a processor that supports Intel HT Technology (implemented with two logical processors) and a traditional dual processor system.

[Figure 2-5: Comparison of a processor supporting Intel HT Technology (two logical processors) and a traditional dual processor system]
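
The arrangement the figure compares can also be observed on a running system. The sketch below assumes Linux, which exposes sibling logical processors of each physical core through sysfs; the `thread_siblings_list` path is a Linux kernel convention, not something defined in the SDM.

```c
#include <stdio.h>

int main(void)
{
    char buf[256];
    /* Lists the logical processors that share one physical core with cpu0. */
    FILE *f = fopen(
        "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof buf, f))
        /* e.g. "0,4" on a machine where CPU 0 and CPU 4 are HT siblings */
        printf("Logical processors sharing a core with cpu0: %s", buf);
    fclose(f);
    return 0;
}
```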
Intel HT Technology leverages the process- and thread-level parallelism found in contemporary operating systems and high-performance applications by providing two or more logical processors on a single chip. This configuration allows two or more threads to be executed simultaneously on each physical processor. Each logical processor executes instructions from an application thread using the resources in the processor core. The core executes these threads concurrently, using out-of-order instruction scheduling to maximize the use of execution units during each clock cycle.
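
As a rough illustration of two application threads running on two logical processors, here is a minimal sketch assuming Linux with glibc. `pthread_setaffinity_np`, `sched_getcpu`, and the choice of logical CPUs 0 and 1 are assumptions about the target system; whether those two IDs are HT siblings of the same physical core depends on the machine's topology (see the sysfs example above).

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* CPU-bound busy work standing in for an application thread. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;
    printf("worker finished on logical processor %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t[2];

    for (int i = 0; i < 2; i++) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(i, &set);   /* assumption: logical CPUs 0 and 1 exist */
        pthread_create(&t[i], NULL, worker, NULL);
        /* Pin each thread to its own logical processor. */
        pthread_setaffinity_np(t[i], sizeof(set), &set);
    }
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Build with `gcc -pthread`. If the two chosen logical CPUs are siblings of one core, the threads share that core's execution resources as described above; if they are on different cores, each thread gets a core to itself.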