PCI Express System Architecture


The core electrical feature of a PXI chassis is the communication bus.

For legacy instruments, PXI supports PCI communication, a 32-bit bus commonly used for transmitting and receiving data in parallel. For newer instruments, PXI Express adds PCI Express communication; multiple lanes can be grouped together to form x2, x4, x8, x16, and x32 links to increase bandwidth.

These links form a connection between a controller and a slot where an instrument is seated. A PXI chassis incorporates the latest communication buses while routing them to a variety of slot options to accommodate the requirements of a peripheral module. In addition to the communication buses, the electrical specification also defines the timing and synchronization capabilities.

It includes the definition of the PXI 10 MHz system clock, which is distributed to all peripheral modules in a system. This common reference clock can be used to synchronize multiple modules in a measurement or control system.

For example, triggers can be used to synchronize the operation of several PXI peripheral modules. For applications that require higher performance, the specification defines the PXI star trigger network, which adds a higher-performance synchronization feature set to the PXI system.

The star trigger network implements a dedicated trigger line between the system timing slot (denoted by a diamond or square glyph surrounding the slot number for PXI and PXI Express, respectively) and the other peripheral slots. A timing and synchronization module, a star trigger controller, is installed in this slot to provide precise clocks and trigger signals to other peripheral modules. No two devices are assigned the same address range, thus ensuring the 'plug and play' nature of the PCI system.

A target device on the PCI bus claims the cycle and completes the transfer. Bus masters can communicate directly with any other devices, including system memory associated with the North bridge. A device driver executing on the CPU configures the device-specific configuration register space of an associated PCI device. A configured PCI device that is bus-master capable can initiate its own transactions, which allows it to communicate with any other PCI target device, including system memory associated with the North bridge.

Current high-end workstation and server applications require greater bandwidth. Figure shows an example of a later generation Intel PCI chipset. The two chips are connected by a proprietary Intel high throughput, low pin count bus called the Hub Link. The advantage of this architecture over previous architectures is that the IDE, USB, Ethernet and audio devices do not transfer their data through the PCI bus to memory as is the case with earlier chipsets.

Instead, they do so through the Hub Link. Hub Link is a higher-performance bus than PCI; in other words, these devices bypass the PCI bus when communicating with memory. The result is improved performance. This system has features similar to those described in the earlier figure. These buses each support one connector in which a high-end peripheral card may be installed. Recall that PCI uses reflected-wave signaling drivers, which are weaker drivers with slower rise and fall times compared to incident-wave signaling drivers.

It is a challenge to design a 66 MHz device or system that satisfies the signal timing requirements. A 66 MHz PCI-based motherboard is routed with shorter signal traces to ensure shorter signal propagation delays. In addition, the bus carries fewer loads in order to ensure faster signal rise and fall times. Taking into account typical board impedances and minimum signal trace lengths, it is possible to interconnect a maximum of four to five 66 MHz PCI devices.

This is a significant limitation for a system that requires many interconnected devices. The solution requires the addition of PCI bridges, and hence multiple buses, to interconnect devices. This solution is expensive and consumes additional board real estate.

In addition, transactions between devices on opposite sides of a bridge complete with greater latency because bridges implement delayed transactions. This requires bridges to retry all transactions that must cross to the other side with the exception of memory writes which are posted.

PCI bus efficiency is reduced as a result of the static clock method of driving and latching signals and because reflected-wave signaling is used. Some of the factors that contribute to this reduced efficiency are listed below. The PCI specification allows master and target devices to insert wait states during data phases of a bus cycle. Slow devices add wait states, which reduces the efficiency of bus cycles.

PCI bus cycles do not indicate transfer size. This makes buffer management within master and target devices inefficient. When a target retries a transaction, the master must guess when to attempt it again; if the master tries too soon, the target may retry the transaction again.

If the master waits too long to retry, the latency to complete a data transfer is increased. Similarly, if a target disconnects a transaction, the master must guess when to resume the bus cycle at a later time. Processor cache snooping adds overhead of its own, resulting in additional wait states during PCI bus master accesses of system memory.

The North bridge or MCH must assume all system memory address space is cachable even though this may not be the case. PCI bus cycles provide no mechanism by which to indicate an access to non-cachable memory address space.

PCI architecture observes strict ordering rules as defined by the specification. Even if a PCI application does not require these strict ordering rules, PCI bus cycles provide no mechanism to allow relaxed ordering.

Observing relaxed ordering rules allows bus cycles, especially those that cross a bridge, to complete with reduced latency. PCI interrupt handling architecture is inefficient, especially because multiple devices share a PCI interrupt signal. Additional software latency is incurred while software discovers which of the devices sharing an interrupt signal actually generated the interrupt. PCI error handling is also crude: ultimately, the system shuts down when an error is detected.

This is a severe response. A more appropriate response might be to detect the error and attempt error recovery. PCI does not require error recovery features, nor does it support an extensive register set for documenting a variety of detectable errors. The PCI-X chipset considered next has similarities to the 8XX chipset described earlier, with a Hub Link 2.0 connection between its hubs. PCI-X signals are registered.

A registered signal requires a smaller setup time to sample than the non-registered signals employed in PCI. The time gained from reduced setup time and clock-to-out time goes toward increased clock frequency capability and the ability to support more devices on the bus at a given frequency compared to PCI.

Following the first data phase, the PCI-X bus does not allow wait states during subsequent data phases. Most PCI-X bus cycles are burst cycles, and data is generally transferred in blocks of no less than 128 bytes.

This results in higher bus utilization. Further, the transfer size is specified in the attribute phase of PCI-X transactions. This allows for more efficient device buffer management. Figure is an example of a PCI-X burst memory read transaction. Consider an example of the split transaction protocol supported by PCI-X for delaying transactions.

This protocol is illustrated in the accompanying figure. A requester initiates a read transaction. The completer that claims the bus cycle may be unable to return the requested data immediately. Rather than signaling a retry, as would be the case under the PCI protocol, the completer memorizes the transaction address, transaction type, byte count, and requester ID, and signals a split response. This prompts the requester to end the bus cycle, and the bus goes idle.

The PCI-X bus is now available for other transactions, resulting in more efficient bus utilization. Meanwhile, the requester simply waits for the completer to supply it the requested data at a later time. Once the completer has gathered the requested data, it then arbitrates and obtains bus ownership and initiates a split completion bus cycle during which it returns the requested data. The requester claims the split completion bus cycle and accepts the data from the completer.

The split completion bus cycle is very much like a write bus cycle. Exactly two bus transactions are needed to complete the entire data transfer.


In between these two bus transactions (the read request and the split completion), the bus is available for other transactions. The requester also receives the requested data in a very efficient manner.
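To make the split-response bookkeeping concrete, here is a minimal C sketch of the state a completer captures before ending the bus cycle, as described above. The struct and field names are illustrative assumptions, not taken from any PCI-X specification header.

```c
#include <stdint.h>

/* Illustrative sketch: what a PCI-X completer memorizes when it signals
 * a split response, so it can later initiate the split completion. */
struct pcix_split_request {
    uint64_t address;      /* starting address of the read request      */
    uint8_t  type;         /* transaction type (e.g., memory read)      */
    uint16_t byte_count;   /* total bytes the requester asked for       */
    uint16_t requester_id; /* bus/device/function of the requester      */
    uint8_t  tag;          /* distinguishes outstanding requests        */
};
```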

To generate an interrupt request, a PCI-X device initiates a memory write transaction targeting the Host (North) bridge. The data written is a unique interrupt vector associated with the device generating the interrupt. With this vector, the CPU is immediately able to run an interrupt service routine to service the interrupting device. There is no software overhead in determining which device generated the interrupt.
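The mechanism above amounts to an ordinary posted memory write carrying a vector. A hypothetical C sketch, where msi_address and msi_data stand in for values configuration software would have programmed into the device's MSI capability registers:

```c
#include <stdint.h>

/* Sketch of message-signaled interrupt delivery as described above:
 * the device raises an interrupt by performing a plain memory write of
 * its assigned vector to an address owned by the host bridge. */
static void signal_msi(volatile uint32_t *msi_address, uint32_t msi_data)
{
    *msi_address = msi_data;  /* one posted memory write = one interrupt */
}
```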

Also, unlike in the PCI architecture, no interrupt pins are required. PCI Express architecture implements the MSI protocol, resulting in reduced interrupt servicing latency and elimination of interrupt signals. PCI-X transactions also carry attribute bits: a No Snoop attribute, the result of which is improved performance during accesses to non-cachable memory, and a Relaxed Ordering (RO) bit. We will not get into the details here. Suffice it to say that transactions with the RO bit set can complete on the bus in any order with respect to other transactions that are pending completion.

The PCI-X 2.0 specification defines a faster version of the bus, described next. This diagram is the author's best guess as to what a PCI-X 2.0 system might look like. With the aid of a strobe clock, a PCI-X 2.0 bus transfers data two times or four times per 133 MHz clock. PCI-X 2.0 also adds ECC support, which allows auto-correction of single-bit errors and detection and reporting of multi-bit errors. A noteworthy point to remember is that, with very fast signal timing, it is only possible to support one connector on the PCI-X 2.0 bus.

Data is transmitted from a device on one set of signals and received on another set of signals. A Lane consists of one differential signal pair in each direction. A x1 Link consists of 1 Lane, or 1 differential signal pair in each direction, for a total of 4 signals.

A x32 Link consists of 32 Lanes, or 32 signal pairs in each direction, for a total of 128 signals. The Link supports a symmetric number of Lanes in each direction. During hardware initialization, the Link width and frequency of operation are initialized automatically by the devices on opposite ends of the Link. No OS or firmware is involved during Link-level initialization. The figure shows the electrical characteristics of a PCI Express signal. The common mode voltage can be any voltage between 0 V and 3.6 V.

The differential driver is DC isolated from the differential receiver at the opposite end of the Link by placing a capacitor at the driver side of the Link. Two devices at opposite ends of a Link may support different DC common mode voltages. The differential impedance at the receiver is matched with the board impedance to prevent reflections from occurring.

Switches Used to Interconnect Multiple Devices

Switches are implemented in systems requiring multiple devices to be interconnected.

Switches can range from a 2-port device to an n-port device, where each port connects to a PCI Express Link. The specification does not indicate a maximum number of ports a switch can implement. A switch may be incorporated into a Root Complex device Host bridge or North bridge equivalent , resulting in a multi-port root complex.

Figure on page 52 and Figure on page 54 are examples of PCI Express systems showing multi-ported devices such as the root complex or switches. Packets are transmitted and received serially and byte striped across the available Lanes of the Link.

The more Lanes implemented on a Link the faster a packet is transmitted and the greater the bandwidth of the Link.

The packets are used to support the split transaction protocol for non-posted transactions. Various types of packets, such as memory read and write requests, IO read and write requests, configuration read and write requests, message requests, and completions, are defined.

Bandwidth and Clocking

As is apparent from Table on page 14, the aggregate bandwidth achievable with PCI Express is significantly higher than that of any bus available today.

The PCI Express 1.x specification defines a transfer rate of 2.5 Gbits/s per Lane per direction. No clock signal exists on the Link. Each byte of a packet transmitted over the Link is encoded into a 10-bit symbol using 8b/10b encoding. All symbols are guaranteed to contain one-zero transitions. The receiver uses a PLL to recover a clock from the 0-to-1 and 1-to-0 transitions of the incoming bit stream. In addition, the maximum configuration address space per device function is extended from 256 bytes to 4 KBytes.
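As a worked example of the rate and encoding numbers just described, the short C program below computes per-direction and aggregate bandwidth for common link widths from the 2.5 Gbits/s line rate and the 10-bits-per-byte cost of 8b/10b encoding. The program itself is illustrative, not from the book.

```c
#include <stdio.h>

int main(void)
{
    const double line_rate_bps = 2.5e9;             /* Gen1, per lane/direction */
    const double bytes_per_s   = line_rate_bps / 10.0; /* 8b/10b: 10 bits/byte  */
    const int widths[] = {1, 2, 4, 8, 16, 32};
    const int n = sizeof widths / sizeof widths[0];

    for (int i = 0; i < n; i++) {
        int lanes = widths[i];
        /* aggregate = both directions combined */
        printf("x%-2d link: %7.0f MB/s per direction, %7.0f MB/s aggregate\n",
               lanes, lanes * bytes_per_s / 1e6, 2 * lanes * bytes_per_s / 1e6);
    }
    return 0;
}
```

A x1 link thus delivers 250 MB/s per direction (500 MB/s aggregate), and a x32 link delivers 8 GB/s per direction.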

New OS, drivers and applications are required to take advantage of this additional configuration address space. Also, a new messaging transaction and address space provides messaging capability between devices. Some messages are PCI Express standard messages used for error reporting, interrupt and power management messaging. Other messages are vendor defined messages. These transactions are encoded using the packet-based PCI Express protocol described later.

Transactions fall into two categories: those that are non-posted and those that are posted. Non-posted transactions, such as memory reads, implement a split transaction communication model similar to the PCI-X split transaction protocol.

For example, a requester device transmits a non-posted type memory read request packet to a completer. The completer returns a completion packet with the read data to the requester. Posted transactions, such as memory writes, consist of a memory write packet transmitted uni-directionally from requester to completer with no completion packet returned from completer to requester.

Packets transmitted over the Link in error are recognized with a CRC error at the receiver. The transmitter of the packet is notified of the error by the receiver.

The transmitter automatically retries sending the packet (with no software involvement), hopefully resulting in auto-correction of the error. In addition, an optional CRC field within a packet allows for the end-to-end data integrity checking required by high-availability applications.

Error handling on PCI Express can be as rudimentary as PCI level error handling described earlier or can be robust enough for server-level requirements.

A rich set of error logging registers and error reporting mechanisms provides the improved fault isolation and recovery solutions required by RAS (Reliability, Availability, Serviceability) applications. Each packet is also assigned a Traffic Class (TC) that indicates its priority. For example, it may be desirable to ensure that isochronous applications, such as video data packets, move through the fabric with higher priority and guaranteed bandwidth, while control data packets may not have specific bandwidth or latency requirements.

Packets with different TCs can move through the fabric with different priority, resulting in varying performance. These packets are routed through the fabric by utilizing virtual channel (VC) buffers implemented in switches, endpoints and root complex devices.

The TC in each packet is used by the transmitting and receiving ports to determine which VC buffer to drop the packet into. Switches and devices are configured to arbitrate and prioritize between packets from different VCs before forwarding.

This arbitration is referred to as VC arbitration. In addition, packets arriving at different ingress ports are forwarded to their own VC buffers at the egress port. These transactions are prioritized based on the ingress port number when being merged into a common VC output buffer for delivery across the egress link.

This arbitration is referred to as Port arbitration. The result is that packets with different TC numbers could observe different performance when routed through the PCI Express fabric.
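The TC-to-VC mapping idea can be sketched in a few lines of C. Real hardware exposes the mapping through VC capability registers; the particular table below (TC0-TC3 on VC0, TC4-TC7 on VC1) is a made-up example.

```c
#include <stdint.h>

#define NUM_TCS 8

/* Illustrative mapping: lower traffic classes share VC0, higher ones
 * map to a second, higher-priority virtual channel. */
static const uint8_t tc_to_vc[NUM_TCS] = {0, 0, 0, 0, 1, 1, 1, 1};

/* Select the VC buffer a packet is dropped into, per its TC field. */
static uint8_t select_vc_buffer(uint8_t traffic_class)
{
    return tc_to_vc[traffic_class & (NUM_TCS - 1)];
}
```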

Flow Control

A packet transmitted by a device is received into a VC buffer in the receiver at the opposite end of the Link. The receiver periodically updates the transmitter with information regarding the amount of buffer space it has available. The transmitter device will only transmit a packet to the receiver if it knows that the receiving device has sufficient buffer space to hold the next transaction.

The protocol by which the transmitter ensures that the receiving buffer has sufficient space available is referred to as flow control. The flow control mechanism guarantees that a transmitted packet will be accepted by the receiver, barring error conditions.
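The sketch below models that credit-based guarantee: the transmitter sends only when the receiver has advertised enough buffer space, so a packet is never refused barring errors. Counter units and names are simplified assumptions, not the spec's credit types.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified credit state kept by a transmitter for one VC buffer. */
struct fc_state {
    uint32_t credits_advertised; /* cumulative credits granted by receiver */
    uint32_t credits_consumed;   /* cumulative credits used by transmitter */
};

/* Send only if the receiver is known to have room for the packet. */
static bool can_transmit(const struct fc_state *fc, uint32_t packet_credits)
{
    return (fc->credits_advertised - fc->credits_consumed) >= packet_credits;
}

static void on_transmit(struct fc_state *fc, uint32_t packet_credits)
{
    fc->credits_consumed += packet_credits;  /* charged per packet sent   */
}

static void on_fc_update(struct fc_state *fc, uint32_t new_advertised)
{
    fc->credits_advertised = new_advertised; /* receiver's periodic update */
}
```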

As such, the PCI Express transaction protocol does not require support of packet retry (unless an error condition is detected in the receiver), thereby improving the efficiency with which packets are forwarded to a receiver via the Link. PCI Express devices use a memory write packet to transmit an interrupt vector to the root complex (host bridge) device, which in turn interrupts the CPU.

Only endpoint devices that must support legacy functions and PCI Express-to-PCI bridges are allowed to support legacy interrupt generation.

Power Management

The PCI Express fabric consumes less power because the interconnect consists of fewer signals with smaller signal swings. Each device's power state is individually managed. Devices can notify software of their current power state, and power management software can propagate a wake-up event through the fabric to power up a device or group of devices. Devices can also signal a wake-up event using an in-band mechanism or a side-band signal. With no software involvement, devices place a Link into a power savings state after a time-out when they recognize that there are no packets to transmit over the Link.

This capability is referred to as Active State power management. PCI Express supports the device power states D0, D1, D2, D3hot, and D3cold, as well as the Link power states L0, L0s, L1, L2, and L3. Hot plug interrupt messages, communicated in-band to the root complex, trigger hot plug software to detect a hot plug or removal event. Rather than implementing a centralized hot plug controller as exists in PCI platforms, the hot plug controller function is distributed to the port logic associated with each hot-plug-capable port of a switch or root complex.

Updated OSs and device drivers are required to take advantage of and access this additional configuration address space. The PCI Express enhanced configuration mechanism provides access to the additional configuration space beyond the first 256 bytes and up to 4 KBytes per function.

Specifications for these are fully defined. Currently, x1, x4, x8 and x16 PCI-like connectors are defined along with associated peripheral cards. Desktop computers implementing PCI Express can have the same look and feel as current computers, with no changes required to existing system form factors.


The PCI Express Mini Card form factor, as the name implies, is much smaller. This form factor targets the mobile computing market.

Mechanical Form Factors Pending Release

As of May (at the time of writing), specifications for two new form factors had not been released. Below is a summary of publicly available information about these form factors.

There are two sizes defined, a narrower version and a wider version, though the thickness and depth remain the same. A second family of modules targets the workstation and server market. These are designed with future support for larger PCI Express Lane widths and bit rates beyond 2.5 Gbits/s. Four form factors are under consideration: the base module in single- and double-width versions.

Also, the full-height module in single- and double-width versions. A root complex may support one or more PCI Express ports; the root complex in this example supports 3 ports. Each port is connected to an endpoint device or to a switch, which forms a sub-hierarchy. The root complex generates transaction requests on behalf of the CPU.

It is capable of initiating configuration transaction requests on behalf of the CPU. It generates both memory and IO requests, as well as locked transaction requests, on behalf of the CPU.

As a completer, the root complex does not respond to locked requests. The root complex transmits packets out of its ports and receives packets on its ports, which it forwards to memory. A multi-port root complex may also route packets from one port to another port but is NOT required by the specification to do so. The root complex implements central resources such as hot plug, power management, interrupt, and error handling logic. The root complex initializes with a bus number, device number and function number, which are used to form a requester ID or completer ID.

The root complex bus, device and function numbers initialize to all 0s. A Hierarchy is a fabric of all the devices and Links associated with a root complex that are either directly connected to the root complex via its port(s) or indirectly connected via switches and bridges. A Hierarchy Domain is a fabric of devices and Links that are associated with one port of the root complex. For example, in Figure on page 48, there are 3 hierarchy domains. Endpoints are devices other than the root complex and switches that are requesters or completers of PCI Express transactions.

They are peripheral devices such as Ethernet, USB or graphics devices. Endpoints initiate transactions as a requester or respond to transactions as a completer. Two types of endpoints exist, PCI Express endpoints and legacy endpoints. Legacy Endpoints may support IO transactions. They may support locked transaction semantics as a completer but not as a requester.

Interrupt-capable legacy devices may support legacy-style interrupt generation using message requests but must, in addition, support MSI generation using memory write transactions. Legacy devices are not required to support 64-bit memory addressing capability.

PCI Express endpoints must support 64-bit memory addressing capability in prefetchable memory address space, though their non-prefetchable memory address space is permitted to map below the 4 GByte boundary. Both types of endpoints implement Type 0 PCI configuration headers and respond to configuration transactions as completers. Each endpoint is initialized with a device ID. Endpoints are always device 0 on a bus, and an endpoint may implement multiple functions. Root complex and endpoints are requester-type devices.

A Completer is a device addressed or targeted by a requester. A requester reads data from a completer or writes data to a completer. Root complex and endpoints are completer-type devices. A Port is a device's interface to the Link; it consists of differential transmitters and receivers. An Upstream Port is a port that points in the direction of the root complex.

A Downstream Port is a port that points away from the root complex. An endpoint port is an upstream port. Root complex ports are downstream ports. An Ingress Port is a port that receives a packet. An Egress Port is a port that transmits a packet. Within a switch, each virtual bridge implements configuration header 1 registers. Configuration and enumeration software detects and initializes each of the header 1 registers at boot time. A 4-port switch, as shown in Figure on page 48, consists of 4 virtual bridges.

These bridges are internally connected via a non-defined bus.

One port of a switch pointing in the direction of the root complex is an upstream port. All other ports pointing away from the root complex are downstream ports.

A switch forwards packets in a manner similar to PCI bridges, using memory, IO or configuration address based routing. Switches must forward all types of transactions from any ingress port to any egress port. Switches forward these packets based on one of three routing mechanisms: address routing, ID routing, or implicit routing. The logical bridges within the switch implement PCI configuration header 1. The configuration header contains memory and IO base and limit address registers, as well as primary bus number, secondary bus number and subordinate bus number registers.

These registers are used by the switch to aid in packet routing and forwarding. Switches implement two arbitration mechanisms, port arbitration and VC arbitration, by which they determine the priority with which to forward packets from ingress ports to egress ports.
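The address-routing case mentioned above can be sketched directly: each virtual bridge compares a packet's memory address against the base/limit registers of its type 1 header and forwards the packet downstream on a hit. The struct and field names below are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative subset of a virtual bridge's type 1 header registers. */
struct virtual_bridge {
    uint64_t mem_base;   /* start of the downstream memory window */
    uint64_t mem_limit;  /* end of the downstream memory window   */
};

/* True if an address-routed packet should be forwarded downstream
 * through this bridge (i.e., out of the corresponding egress port). */
static bool routes_downstream(const struct virtual_bridge *br, uint64_t addr)
{
    return addr >= br->mem_base && addr <= br->mem_limit;
}
```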

Switches support locked requests. The Links are numbered in a manner similar to the PCI depth-first search enumeration algorithm. An example of the bus numbering is shown in the referenced figure; in other words, each Link is assigned a bus number by the bus enumerating software.

The internal bus within a switch that connects all the virtual bridges together is also numbered. The first Link associated with the root complex is numbered bus 1. Bus 0 is an internal virtual bus within the root complex. Endpoints and PCI/PCI-X devices may implement up to 8 functions per device. Figure on page 52 is a block diagram of a low-cost PCI Express based system.

As of the writing of this book (April), no real-life PCI Express chipset architecture designs had been publicly disclosed. Some of these Links can connect directly to devices on the motherboard, and some can be routed to connectors where peripheral cards are installed.

Figure on page 53 is a block diagram of another low cost PCI Express system. One of these Links connects to a graphics controller. Figure shows a more complex system requiring a large number of devices connected together.

Multi-port switches are a necessary design feature to accomplish this. PCI Express packets can be routed from any device to any other device because switches support peer-to-peer packet routing; only multi-port root complex devices are not required to support peer-to-peer functionality. As of May, the form factor specifications summarized earlier were still pending release. To highlight the advantages of PCI Express, the chapter described performance characteristics and features of predecessor buses such as the PCI and PCI-X buses, with the goal of discussing the evolution of PCI Express from these predecessor buses.

The key features of a PCI Express system were described. The chapter in addition described some examples of PCI Express system topologies. The next chapter describes the layered approach to PCI Express device design while describing the function of each device layer.

Packet types employed in accomplishing data transfers are described without getting into packet content details. Finally, this chapter outlines the process of a requester initiating a transaction such as a memory read to read data from a completer across a Link.

Packets are routed based on a memory address, IO address, device ID or implicitly. A root complex can communicate with an endpoint. An endpoint can communicate with a root complex. An endpoint can communicate with another endpoint. Communication involves the transmission and reception of packets called Transaction Layer packets TLPs.

Transactions are defined as a series of one or more packet transmissions required to complete an information transfer between a requester and a completer. Table is a more detailed list of transactions. These transactions can be categorized into non-posted transactions and posted transactions.

For Non-posted transactions, a requester transmits a TLP request packet to a completer. At a later time, the completer returns a TLP completion packet back to the requester. Non-posted transactions are handled as split transactions similar to the PCI-X split transaction model described on page 37 in Chapter 1. In addition, non-posted read transactions contain data in the completion TLP. Non-Posted write transactions contain data in the write request TLP. For Posted transactions, a requester transmits a TLP request packet to a completer.

Posted transactions are optimized for best performance in completing the transaction, at the expense of the requester not having knowledge of successful reception of the request by the completer. Posted transactions may or may not contain data in the request TLP. These packets are used in the transactions referenced in the table above. Our goal in this section is to describe how these packets are used to complete transactions at a system level, not to describe the packet routing through the PCI Express fabric nor the packet contents in any detail.
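Before walking through the examples, it may help to see the request fields the text keeps referring to (type, requester ID, tag, length, address) gathered in one place. This C struct is a teaching sketch, not the on-the-wire TLP header layout, which packs these fields into bit fields.

```c
#include <stdint.h>

/* Simplified, illustrative view of a request TLP's routing-relevant
 * fields, as used in the walk-throughs that follow. */
struct tlp_request {
    uint8_t  type;         /* e.g., memory read, memory write, IO read */
    uint16_t requester_id; /* bus/device/function of the requester     */
    uint8_t  tag;          /* matches completions to requests          */
    uint16_t length_dw;    /* transfer length in doublewords           */
    uint64_t address;      /* target address for address-routed TLPs   */
};
```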

Figure shows the packets transmitted by a requester and completer to complete a non-posted read transaction. To complete this transfer, a requester transmits a non-posted read request TLP to a completer it intends to read data from. The packet makes its way to a targeted completer.

The completer can be a root complex, switches, bridges or endpoints. When the completer receives the packet and decodes its contents, it gathers the amount of data specified in the request from the targeted address. The completer can return up to 4 KBytes of data per CplD packet. The completion packet contains routing information necessary to route the packet back to the requester. This completion packet travels through the same path and hierarchy of switches as the request packet.

Requesters use a tag field in the completion to associate it with a request TLP of the same tag value transmitted earlier. Use of a tag in the request and completion TLPs allows a requester to manage multiple outstanding transactions. If a completer is unable to obtain requested data as a result of an error, it returns a completion packet without data (Cpl) and an error status indication. The requester determines how to handle the error at the software layer.
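A sketch of how a requester might manage its tag space follows. The 32-entry table mirrors the default limit of 32 outstanding requests (5-bit tags, extended tags disabled); all names are assumptions for illustration.

```c
#include <stdint.h>

#define MAX_OUTSTANDING 32

struct outstanding { uint8_t in_use; uint64_t address; };

static struct outstanding pending[MAX_OUTSTANDING];

/* Reserve a free tag for a new non-posted request; the tag value is
 * placed in the request TLP and echoed back in the completion. */
static int alloc_tag(uint64_t address)
{
    for (int tag = 0; tag < MAX_OUTSTANDING; tag++) {
        if (!pending[tag].in_use) {
            pending[tag].in_use = 1;
            pending[tag].address = address;
            return tag;
        }
    }
    return -1;                 /* throttle: no tags available */
}

/* A received completion consumes the matching tag. */
static void on_completion(uint8_t tag)
{
    pending[tag].in_use = 0;
}
```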

Figure on page 60 shows packets transmitted by a requester and completer to complete a non-posted locked read transaction. The requester can only be a root complex, which initiates a locked request on behalf of the CPU.

Endpoints are not allowed to initiate locked requests. The locked memory read request TLP is routed downstream through the fabric of switches using information in the header portion of the TLP. The completer can only be a legacy endpoint. The entire path from the root complex to the endpoint, for TCs that map to VC0, is locked, including the ingress and egress ports of switches in the pathway. The completion is sent back to the root complex requester via the same path and hierarchy of switches as the original request.

The CplDLk packet contains routing information necessary to route the packet back to the requester. If the completer is unable to obtain the requested data as a result of an error, it returns a completion packet without data (CplLk) with an error status indication within the packet. The requester that receives the error notification via the CplLk TLP must assume that atomicity of the lock is no longer guaranteed, and must determine how to handle the error at the software layer.

The path from requester to completer remains locked until the requester at a later time transmits an unlock message to the completer.

Figure on page 61 shows the packets transmitted by a requester and completer to complete a non-posted write transaction. To complete this transfer, a requester transmits a non-posted write request TLP to a completer it intends to write data to. (Memory write requests and message requests, by contrast, are posted requests.)

Requesters may be a root complex or an endpoint device (though endpoints may not initiate configuration write requests).


A request packet with data is routed through the fabric of switches using information in the header of the packet. The packet makes its way to a completer.

When the completer receives the packet and decodes its contents, it accepts the data. The completer creates a single completion packet without data (Cpl) to confirm reception of the write request.

This is the purpose of the completion. This completion packet will propagate through the same hierarchy of switches that the request packet went through before making its way back to the requester. The requester gets confirmation notification that the write request did make its way successfully to the completer.

If the completer is unable to successfully write the data in the request to the final destination, or if the write request packet reaches the completer in error, then it returns a completion packet without data (Cpl) but with an error status indication. The requester that receives the error notification via the Cpl TLP determines how to handle the error at the software layer.

Memory write requests shown in Figure are posted transactions. This implies that the completer returns no completion notification to inform the requester that the memory write request packet has reached its destination successfully.

No time is wasted in returning a completion; thus back-to-back posted writes complete with higher performance relative to non-posted transactions. The write request packet, which contains data, is routed through the fabric of switches using information in the header portion of the packet. The completer accepts the specified amount of data within the packet, and the transaction is over. If the write request is received by the completer in error, or if the completer is unable to write the posted data to its final destination due to an internal error, the requester is not informed via the hardware protocol.

The completer could log an error and generate an error message notification to the root complex. Error handling software manages the error. Message requests are also posted transactions as pictured in Figure on page Some message requests propagate from requester to completer, some are broadcast requests from the root complex to all endpoints, some are transmitted by an endpoint to the root complex.

Message packets may be routed to completer(s) based on the message's address or device ID, or routed implicitly. Message request routing is covered in Chapter 3. Message request support eliminates the need for side-band signals in a PCI Express system. Messages are used for PCI-style legacy interrupt signaling, power management protocol, error signaling, unlocking a path in the PCI Express fabric, slot power support, hot plug protocol, and vendor-defined purposes.

Some Examples of Transactions

This section describes a few transaction examples showing the packets transmitted between requester and completer to accomplish a transaction. The examples consist of a memory read, an IO write, and a memory write. The figure shows an example of packet routing associated with completing a memory read transaction. The root complex, on behalf of the CPU, initiates a non-posted memory read from the completer endpoint shown.

The root complex transmits an MRd packet which contains, amongst other fields, an address, the TLP type, the requester ID of the root complex, and the length of the transfer in doublewords.

Switch A, which is a 3-port switch, receives the packet on its upstream port. The switch logically appears as three virtual bridges connected by an internal bus. The logical bridges within the switch contain memory and IO base and limit address registers within their configuration space, similar to PCI bridges. The switch internally forwards the MRd packet from the upstream ingress port to the correct downstream port (the left port in this example).

Figure 2 shows the system address map of systems using Intel E chipset.

You can find a complementary address mapping explanation in the Intel E chipset datasheet in Section 4. That said, the amount of memory space used depends on the memory controller in the system, which in this case is located in the northbridge (the Intel E chipset). This implies that the memory range consumed by the PCI devices is relocatable, i.e., not hardwired to fixed addresses. In the—very old—ISA bus, you had to set jumpers on the ISA device to the correct setting; otherwise there would be an address usage conflict in your system.

Some of the special ranges above TOM (top of memory) are hardcoded and cannot be changed, because the CPU reset vector and certain non-CPU chip registers always map to those special memory ranges. These hardcoded memory ranges cannot be used by PCI devices at all. A PCI device must implement PCI configuration registers; otherwise, the device will not be regarded as a valid PCI device. The PCI configuration register space consists of 256 bytes of registers, from byte offset 00h to byte offset FFh.

The 256-byte PCI configuration register space consists of two parts: the first 64 bytes are called the PCI configuration register header, and the rest are called the device-specific PCI configuration registers.

There are two types of PCI configuration register header, a type 0 and a type 1 header. Figure 3 shows the PCI configuration register type 0 header.
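As a concrete illustration of reaching these registers, here is how a dword of configuration space can be read through the legacy 0xCF8/0xCFC mechanism. This sketch assumes x86 Linux with port-IO privileges (iopl); a real driver would use the operating system's PCI API instead.

```c
#include <stdint.h>
#include <sys/io.h>   /* outl/inl; requires iopl(3) or ioperm() first */

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

/* Read one 32-bit dword from a function's configuration space. */
static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = (1u << 31)             /* enable bit            */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (off & 0xFCu);         /* dword-aligned offset  */

    outl(addr, PCI_CONFIG_ADDRESS);
    return inl(PCI_CONFIG_DATA);
}
```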

BARs span six 32-bit registers (24 bytes), from offset 10h to offset 27h in the PCI configuration header type 0. It works like this: different systems can have different main memory (RAM) sizes, and because the BAR contents are modifiable, you can change the memory range occupied by the PCI device memory in the CPU memory space as required. Consider a sample system; we call this the first system configuration from now on. Then take the same system as in point 1, but with an additional RAM module installed; we call this the second system configuration from now on.

Figure 4 shows the system address map for the first system configuration and the system address map for the second system configuration. The added RAM also causes the base address of the AGP video card memory to change; the base address in the second system configuration is higher than in the first.


In the first system configuration, the platform firmware initializes the 32 MB of video memory to be mapped into the range immediately above main memory, because the lower addresses are mapped to RAM. On a memory access, the northbridge logic checks all of its registers related to address mapping to find the device that matches the requested address.

Note that initialization of the four address mapping registers is the job of the platform firmware.

The northbridge forwards the result returned by the video card to the CPU. After this step, the CPU receives the result from the northbridge and the read transaction completes.
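The decode logic just described can be modeled in a few lines of C. The register names and the enum are invented for the sketch; a real northbridge implements this in hardware with chipset-specific registers.

```c
#include <stdint.h>

enum target { TARGET_DRAM, TARGET_AGP_VIDEO, TARGET_PCI };

/* Illustrative subset of the northbridge's address-mapping registers. */
struct nb_map {
    uint64_t tom;                  /* top of main memory (RAM)       */
    uint64_t agp_base, agp_limit;  /* video memory aperture          */
};

/* Decide which device claims a physical address, per the flow above:
 * below TOM goes to DRAM; otherwise consult the mapped PCI ranges. */
static enum target decode(const struct nb_map *m, uint64_t addr)
{
    if (addr < m->tom)
        return TARGET_DRAM;
    if (addr >= m->agp_base && addr <= m->agp_limit)
        return TARGET_AGP_VIDEO;
    return TARGET_PCI;             /* forwarded toward the PCI bus   */
}
```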

If you look at the system address map in Figure 2, you can see two more memory ranges that show up mysteriously. Neither of these memory ranges is accessible in normal operating mode.

This memory range is relocatable, depending on the size of main memory. At this point, everything regarding system address map in a typical Intel E-ICH2 system should be clear.

The one thing remaining to be studied is initialization of the BAR. There are some additional materials in the remaining sections, though; they are all related to the system address map. A BAR can map the device either into memory space or into IO space, and the formats of these two types of BAR are quite different. Figure 6 and Figure 7 show the formats of both types of BAR.

Actually, only the lowest bit matters for differentiating the two; it has a different hardcoded value depending on the BAR type. You can see this difference in Figure 6 and Figure 7.
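Decoding those low bits can be sketched directly in C. The bit assignments below (bit 0: IO vs. memory; bits [2:1]: 32- vs. 64-bit memory type; bit 3: prefetchable) follow the standard BAR format discussed here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit 0 is hardwired: 1 for an IO BAR, 0 for a memory BAR. */
static bool bar_is_io(uint32_t bar)
{
    return (bar & 1u) != 0;
}

/* For memory BARs, bits [2:1] == 10b means a 64-bit BAR pair. */
static bool bar_is_64bit(uint32_t bar)
{
    return !bar_is_io(bar) && ((bar >> 1) & 3u) == 2u;
}

/* For memory BARs, bit 3 is the prefetchable flag. */
static bool bar_is_prefetchable(uint32_t bar)
{
    return !bar_is_io(bar) && (bar & 8u) != 0;
}
```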

This article deals with the memory type of BAR because the focus is on the system address map, particularly the system memory map. This needs a little bit of explanation. The PCI bus protocol actually supports both 32-bit and 64-bit memory space; the discussion here sticks to the 32-bit case. Prefetching in this context means that the CPU fetches the contents of memory addressed by the BAR before a request to that specific memory address is made, i.e., speculatively.

This feature is used to improve the overall PCI device memory read speed. The number of writeable bits in a BAR depends on the size of the memory range required by the PCI device's onboard memory. When the enable bit is set to one, the access is enabled, and when it is set to zero, the access is disabled; you can see the details in the chip datasheet. This shows that, despite the presence of the bus protocol standard, some vendors prefer an approach quite different from what the standard suggests.

However, I think there might be an economic reason to do so. Perhaps, by doing that, Realtek saved a quite sizeable die area in the LAN card chip. This translated into cheaper production costs. The base address bits depend on the size of the memory range required by the PCI device.

The size of the memory range required by a PCI device is calculated from the number of writeable bits in the base address bits part of the BAR.
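The classic sizing probe implied by that statement looks like the sketch below: software writes all 1s to the BAR, reads it back, and the bits the device hard-wires to zero reveal the size. pci_cfg_read32/pci_cfg_write32 are assumed helpers in the spirit of the earlier 0xCF8/0xCFC sketch, and the probe must be done while the device is idle.

```c
#include <stdint.h>

/* Assumed helpers (see the earlier 0xCF8/0xCFC sketch). */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                         uint32_t val);

/* Returns the size in bytes of the range a 32-bit memory BAR requests,
 * derived from the number of writeable bits as described above. */
static uint32_t bar_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_off)
{
    uint32_t saved = pci_cfg_read32(bus, dev, fn, bar_off);

    pci_cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFFu);
    uint32_t probe = pci_cfg_read32(bus, dev, fn, bar_off);
    pci_cfg_write32(bus, dev, fn, bar_off, saved);  /* restore original  */

    probe &= ~0xFu;       /* strip the low attribute bits                */
    return ~probe + 1u;   /* e.g., readback 0xFFFF0000 -> 64 KB range    */
}
```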

