1 Introduction
1.1 Overview
NVM Express (NVMe) is an interface that allows host software to communicate with a non-volatile memory
subsystem. This interface is optimized for Enterprise and Client solid state drives, typically attached as a
register level interface to the PCI Express interface.
Note: During development, this specification was referred to as Enterprise NVMHCI. However, the name
was modified to NVM Express prior to specification completion. This interface is targeted for use in both
Client and Enterprise systems.
For an overview of changes from revision 1.2.1 to revision 1.3, refer to nvmexpress.org/changes for a
document that describes the new features, including mandatory requirements for a controller to comply with
revision 1.3.
1.1.1 NVMe over PCIe and NVMe over Fabrics
NVM Express 1.3 and prior revisions define a register level interface for host software to communicate with
a non-volatile memory subsystem over PCI Express (NVMe over PCIe). The NVMe over Fabrics
specification defines a protocol interface and related extensions to NVMe that enable operation over other
interconnects (e.g., Ethernet, InfiniBand™, Fibre Channel). The NVMe over Fabrics specification has an
NVMe Transport binding for each NVMe Transport (either within that specification or by reference).
In this specification a requirement/feature may be documented as specific to NVMe over Fabrics or to a
particular NVMe Transport binding. In addition, support requirements for features and functionality may
differ between NVMe over PCIe and NVMe over Fabrics.
To comply with NVM Express 1.2.1, a controller shall support the NVM Subsystem NVMe Qualified Name
in the Identify Controller data structure in Figure 109.
1.2 Scope
The specification defines a register interface for communication with a non-volatile memory subsystem. It
also defines a standard command set for use with the NVM subsystem.
1.3 Outside of Scope
The register interface and command set are specified apart from any usage model for the NVM; this
specification defines only the communication interface to the NVM subsystem. Thus, this specification does not specify
whether the non-volatile memory system is used as a solid state drive, a main memory, a cache memory, a
backup memory, a redundant memory, etc. Specific usage models are outside the scope, optional, and
not licensed.
This interface is specified above any non-volatile memory management, like wear leveling. Erases and
other management tasks for NVM technologies like NAND are abstracted.
This specification does not contain any information on caching algorithms or techniques.
The implementation or use of other published specifications referred to in this specification, even if required
for compliance with this specification, is outside the scope of this specification (for example, PCI, PCI
Express, and PCI-X).
1.4 Theory of Operation
NVM Express is a scalable host controller interface designed to address the needs of Enterprise and Client
systems that utilize PCI Express based solid state drives. The interface provides optimized command
submission and completion paths. It includes support for parallel operation by supporting up to 65,535 I/O
Queues with up to 64K outstanding commands per I/O Queue. Additionally, support has been added for
many Enterprise capabilities like end-to-end data protection (compatible with SCSI Protection Information,
commonly known as T10 DIF, and SNIA DIX standards), enhanced error reporting, and virtualization.
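For illustration, end-to-end data protection appends an 8-byte Protection Information field to each logical block with the same layout as T10 DIF: a CRC-16 Guard computed over the block data, an Application Tag, and a Reference Tag. The C sketch below is illustrative only; the struct name is hypothetical, and the normative field definitions appear in the end-to-end data protection section of this specification.

#include <stdint.h>

/* Illustrative layout of the 8-byte Protection Information tuple used by
 * end-to-end data protection (T10 DIF compatible). Sketch only; refer to
 * the end-to-end data protection section for the normative definition. */
struct nvme_prot_info {
    uint16_t guard;    /* Guard: CRC-16 computed over the logical block data */
    uint16_t app_tag;  /* Application Tag (not interpreted by the controller except for checking) */
    uint32_t ref_tag;  /* Reference Tag (e.g., initialized from the LBA for Type 1 protection) */
};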
The interface has the following key attributes:
• Does not require uncacheable / MMIO register reads in the command submission or completion path.
• A maximum of one MMIO register write is necessary in the command submission path.
• Support for up to 65,535 I/O queues, with each I/O queue supporting up to 64K outstanding commands.
• Priority associated with each I/O queue with a well-defined arbitration mechanism.
• All information to complete a 4KB read request is included in the 64B command itself, ensuring efficient small I/O operation (see the sketch following this list).
• Efficient and streamlined command set.
• Support for MSI/MSI-X and interrupt aggregation.
• Support for multiple namespaces.
• Efficient support for I/O virtualization architectures like SR-IOV.
• Robust error reporting and management capabilities.
• Support for multi-path I/O and namespace sharing.
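As an illustration of the 64B command attribute above, the sketch below builds a 64-byte NVM Read command. Field placement (opcode and command identifier in Dword 0, namespace ID in Dword 1, PRP entries in Dwords 6-9, starting LBA in Dwords 10-11, 0's-based block count in Dword 12) follows the NVM Command Set definition later in this specification; the helper name and the assumption of a page-aligned 4KB data buffer (so only PRP Entry 1 is needed) are illustrative.

#include <stdint.h>

/* 64-byte submission queue entry viewed as 16 dwords (sketch only). */
struct nvme_sqe { uint32_t dw[16]; };

/* Build an NVM Read command. For a 4KB, page-aligned transfer, PRP Entry 1
 * alone describes the data buffer; PRP Entry 2 (Dwords 8-9) would be used
 * if the transfer crossed a memory page boundary. */
static void build_read(struct nvme_sqe *sqe, uint16_t cid, uint32_t nsid,
                       uint64_t prp1, uint64_t slba, uint16_t nblocks)
{
    for (int i = 0; i < 16; i++) sqe->dw[i] = 0;
    sqe->dw[0]  = 0x02u | ((uint32_t)cid << 16);   /* opcode 02h (Read), command identifier */
    sqe->dw[1]  = nsid;                            /* namespace ID */
    sqe->dw[6]  = (uint32_t)prp1;                  /* PRP Entry 1: data buffer address */
    sqe->dw[7]  = (uint32_t)(prp1 >> 32);
    sqe->dw[10] = (uint32_t)slba;                  /* Starting LBA (lower dword) */
    sqe->dw[11] = (uint32_t)(slba >> 32);          /* Starting LBA (upper dword) */
    sqe->dw[12] = (uint32_t)(nblocks - 1);         /* Number of Logical Blocks, 0's based */
}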
This specification defines a streamlined set of registers whose functionality includes:
• Indication of controller capabilities
• Status for controller failures (command status is processed via CQ directly)
• Admin Queue configuration (I/O Queue configuration processed via Admin commands)
• Doorbell registers for scalable number of Submission and Completion Queues
An NVM Express controller is associated with a single PCI Function. The capabilities and settings that
apply to the entire controller are indicated in the Controller Capabilities (CAP) register and the Identify
Controller data structure.
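For illustration, the sketch below models the beginning of the memory-mapped controller register space in C. The offsets shown (CAP at 00h, VS at 08h, CC at 14h, CSTS at 1Ch, AQA at 24h, ASQ at 28h, ACQ at 30h, and doorbell registers beginning at offset 1000h) follow the controller register definitions later in this specification; the struct and helper names are not part of the specification.

#include <stdint.h>

/* Illustrative layout of the start of the controller register space.
 * Sketch only; see the controller registers section for normative offsets. */
struct nvme_ctrl_regs {
    uint64_t cap;      /* 00h: Controller Capabilities */
    uint32_t vs;       /* 08h: Version */
    uint32_t intms;    /* 0Ch: Interrupt Mask Set */
    uint32_t intmc;    /* 10h: Interrupt Mask Clear */
    uint32_t cc;       /* 14h: Controller Configuration */
    uint32_t rsvd;     /* 18h: Reserved */
    uint32_t csts;     /* 1Ch: Controller Status */
    uint32_t nssr;     /* 20h: NVM Subsystem Reset (optional) */
    uint32_t aqa;      /* 24h: Admin Queue Attributes */
    uint64_t asq;      /* 28h: Admin Submission Queue Base Address */
    uint64_t acq;      /* 30h: Admin Completion Queue Base Address */
    /* ... remaining registers, then Submission Queue Tail and Completion
     * Queue Head doorbell registers beginning at offset 1000h */
};

/* Example: extract two CAP fields a host driver typically reads first. */
static inline uint16_t cap_mqes(uint64_t cap)  { return (uint16_t)(cap & 0xFFFF); }      /* Maximum Queue Entries Supported (0's based) */
static inline uint8_t  cap_dstrd(uint64_t cap) { return (uint8_t)((cap >> 32) & 0xF); }  /* Doorbell Stride */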
A namespace is a quantity of non-volatile memory that may be formatted into logical blocks. An NVM
Express controller may support multiple namespaces that are referenced using a namespace ID.
Namespaces may be created and deleted using the Namespace Management and Namespace Attachment
commands. The Identify Namespace data structure indicates capabilities and settings that are specific to a
particular namespace. The capabilities and settings that are common to all namespaces are reported by
the Identify Namespace data structure for namespace ID FFFFFFFFh.
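For illustration, the sketch below models the first fields of the Identify Namespace data structure as a host might declare them in C; the same structure is returned whether the command specifies a particular namespace ID or FFFFFFFFh for the capabilities and settings common to all namespaces. The struct name is hypothetical and the layout is truncated; the normative definition appears with the Identify command.

#include <stdint.h>

/* Illustrative start of the Identify Namespace data structure. Sketch only;
 * see the Identify Namespace definition for the full normative layout. */
struct nvme_id_ns_head {
    uint64_t nsze;   /* Namespace Size: total number of logical blocks in the namespace */
    uint64_t ncap;   /* Namespace Capacity: logical blocks that may be allocated at any time */
    uint64_t nuse;   /* Namespace Utilization: logical blocks currently allocated */
    /* ... remaining fields (namespace features, LBA formats, etc.) */
};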
NVM Express is based on a paired Submission and Completion Queue mechanism. Commands are placed
by host software into a Submission Queue. Completions are placed into the associated Completion Queue
by the controller. Multiple Submission Queues may utilize the same Completion Queue. Submission and
Completion Queues are allocated in memory.
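The sketch below illustrates this submission/completion flow under assumed helper definitions: a 64-byte submission queue entry, a 16-byte completion queue entry containing a Phase Tag bit, and a doorbell write as the single MMIO access in the submission path. The names and queue bookkeeping are hypothetical; only the mechanism itself (copy the command, advance the tail, write the SQ Tail doorbell, then consume completions by watching the Phase Tag) reflects this specification.

#include <stdint.h>
#include <string.h>

/* Hypothetical in-memory queue pair; entry sizes follow the specification
 * (64-byte submission entries, 16-byte completion entries). */
struct nvme_sqe { uint32_t dw[16]; };
struct nvme_cqe { uint32_t dw0, dw1; uint16_t sq_head, sq_id, cid, status; }; /* status holds Phase Tag in bit 0 */

struct nvme_queue_pair {
    struct nvme_sqe   *sq;         /* Submission Queue in host memory */
    struct nvme_cqe   *cq;         /* Completion Queue in host memory */
    volatile uint32_t *sq_tail_db; /* SQ Tail doorbell register (MMIO) */
    volatile uint32_t *cq_head_db; /* CQ Head doorbell register (MMIO) */
    uint16_t sq_tail, cq_head, depth;
    uint8_t  phase;                /* expected Phase Tag; starts at 1 after queue creation */
};

/* Submit one command: copy the 64B entry, advance the tail, and perform the
 * single MMIO write in the submission path (the SQ Tail doorbell). */
static void nvme_submit(struct nvme_queue_pair *qp, const struct nvme_sqe *cmd)
{
    memcpy(&qp->sq[qp->sq_tail], cmd, sizeof(*cmd));
    qp->sq_tail = (uint16_t)((qp->sq_tail + 1) % qp->depth);
    *qp->sq_tail_db = qp->sq_tail;             /* one MMIO write, no MMIO reads */
}

/* Consume one completion: the controller inverts the Phase Tag each pass
 * through the queue, so new entries are detected without any MMIO read.
 * Returns 1 if a completion was consumed. */
static int nvme_poll(struct nvme_queue_pair *qp, struct nvme_cqe *out)
{
    struct nvme_cqe cqe = qp->cq[qp->cq_head];
    if ((cqe.status & 1) != qp->phase)
        return 0;                               /* no new completion yet */
    *out = cqe;
    qp->cq_head = (uint16_t)((qp->cq_head + 1) % qp->depth);
    if (qp->cq_head == 0)
        qp->phase ^= 1;                         /* wrapped: expected Phase Tag inverts */
    *qp->cq_head_db = qp->cq_head;              /* release the entry back to the controller */
    return 1;
}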
An Admin Submission Queue and associated Completion Queue exist for the purpose of controller management
and control (e.g., creation and deletion of I/O Submission and Completion Queues, aborting commands,
etc.). Only commands that are part of the Admin Command Set may be submitted to the Admin Submission
Queue.
An I/O Command Set is used with an I/O queue pair. This specification defines one I/O Command Set,
named the NVM Command Set. The host selects one I/O Command Set that is used for all I/O queue
pairs.
Host software creates queues, up to the maximum supported by the controller. Typically the number of
command queues created is based on the system configuration and anticipated workload. For example,
on a four core processor based system, there may be a queue pair per core to avoid locking and ensure
data structures are created in the appropriate processor core’s cache. Figure 1 provides a graphical
representation of the queue pair mechanism, showing a 1:1 mapping between Submission Queues and
Completion Queues. Figure 2 shows an example where multiple I/O Submission Queues utilize the same
I/O Completion Queue on Core B. Figure 1 and Figure 2 show that there is always a 1:1 mapping between
the Admin Submission Queue and Admin Completion Queue.
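The sketch below illustrates how a host might create one I/O queue pair per core, as in the four-core example above, using the Create I/O Completion Queue (Admin opcode 05h) and Create I/O Submission Queue (Admin opcode 01h) commands. The Completion Queue is created first because the Submission Queue must name an existing CQ in Dword 11. The admin_submit_sync() helper, queue IDs, sizes, buffer addresses, and interrupt vectors are all hypothetical; only the command opcodes and dword fields follow this specification.

#include <stdint.h>

struct nvme_sqe { uint32_t dw[16]; };   /* 64-byte submission queue entry */

/* Hypothetical helper: submits an Admin command (assigning a command
 * identifier) and waits for its completion. Its implementation is outside
 * the scope of this sketch. */
int admin_submit_sync(struct nvme_sqe *cmd);

/* Create one I/O queue pair per core with a 1:1 SQ-to-CQ mapping. */
int create_per_core_queues(unsigned cores, uint16_t qsize,
                           uint64_t cq_base[], uint64_t sq_base[])
{
    for (unsigned core = 0; core < cores; core++) {
        uint16_t qid = (uint16_t)(core + 1);    /* queue ID 0 is reserved for the Admin queue */
        struct nvme_sqe cq = {0}, sq = {0};

        cq.dw[0]  = 0x05;                       /* Create I/O Completion Queue */
        cq.dw[6]  = (uint32_t)cq_base[core];    /* PRP Entry 1: queue memory */
        cq.dw[7]  = (uint32_t)(cq_base[core] >> 32);
        cq.dw[10] = ((uint32_t)(qsize - 1) << 16) | qid;  /* 0's-based queue size, queue ID */
        cq.dw[11] = ((uint32_t)core << 16) | 0x3;         /* interrupt vector; interrupts enabled;
                                                             physically contiguous */
        if (admin_submit_sync(&cq) != 0)
            return -1;

        sq.dw[0]  = 0x01;                       /* Create I/O Submission Queue */
        sq.dw[6]  = (uint32_t)sq_base[core];
        sq.dw[7]  = (uint32_t)(sq_base[core] >> 32);
        sq.dw[10] = ((uint32_t)(qsize - 1) << 16) | qid;
        sq.dw[11] = ((uint32_t)qid << 16) | 0x1;          /* completions go to CQ 'qid' (1:1 mapping);
                                                             physically contiguous */
        if (admin_submit_sync(&sq) != 0)
            return -1;
    }
    return 0;
}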