Copyright(c) 2016 - 2022 Intel Corporation

This release includes the native RDMA VMware ESXi Driver for Intel(R) Ethernet Controllers E810-C, E810-XXV and X722 families.

Driver version: 1.4.4.0

Supported ESXi releases: 7.0, 8.0.

========================================================================================================================

Contents:
------------------------------------------------------------------------------------------------------------------------

	- Overview
	- Important Notes
	- Supported Hardware
	- Native Mode Supported Features
	- New Features
	- New Hardware Supported
	- Bug Fixes
	- Known Issues and Workarounds
	- Installation
	- Driver setup on the host


========================================================================================================================

Overview:
------------------------------------------------------------------------------------------------------------------------

	- The ESXi RDMA driver (irdman) enables the RDMA protocol on RDMA-capable Intel NICs in a VMware ESXi environment.

	- Both the RoCEv2 and iWARP RDMA protocols are supported by this driver.
		- Intel(R) Ethernet 800 Series devices support both iWARP and RoCEv2 protocols.
		- Intel(R) Ethernet X722 devices support only iWARP protocol.

	- Intel(R) Ethernet 800 Series and Intel(R) Ethernet X722 each have a corresponding LAN driver that must also
	  be installed:
		- icen for Intel(R) Ethernet 800 Series (minimum required version 1.4.0.20)
		- i40en for Intel(R) Ethernet X722 (minimum required version 1.12.3.0)
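
	  A quick way to check whether an installed LAN driver meets the minimum required version is a
	  dot-version comparison. The sketch below uses plain shell with GNU sort's version sort; the
	  version strings are illustrative sample values, not read from a live host.

```shell
# Hedged sketch: compare an installed LAN driver version against the
# minimum required one using version sort (GNU sort -V).
# Both version strings below are sample values, not taken from a host.
min="1.4.0.20"          # minimum icen version required by this release
installed="1.11.0.50"   # example installed icen version
lowest=$(printf '%s\n%s\n' "$min" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
    echo "icen $installed meets minimum $min"
else
    echo "icen $installed is older than required $min"
fi
```

	  On a live host, the installed version could instead be taken from "esxcli software vib list".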



Important Notes:
------------------------------------------------------------------------------------------------------------------------

	- RDMA is unavailable when ENS (Enhanced Network Stack) mode is active in the LAN driver


	- RDMA capability must be turned on in the LAN driver using the RDMA module parameter; see the
	  "Driver setup on the host" chapter. The change takes effect after a system reboot


	- The irdman driver does not support RoCEv1


	- RoCEv2 is the default RDMA protocol; iWARP can be selected via module parameters


	- iWARP and RoCEv2 can be activated on a per-port basis (though not on the same port at the same time)
	  and work simultaneously in a multi-vmnic environment


	- In RoCEv2 mode, Flow Control is recommended for best performance.
	  If RoCEv2 is on and no flow control is detected (either Link-level Flow Control (LFC) or
	  Priority Flow Control (PFC)), the driver automatically de-tunes itself. This is an intentional
	  design choice that allows RoCEv2 to operate without flow control, but with lower performance.

	  PFC and LFC are mutually exclusive; only one type at a time may be activated on a device.
	  PFC is generally recommended: it has greater flexibility to handle multiple traffic streams
	  and enhanced QoS capabilities. LFC may be used in limited testing situations but is not recommended.

	  For iWARP, flow control is optional but may be beneficial.



Supported Hardware:
------------------------------------------------------------------------------------------------------------------------

	- Intel(R) Ethernet Controllers E810-CAM1
	- Intel(R) Ethernet Controllers E810-CAM2
	- Intel(R) Ethernet Controllers E810-XXVAM2
	- Intel(R) Ethernet Controllers X722



Native Mode Supported Features:
------------------------------------------------------------------------------------------------------------------------

	- iSCSI Extensions for RDMA (iSER) for RoCEv2 (engineering tests with iWARP possible)


	- NVMe over Fabrics (NVMeoF)
	  Note: NVMeoF support is not certified. It has been tested against a Linux SPDK (Storage Performance
	  Development Kit) target.


	- Virtual SAN RDMA (vSAN)


	- Paravirtual RDMA (PVRDMA)



New Features:
------------------------------------------------------------------------------------------------------------------------

	- None



New Hardware Supported:
------------------------------------------------------------------------------------------------------------------------

	- None



Bug Fixes:
------------------------------------------------------------------------------------------------------------------------

	1.4.4.0
	- Fixed a DDP loading scenario with RDMA devices up and running.
	  See the Known Issues section for details.

	1.4.3.0
	- PFC configuration now supports priority 3 and others.
	- Increased stability for traffic with many (>100) QPs involved.
	- Fixed loopback traffic using the same port group.
	- Fixed RDMA traffic between 2 VMs on the same hypervisor.
	- Fixed traffic issues with a non-default CWND parameter and changed
	  the CWND default value to 1024.

	1.3.8.0
	- Fix a race condition between early Queue Pair destruction and completions poll procedures.
	- Simultaneous RDMA driver unload and handling of PF reset procedure are processed correctly and no longer
	  crash the system.
	- Simultaneous change of the MTU on multiple hosts or DCB settings during RDMA traffic will no longer cause an
	  unrecoverable fault.

	1.3.6.0
	- Fixed the firmware update procedure with the NVM Update Tool. The special update procedure provided as a
	  workaround is no longer needed.
	- Improved handling of PF reset during RDMA traffic or state changes

	1.3.4.56
	- Fixed an issue where RDMA traffic would not pass on the second physical interface of the E810 device (PF1)
	  when it was configured with IEEE mode DCB PFC and ETS.

	1.3.4.23
	- Fixed QP creation when MTU was higher than 2048



Known Issues and Workarounds:
------------------------------------------------------------------------------------------------------------------------

	- VLAN priority for Unreliable Datagram (UD) traffic is incorrect if supplied ToS is not set to Priority 0.

	- Running Unreliable Datagram (UD) RDMA mixed traffic with more than 2 QPs may lead to a receiver side UD
	  application hang. To recover, restart the RDMA UD application. This is not expected to impact storage (NVMeoF,
	  iSER, vSAN) applications since they do not rely on UD communication.

	- Loading a DDP package while connected to an iSER/NVMe target might break the connection.
	  Workaround: Close the connection (disconnect from the target) before loading the DDP package.
	  If the connection is still broken, reload the driver.
	  Requires icen 1.11.0.50 or later.

	- Before loading a non-default DDP package, all RDMA traffic must be stopped and the irdman driver must be
	  unloaded on the system. If this recommendation is not followed, the following failures may occur:
	  Loading a DDP package during RDMA traffic may lead to a system hang that requires a server reset to recover.
	  Loading a DDP package with RDMA enabled (without RDMA traffic running) might fail, and the device may
	  become unusable for RDMA traffic until recovered by a reboot.

	  The issue is fixed in irdman 1.4.4.0 with icen 1.11.0.50 or later. The fix involved a LAN-RDMA interface
	  extension. If an older icen driver is used, a warning about the interface mismatch will be displayed;
	  however, other RDMA functionality will not be affected.

	- If a host is equipped with both X722 and E810 cards, unloading one of the LAN drivers (i40en, icen) and
	  trying to use RDMA with the other may result in a PSOD(*).

	- The irdman 1.3.1.19 driver provided with ESXi 7.0 or later may crash with the error "irdma_hw_flush_wqes_callback"
	  after upgrading the icen driver in RoCEv2 mode.
	  Workaround: Ensure the irdman driver is upgraded along with icen.

	- A NIC in passthrough mode will not have RDMA capability when its ports are assigned to different Virtual
	  Machines. NIC ports cannot be split up and assigned to different VMs in passthrough mode; otherwise only
	  the first one will be detected and recognized as an RDMA device.
	  Workaround: To utilize all ports of a device, add them to the same VM.

	- When DCB is configured but no VLAN ID is set explicitly, no 802.1Q tag is generated, which leaves DCB not
	  fully operational during RDMA traffic. Do not use vlan 0 with DCB. In such a scenario the user is
	  informed with a log message: "DCB with vlan 0, no vlan tag present. The connection will not take benefit from DCB"

	- For PVRDMA, when 2 VMs are on the same hypervisor and use the same underlying HCA, a connection cannot
	  be established. It works when the VMs are deployed on different hosts.
	  The issue is fixed in 1.4.3.0 and later.

	- Starting from 1.4.3.0, the PFC configuration is fixed and complies with the requirement to use priority 3.
	  See:
	     https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-B764140D-BCF3-4C99-8169-E5B058757518.html
	  for the recommended PFC configuration.
	  Prior to the fix, priority 4 had to be configured for irdman 1.3.8.0 or earlier, and priority 2 for 1.4.1.0.

	- Simultaneous RDMA driver unload and handling of a PF reset procedure might cause a system crash (a PSOD
	  might appear). The issue is fixed since 1.3.7.0.
	  Workaround: Complete the PF reset procedure before starting to unload the RDMA driver.

	- Simultaneously changing the MTU on multiple hosts or changing DCB settings during RDMA traffic may cause
	  an unrecoverable fault. This issue is fixed since irdman 1.3.6.18.
	  Workaround: For MTU changes, allow at least 1 second between applying new settings, or ensure no traffic
	  is running.
	  For DCB changes, a time span of 30 s is more appropriate, but better results are seen without traffic.

	- The VMware ESXi 7.0 operating system might experience a PSOD during the NVM update process.
	  The issue occurs if the installed RDMA driver is older than 1.3.6.0.
	  Workaround: Unload the RDMA driver before the NVM update process. Alternatively, turn off RDMA in the icen
	  module parameters, reboot the platform, and then start the NVM update process.

	*PSOD - Purple Screen of Death


Installation:
------------------------------------------------------------------------------------------------------------------------

	- The desired irdman package (a file with the .vib extension) has to be copied to the host. Then:
	  [host~:] esxcli software vib install -v /irdman-<version>-<os_version>.vib

	  If the driver was built for a different ESXi version and ESXi warns about
	  acceptance level verification results, the "--no-sig-check" option may be used:
	  [host~:] esxcli software vib install -v /irdman-<version>-<os_version>.vib --no-sig-check

	Notes:
	- To display the current version of the installed icen/i40en/irdman drivers:
	  [host~:] esxcli software vib list | grep "icen\|i40en\|irdman"

	A reboot is needed for the installation to take effect.
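
	The grep filter shown above can be exercised against captured output. Below is a minimal,
	hedged sketch: the vib list lines are made-up sample data, not output from a real host.

```shell
# Hedged sketch: filter driver entries out of sample `esxcli software vib list`
# output. The three lines below are illustrative sample data only.
list='icen    1.11.0.50-1OEM   INT   VMwareCertified
nvme    1.2.0.36-4vmw    VMW   VMwareCertified
irdman  1.4.4.0-1OEM     INT   VMwareCertified'
# Same alternation pattern as the documented command:
printf '%s\n' "$list" | grep "icen\|i40en\|irdman"
```

	Only the icen and irdman lines survive the filter; unrelated vibs (nvme here) are dropped.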



Driver setup on the host:
------------------------------------------------------------------------------------------------------------------------

	- RDMA capability activation
	  To activate RDMA for E810-C/E810-XXV (icen) or X722 (i40en), "RDMA=1" must be set with a module command.
	  The parameter is an array of int; each value corresponds to the next vmrdma device.
	  For example, to activate RDMA on the 1st and 4th devices while 4 devices are present, the RDMA parameter
	  should be set to RDMA="1,0,0,1".

	  If RDMA capability is claimed, the NIC works with the default protocol (RoCEv2 for E810-C/E810-XXV and
	  iWARP for X722) unless the protocol is explicitly set via the ROCE parameter of the irdman module. Note
	  that the ROCE parameter is valid only for E810-C/E810-XXV and is an array of int. See the paragraph
	  "Switching between iWARP and RoCE" for more information.
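
	  The per-device array string can be assembled programmatically. A minimal sketch in plain shell;
	  the enable list "1 0 0 1" is a hypothetical state (enable RDMA on the 1st and 4th devices):

```shell
# Hedged sketch: build the RDMA= array string from a per-device enable list.
# "1 0 0 1" is a hypothetical state: enable RDMA on the 1st and 4th devices.
enable="1 0 0 1"
param=$(printf '%s' "$enable" | tr ' ' ',')
echo "RDMA=\"$param\""
# The resulting string is what would be passed to the LAN driver, e.g.:
#   esxcfg-module -s "RDMA=1,0,0,1" icen
```

	  The printed value, RDMA="1,0,0,1", matches the example in the paragraph above.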

	  - To run RDMA in iWARP mode on all ports of 2-port E810-C/E810-XXV:
	    - Set RDMA parameter on icen
	      [host~:] esxcfg-module -s "RDMA=1,1" icen
	      or
	      [host~:] esxcli system module parameters set -m icen -p "RDMA=1,1"
	    - Set ROCE parameter on irdman (0 means iWARP will be used)
	      [host~:] esxcfg-module -s "ROCE=0,0" irdman
	      or
	      [host~:] esxcli system module parameters set -m irdman -p "ROCE=0,0"

	  - To run RDMA in RoCEv2 mode on all ports of 2-port E810-C/E810-XXV:
	    - Set RDMA parameter on icen
	      [host~:] esxcfg-module -s "RDMA=1,1" icen
	      or
	      [host~:] esxcli system module parameters set -m icen -p "RDMA=1,1"
	    - Set ROCE parameter on irdman
	      [host~:] esxcfg-module -s "ROCE=1,1" irdman
	      or
	      [host~:] esxcli system module parameters set -m irdman -p "ROCE=1,1"

	  - To run RDMA in iWARP mode on all ports of 2-port X722:
	    - Set RDMA parameter on i40en
	      [host~:] esxcfg-module -s "RDMA=1,1" i40en
	      or
	      [host~:] esxcli system module parameters set -m i40en -p "RDMA=1,1"
	    As the X722 works only in iWARP mode, there is no need to change irdman parameters.

	  - Note: Flow Control must be turned on to ensure high performance when working in RoCEv2 mode.
	          For iWARP, flow control is optional but beneficial. See the "Important Notes" section for
	          more information.

	All above commands require a reboot to take effect.


	- To view discovered RDMA devices, use the following command in the CLI:
	  [host~:] esxcli rdma device list
	  Name     Driver  State   *MTU  Speed     Paired Uplink  Description
	  -------  ------  ------  ----  --------  -------------  -----------
	  vmrdma0  irdman  Active  1024  100 Gbps  vmnic4         RDMA Support for uplink vmnic4
	  vmrdma1  irdman  Active  1024  100 Gbps  vmnic5         RDMA Support for uplink vmnic5

	*MTU - The displayed value is valid for RoCEv2; iWARP relies on the vmnic value.
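
	The pairing between vmrdma devices and their uplinks can be extracted from such output.
	A hedged sketch over the sample listing above; the awk field positions assume the column
	layout shown (the speed "100 Gbps" occupies two whitespace-separated fields):

```shell
# Hedged sketch: pair vmrdma devices with their uplinks by parsing sample
# `esxcli rdma device list` output (embedded below for illustration).
out='vmrdma0  irdman  Active  1024  100 Gbps  vmnic4
vmrdma1  irdman  Active  1024  100 Gbps  vmnic5'
# "100 Gbps" is two fields, so the paired uplink is field $7.
printf '%s\n' "$out" | awk '{print $1 " -> " $7}'
```

	This prints one "vmrdmaN -> vmnicM" mapping per device.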


	- Changing irdman settings

		- Displaying irdman information and available parameters:
		  [host~:] esxcfg-module -i irdman
		  esxcfg-module module information
		   input file: /usr/lib/vmware/vmkmod/irdman
		   License: ThirdParty:Intel
		   Version: <version>
		   Name-space:
		   Required name-spaces:
		    com.vmware.vmkapi@v2_6_0_0
		   Parameters:
		    Ackcreds: int
		     ACK credit syndrome: 0 - 31, (default = 1, max = 30, deactivate = 31). RoCE only.
		    Cwnd: int
		     Sender congestion window. 0 - 65535, (default = 1024). RoCE only
		    DebugFlags: int
		     [Obsolete] debug flags: 0 = deactivate (default), 0x7fffffff = all
		    FenceRate: int
		     Read Fence rate. 0 - 255, (default = 0)
		    ROCE: array of int
		     Activate RoCE support 0 = deactivate, 1 = activate, (default = 1)
		    ReducedMaxQpInitRdAtom: int
		     [Obsolete] Activate limit maxQpInitRdAtom to 0x10: 0 = deactivate (default), 1 = activate

		  or

		  [host~:] esxcli system module get -m irdman
		     Module: irdman
		     Module File: /usr/lib/vmware/vmkmod/irdman
		     License: ThirdParty:Intel
		     Version: <version>
		     Build Type: release
		     Provided Namespaces:
		     Required Namespaces: com.vmware.vmkapi@v2_6_0_0
		     Containing VIB: irdman
		     VIB Acceptance Level: certified
		  [host~:] esxcli system module parameters list -m irdman
		     Name                    Type          Value       Description
		     ----------------------  ------------  ----------  -----------
		     Ackcreds                int                       ACK credit syndrome: 0 - 31, (default = 1, max = 30, deactivate = 31). RoCE only.
		     Cwnd                    int                       Sender congestion window. 0 - 65535, (default = 1024). RoCE only
		     DebugFlags              int                       [Obsolete] debug flags: 0 = deactivate (default), 0x7fffffff = all
		     FenceRate               int           0           Read Fence rate. 0 - 255, (default = 0)
		     ROCE                    array of int  0,0         Activate RoCE support 0 = deactivate, 1 = activate, (default = 1)
		     ReducedMaxQpInitRdAtom  int                       [Obsolete] Activate limit maxQpInitRdAtom to 0x10: 0 = deactivate (default), 1 = activate

		- irdman settings can be changed by cli command:
		  [host~:] esxcfg-module -s "PARAM=VALUE" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "PARAM=VALUE"

		- Multiple settings must be set together in a single command:
		  [host~:] esxcfg-module -s "PARAM1=VAL1,VAL2,VAL3 PARAM2=VAL1" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "PARAM1=VAL1,VAL2,VAL3 PARAM2=VAL1"

		- Current settings can be displayed by:
		  [host~:] esxcfg-module -g irdman
		  or
		  [host~:] esxcli system module parameters list -m irdman

	  Parameters must be separated by a blank space.
	  A reboot is needed for changes to take effect.


	- Switching between iWARP and RoCE
		- Displaying current iWARP/RoCEv2 settings (RoCEv2 on two vmnics):
		  [host~:] esxcli rdma device protocol list

		  Device   RoCEv1   RoCEv2  iWARP
		  -------  -------  -------  -----
		  vmrdma0    false     true  false
		  vmrdma1    false     true  false

		- Changing current settings (iWARP on the first vmnic, RoCEv2 on the second one):
		  [host~:] esxcfg-module -s "ROCE=0,1" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "ROCE=0,1"

		  [host~:] esxcli rdma device protocol list
		  Device   RoCEv1   RoCEv2   iWARP
		  -------  -------  -------  -----
		  vmrdma0  false    false    true
		  vmrdma1  false    true     false

		Notes:
		  The ROCE setting is an array of int. Switched-off devices are not taken into account, and X722
		  devices are omitted. It should be set only for devices which support RoCEv2.

		  Sample configuration:
		  Name   Driver
		  -------------
		  vmnic0 i40en (X722)
		  vmnic1 i40en (X722)
		  vmnic2 icen  (E810)
		  vmnic3 icen  (E810)
		  vmnic4 i40en (X722)
		  vmnic5 i40en (X722)

		  [host~:] esxcli rdma device protocol list
		  Device   RoCE v1  RoCE v2  iWARP
		  -------  -------  -------  -----
		  vmrdma0    false    false   true
		  vmrdma1    false    false   true
		  vmrdma2    false    false   true
		  vmrdma3    false    false   true
		  vmrdma4    false    false   true
		  vmrdma5    false    false   true

		  - Attempt to set RoCEv2 on 4th and 5th vmnic
		    [host~:] esxcfg-module -s "ROCE=0,0,0,1,1,0" irdman
		    or
		    [host~:] esxcli system module parameters set -m irdman -p "ROCE=0,0,0,1,1,0"

		    This has no effect, as only the first two indexes are considered because there are only two icen
		    devices in the list. The rest of the values are skipped.

		    [host~:] esxcli rdma device protocol list
		    Device   RoCE v1  RoCE v2  iWARP
		    -------  -------  -------  -----
		    vmrdma0    false    false   true
		    vmrdma1    false    false   true
		    vmrdma2    false    false   true
		    vmrdma3    false    false   true
		    vmrdma4    false    false   true
		    vmrdma5    false    false   true

		  - Activating RoCEv2 only on second port of 2-port NIC.
		    In this example vmrdma2 and vmrdma3 are ports of one NIC.
		    [host~:] esxcfg-module -s "ROCE=0,1" irdman
		    or
		    [host~:] esxcli system module parameters set -m irdman -p "ROCE=0,1"

		    [host~:] esxcli rdma device protocol list
		    Device   RoCE v1  RoCE v2  iWARP
		    -------  -------  -------  -----
		    vmrdma0    false    false   true
		    vmrdma1    false    false   true
		    vmrdma2    false    false   true
		    vmrdma3    false    true    false
		    vmrdma4    false    false   true
		    vmrdma5    false    false   true
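
		  The index mapping described in the notes above (only icen/E810 devices consume a ROCE index)
		  can be sketched in plain shell. The device/driver pairs below mirror the sample configuration
		  from this section; they are illustrative, not read from a host:

```shell
# Hedged sketch: show which vmnic each ROCE array index addresses.
# Only E810 (icen) devices consume an index; X722 (i40en) devices are skipped.
# The device/driver pairs mirror the sample configuration above.
devices="vmnic0:i40en vmnic1:i40en vmnic2:icen vmnic3:icen vmnic4:i40en vmnic5:i40en"
idx=0
for d in $devices; do
    nic=${d%%:*}
    drv=${d#*:}
    if [ "$drv" = "icen" ]; then
        printf 'ROCE index %d -> %s\n' "$idx" "$nic"
        idx=$((idx + 1))
    fi
done
```

		  In this configuration ROCE index 0 maps to vmnic2 and index 1 to vmnic3, which is why
		  ROCE="0,1" leaves vmnic2 in iWARP and switches vmnic3 to RoCEv2.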

	- Flow Control configuration
		- Link-level Flow Control (LFC)
			- Set LFC on both NICs
			  [host~:] esxcli network nic pauseParams set -r true -t true -n vmnic0

			  Verify that flow control was activated:
			  [host~:] esxcli network nic pauseParams list -n vmnic0
			  NIC     Pause Params Supported  Pause RX  Pause TX
			  ------  ----------------------  --------  --------
			  vmnic0                    true      true      true

			  Note: Flow control is inactive on all network interfaces by default.
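
			  Checking the two Pause columns can be automated. A hedged sketch over one sample
			  pauseParams row; the field positions assume the table layout shown above:

```shell
# Hedged sketch: check the Pause RX / Pause TX columns of a sample
# `esxcli network nic pauseParams list` row (illustrative data).
row='vmnic0                    true      true      true'
set -- $row   # $1=NIC, $2=Params Supported, $3=Pause RX, $4=Pause TX
if [ "$3" = "true" ] && [ "$4" = "true" ]; then
    echo "LFC active on $1"
else
    echo "LFC not fully active on $1"
fi
```

			  Both RX and TX pause must be true for LFC to be considered active on the port.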

			- If hosts are connected through a switch, flow control must be activated on the switch ports
		- Priority flow control (PFC)
			- PFC is part of Data Center Bridging (DCB). By default, the E810-C/E810-XXV are in
			  willing mode and will automatically apply the PFC configuration from any link partner
			  (for example a switch) advertising PFC settings. If hosts are connected through a
			  switch, PFC must be configured on the connected switch ports (refer to the switch
			  vendor documentation for details).
			  PFC is not supported by X722 devices. See "Notes" for more information.

			  The PFC configuration can be displayed with the following commands:
			  [host~:] esxcli network nic dcb status get -n vmnic0
			  Nic Name: vmnic0
			  Mode: 2 - CEE Mode
			  Enabled: true
			  Capabilities:
			        Priority Group: true
			        Priority Flow Control: true
			        UP to TC Map: false
			        PG Traffic Classes: 8
			        PFC Traffic Classes: 8
			  PFC Enabled: true
			  PFC Configuration: 0 0 0 1 1 0 0 0
			  IEEE ETS Configuration:
			        Willing Bit In ETS Config TLV: 0
			        Supported Capacity: 0
			        Credit Based Shaper ETS Algorithm Supported:
			        TX Bandwidth Per TC: 0 0 0 0 0 0 0 0
			        RX Bandwidth Per TC: 0 0 0 0 0 0 0 0
			        TSA Assignment Table Per TC: 0 0 0 0 0 0 0 0
			        Priority Assignment Per TC: 0 0 0 0 0 0 0 0
			        Recommended TC Bandwith Per TC: 0 0 0 0 0 0 0 0
			        Recommended TSA Assignment Per TC: 0 0 0 0 0 0 0 0
			        Recommended Priority Assignment Per TC: 0 0 0 0 0 0 0 0
			  IEEE PFC Configuration:
			        Number Of Traffic Classes: 0
			        PFC Configuration: 0 0 0 0 0 0 0 0
			        Macsec Bypass Capability Is Enabled: 0
			        Round Trip Propagation Delay Of Link: 0
			        Sent PFC Frames: 0 0 0 0 0 0 0 0
			        Received PFC Frames: 0 0 0 0 0 0 0 0
			  DCB Apps:
			        App Type: L2 Ethertype
			        Protocol ID: 0x8906
			        User Priority: 0x3

			  As the E810 supports both the IEEE and CEE versions of the DCBX standard, the command
			  above shows data for both modes, even if only part of them is relevant at a given time.

			  For CEE mode, the following data are relevant:
			  [host~:] esxcli network nic dcb status get -n vmnic0
			  Nic Name: vmnic0
			  Mode: 2 - CEE Mode
			  Enabled: true
			  Capabilities:
			        Priority Group: true
			        Priority Flow Control: true
			        UP to TC Map: false
			        PG Traffic Classes: 8
			        PFC Traffic Classes: 8
			  PFC Enabled: true
			  PFC Configuration: 0 0 0 1 1 0 0 0
			  [...]

			  or

			  [host:~] vsish -e get /net/pNics/vmnic0/dcbx/pfcEnabled
			  1
			  [host:~] vsish -e get /net/pNics/vmnic0/dcbx/pfcCfg
			  [0]: 0
			  [1]: 0
			  [2]: 0
			  [3]: 1
			  [4]: 1
			  [5]: 0
			  [6]: 0
			  [7]: 0
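
			  The PFC bitmask ("0 0 0 1 1 0 0 0" in the sample CEE output above, i.e. priorities 3
			  and 4 enabled) can be decoded with a small loop. A hedged sketch in plain shell:

```shell
# Hedged sketch: decode a PFC configuration bitmask into enabled priorities.
# "0 0 0 1 1 0 0 0" matches the sample CEE output above (priorities 3 and 4).
cfg="0 0 0 1 1 0 0 0"
i=0
for bit in $cfg; do
    if [ "$bit" = "1" ]; then
        printf 'PFC enabled for priority %d\n' "$i"
    fi
    i=$((i + 1))
done
```

			  For the sample mask this reports priorities 3 and 4 as PFC-enabled.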

			  For IEEE mode, the following data are relevant:
			  [host~:] esxcli network nic dcb status get -n vmnic0
			  Nic Name: vmnic0
			  Mode: 3 - IEEE Mode
			  [...]
			  IEEE ETS Configuration:
			        Willing Bit In ETS Config TLV: 0
			        Supported Capacity: 8
			        Credit Based Shaper ETS Algorithm Supported: 0x0
			        TX Bandwidth Per TC: 60 40 0 0 0 0 0 0
			        RX Bandwidth Per TC: 60 40 0 0 0 0 0 0
			        TSA Assignment Table Per TC: 2 2 0 0 0 0 0 0
			        Priority Assignment Per TC: 1 1 1 0 1 1 1 1
			        Recommended TC Bandwidth Per TC: 60 40 0 0 0 0 0 0
			        Recommended TSA Assignment Per TC: 2 2 0 0 0 0 0 0
			        Recommended Priority Assignment Per TC: 1 1 1 0 1 1 1 1
			  IEEE PFC Configuration:
			        Number Of Traffic Classes: 8
			        PFC Configuration: 0 0 0 1 0 0 0 0
			        Macsec Bypass Capability Is Enabled: 0
			        Round Trip Propagation Delay Of Link: 0
			        Sent PFC Frames: 0 0 0 0 0 0 0 0
			        Received PFC Frames: 0 0 0 0 0 0 0 0

			  or

			  [host:~] vsish -e get /net/pNics/vmnic4/dcbx/IEEEPfcCfg
			  DCBX IEEE Priority Flow Control settings {
			    number of traffic classes:0x08
			    pfc enabled traffic classes:0x08
			    macsec bypass capability is enabled:0x00
			    round-trip propagation delay of link:0x00
			    count of the sent pfc frames:[0]: 0
			    [1]: 0
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    count of the received pfc frames:[0]: 0x00
			    [1]: 0x00
			    [2]: 0x00
			    [3]: 0x00
			    [4]: 0x00
			    [5]: 0x00
			    [6]: 0x00
			    [7]: 0x00
			  }

			  [host:~] vsish -e get /net/pNics/vmnic4/dcbx/IEEEEtsCfg
			  DCBX IEEE Enhanced Transmission Selection settings {
			    willing bit in ETS config TLV:0x00
			    supported capacity of ets feature:0x08
			    credit based shaper ets algorithm supported:0x00
			    tc tx bandwidth indexed by traffic class:[0]: 60
			    [1]: 40
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    tc rx bandwidth indexed by traffic class:[0]: 60
			    [1]: 40
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    TSA Assignment table per traffic class:[0]: 2
			    [1]: 2
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    priority assignment table mapping per traffic class:[0]: 1
			    [1]: 1
			    [2]: 1
			    [3]: 0
			    [4]: 1
			    [5]: 1
			    [6]: 1
			    [7]: 1
			    recommended tc bandwidth per traffic class:[0]: 60
			    [1]: 40
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    recommended TSA assignment per traffic class:[0]: 2
			    [1]: 2
			    [2]: 0
			    [3]: 0
			    [4]: 0
			    [5]: 0
			    [6]: 0
			    [7]: 0
			    recommended priority assignment per traffic class:[0]: 1
			    [1]: 1
			    [2]: 1
			    [3]: 0
			    [4]: 1
			    [5]: 1
			    [6]: 1
			    [7]: 1


			Notes:
			- VLAN tagging is required to configure PFC, as the priority is determined by the 3-bit
			  802.1p Priority Code Point (PCP) field in a frame's VLAN tag.

			- DCB, and thus PFC, is not supported by the X722 LAN driver.
			  Checking the DCB status of an X722 device returns the following message:
			  [host~:] esxcli network nic dcb status get -n vmnic0
			  DCB not supported for NIC vmnic0: Instance(vmnic0) Input(): Not supported: VSI node (591:VSI_NODE_net_pNics_dcbx_dcbMode)

			  Lists of supported LAN features are available in the i40en (X722) and icen (E810) Release Notes documents.


			See: Network Requirements for RDMA over Converged Ethernet doc from VMware
			     https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-E4ECDD76-75D6-4974-A225-04D5D117A9CF.html

	- [Obsolete] Turning on debug flags
		- The DebugFlags setting allows catching more verbose logs from irdman and other modules.
		  Activate debug flags:
		  [host~:] esxcfg-module -s "DebugFlags=0x7001" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "DebugFlags=0x7001"

		Notes:
		- By default, DebugFlags is switched off.
		- This option is obsolete and has been deactivated.
		- This option is planned to be removed in future releases.

		- Max DebugFlags setting is 0x7fffffff. DebugFlags value is a bit sum of different debug flags.
		  ERR = 0x00000001
		  CM  = 0x00000008
		  ILQ = 0x00000040
		  IEQ = 0x00000080
		  WQE = 0x00001000
		  AEQ = 0x00002000
		  CQP = 0x00004000
		  DCB = 0x00040000
		  WS  = 0x02000000

		- In case of issues that must be consulted with Intel engineers,
		  setting 0x7001 (sum of ERR, WQE, AEQ, CQP) should be most helpful.
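
		  The recommended value is simply the bit sum of the flags listed above; it can be
		  verified with shell arithmetic:

```shell
# Hedged sketch: compute the recommended DebugFlags value as the bit sum
# of ERR, WQE, AEQ and CQP from the flag table above.
ERR=0x00000001
WQE=0x00001000
AEQ=0x00002000
CQP=0x00004000
printf 'DebugFlags=0x%x\n' $(( ERR + WQE + AEQ + CQP ))
```

		  This prints DebugFlags=0x7001, matching the recommended setting.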

	- Flexible debug log verbosity
		- At runtime, the user can set the log level for each log component independently.
		  The driver has the following log components:
		  irdman, ERR, INIT, DEV, CM, VERBS, PUDA, ILQ, IEQ, QP, CQ, MR, PBLE,
		  WQE, AEQ, CQP, HMC, USER, VIRT, DCB, CQE, CLNT, WS, STATS

		- The log level can be set from 0 (most urgent) to 4 (most verbose)

		- The default log level for each component is 0

		- Loglevel can be changed by vsish command:
		  [host~:] vsish -e set /system/modules/irdman/loglevels/irdman 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/VERBS 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/QP 1

		- The current log level can be read with the command:
		  [host~:] vsish -e get /system/modules/irdman/loglevels/irdman

		  logLevel {
		  current:0
		  default:0
		  }

		  In case of issues that must be consulted with Intel, the following log components
		  should be most helpful:
		  [host~:] vsish -e set /system/modules/irdman/loglevels/irdman 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/AEQ 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/ERR 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/WQE 2
		  [host~:] vsish -e set /system/modules/irdman/loglevels/CQP 2
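
		  The five commands above can be generated from a component list. A hedged sketch that
		  only prints the commands (it does not execute vsish, so it is safe to run anywhere):

```shell
# Hedged sketch: generate the vsish commands recommended for Intel debug
# collection. This only prints the commands; it does not execute vsish.
for comp in irdman AEQ ERR WQE CQP; do
    printf 'vsish -e set /system/modules/irdman/loglevels/%s 2\n' "$comp"
done
```

		  On a live ESXi host, the printed lines could be piped to a shell to apply them.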

	- Cwnd
		- The sender congestion window can be set from 0 to 65535, with 1024 being the default.
		  This setting applies only to RoCEv2 mode and has no effect in iWARP.
		  It corresponds to the amount of data the sender can send before receiving
		  notification of arrival on the remote side. A higher number has a positive impact
		  on general performance; lower numbers improve stability, especially under very
		  high traffic.
		  [host~:] esxcfg-module -s "Cwnd=1024" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "Cwnd=1024"

	- FenceRate
		- Applies read fence mechanism for every n-th subsequent WR. 0 - OFF, 1 - ON.
		  Default value is 0. Max value is 255.
		  Note: Low value (other than 0) may have significant impact on performance.
		  [host~:] esxcfg-module -s "FenceRate=1" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "FenceRate=1"

	- Ackcreds
		- ACK credit syndrome field. The accepted values are between 0 and 31, with 1 being
		  the default and 31 turning the feature off. This setting applies only to
		  RoCEv2 mode and has no effect in iWARP. If the user supplies a value larger
		  than 31, it is reduced to 30. A higher number has a positive impact on general
		  performance; lower numbers improve stability, especially under very
		  high traffic.
		  [host~:] esxcfg-module -s "Ackcreds=30" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "Ackcreds=30"

	- [Obsolete] ReducedMaxQpInitRdAtom
		- The max depth of the inbound RDMA queue can be limited to 0x10 by setting ReducedMaxQpInitRdAtom:
		  [host~:] esxcfg-module -s "ReducedMaxQpInitRdAtom=1" irdman
		  or
		  [host~:] esxcli system module parameters set -m irdman -p "ReducedMaxQpInitRdAtom=1"

		Notes:
		- This parameter was introduced for compatibility with different cards and applications.
		  A mismatch or failure in negotiating the inbound RDMA queue depth may lead to connection
		  establishment issues.
		- ReducedMaxQpInitRdAtom is inactive by default.



========================================================================================================================

Previously Released Versions:
------------------------------------------------------------------------------------------------------------------------

- Driver Version 1.4.3.0
                 1.4.1.0
                 1.3.8.0
                 1.3.6.0
                 1.3.4.23
                 1.3.3.7
