Copyright (c) 2013 - 2024 Intel Corporation

This release includes the native ixgben VMware ESXi driver for the Intel(R) Ethernet
Controller 82599, x520, x540, x550, x552, x553, and E610 families

Driver version: 1.19.3.0

Supported ESXi release: 8.0

=================================================================================

Contents
--------

- Important Notes
- Supported Features
- New Features
- New Hardware Supported
- Bug Fixes
- Known Issues and Workarounds
- Command Line Parameters
- Previously Released Versions

=================================================================================

Important Notes:
----------------

- VMware vSphere Hypervisor (ESXi) 8.0 Support:

   Added VMware vSphere Hypervisor (ESXi) 8.0 support.

   Consult VMware vSphere Hypervisor (ESXi) 8.0 release notes for a complete
   list of new features and changes. To avoid serious problems, carefully review
   VMware hardware requirements before installing or upgrading to VMware vSphere
   Hypervisor (ESXi) 8.0.

- Upgrading from VMware vSphere Hypervisor (ESXi) 7.0 to 8.0:

   Uninstall device drivers for Intel Ethernet Adapters from VMware vSphere
   Hypervisor (ESXi) 7.0 host prior to starting VMware vSphere Hypervisor
   (ESXi) 8.0 upgrade process (failing to do so causes device drivers to stop
   loading on VMware vSphere Hypervisor (ESXi) 8.0). Upgrade to VMware vSphere
   Hypervisor (ESXi) 8.0. Install device drivers compiled with VMware vSphere
   Hypervisor (ESXi) 8.0 DDK.

   To download device drivers for VMware vSphere Hypervisor (ESXi) 8.0, visit
   the VMware VCG download site at:

   https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io.

- VMware vSphere Hypervisor (ESXi) 7.0 Support:

   Added VMware vSphere Hypervisor (ESXi) 7.0 support. VMware vSphere
   Hypervisor (ESXi) 7.0 introduces changes related to:

   - How SR-IOV VFs are created.
   - How driver module parameters function.
   - How driver modules are upgraded.

   Consult VMware vSphere Hypervisor (ESXi) 7.0 release notes for a complete
   list of new features and changes. To avoid serious problems, carefully review
   VMware hardware requirements before installing or upgrading to VMware vSphere
   Hypervisor (ESXi) 7.0.

- Upgrading from VMware vSphere Hypervisor (ESXi) 6.5 to 7.0:

   Uninstall device drivers for Intel Ethernet Adapters from VMware vSphere
   Hypervisor (ESXi) 6.5 host prior to starting VMware vSphere Hypervisor
   (ESXi) 7.0 upgrade process (failing to do so causes device drivers to stop
   loading on VMware vSphere Hypervisor (ESXi) 7.0). Upgrade to VMware vSphere
   Hypervisor (ESXi) 7.0. Install device drivers compiled with VMware vSphere
   Hypervisor (ESXi) 7.0 DDK.

   To download device drivers for VMware vSphere Hypervisor (ESXi) 7.0, visit
   the VMware VCG download site at:

   https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io.

- SR-IOV Virtual Function (VF) Creation:

   VMware vSphere Hypervisor (ESXi) 7.0 WebGUI allows users to instantiate
   VFs for each Network Adapter port. Creating VFs using VMware vSphere
   Hypervisor (ESXi) 7.0 WebGUI triggers immediate device driver reload that
   removes other device driver settings, like LLDP, RSS, VMDQ, etc. that
   might be enabled. This might cause loss of network connectivity. The
   device driver might fail to reload if SR-IOV VFs are configured and at
   least one VF is assigned to an active VM. Reboot the server after
   creating VFs to avoid this scenario. VMware vSphere Hypervisor (ESXi)
   7.0 ignores VF creation using "max_vfs" module parameter if the VFs are
   created using VMware vSphere Hypervisor (ESXi) 7.0 WebGUI. For more
   details, see:
   https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html
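
   After creating VFs, the host's SR-IOV state can be inspected from the ESXi
   shell. The commands below are standard esxcli subcommands; "vmnic1" is a
   placeholder for the uplink in question:

   ```shell
   # List SR-IOV capable NICs and how many VFs are enabled on each
   esxcli network sriovnic list

   # List the VFs created on a specific uplink, including which are active
   esxcli network sriovnic vf list -n vmnic1
   ```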

- Receive Side Scaling (RSS):
   To enable parallel traffic processing, RSS allows the driver to spread
   ingress network traffic across multiple receive queues, each associated
   with an individual CPU. RSS is enabled by default and can be managed using
   the "esxcli system module parameters" command with the RSS, DRSS, and
   DevRSS parameters. If only DRSS is configured, RSS is disabled despite
   normally having higher priority. If RSS and DRSS are configured at the
   same time, NetQ RSS takes precedence over DefQ RSS, since those flavors
   are mutually exclusive. Once DevRSS is set, the other RSS-related
   configuration is ignored and virtualization (VMDQ / SR-IOV) is disabled
   on the PF. If none of the RSS-related parameters is set explicitly
   (default load), RSS is enabled.

   NOTE: A reboot is needed after setting the RSS mode.
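
   The RSS flavors are selected through the same module-parameter interface
   shown in the "Command Line Parameters" section. A hedged sketch follows;
   the queue counts are illustrative only, with one value per port as with
   VMDQ:

   ```shell
   # Enable NetQueue RSS with 4 queues on each of two ports (illustrative values)
   esxcli system module parameters set -a -m ixgben -p RSS=4,4

   # Alternatively, configure Default Queue RSS only; per the note above,
   # NetQ RSS is then disabled
   esxcli system module parameters set -a -m ixgben -p DRSS=4,4

   # Confirm the resulting settings, then reboot for the RSS mode to take effect
   esxcli system module parameters list -m ixgben
   ```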

- Recovery Mode:
   A device enters recovery mode if its NVM becomes corrupted.
   If a device enters recovery mode because of an interrupted NVM update,
   attempt to finish the update.
   If the device is in recovery mode because of a corrupted NVM, use
   the nvmupdate utility to reset the NVM back to factory defaults.

   NOTE: You must power cycle your system after using Recovery Mode
   to completely reset the firmware and hardware.

- Backplane devices:
   Backplane devices operate in auto mode only; the user cannot manually
   change speed settings.

- Supported VF drivers
   Due to Mailbox API changes, the minimum supported ixgbevf driver
   version is 4.11.0.29.

- Trusted Virtual Function

   Setting a Virtual Function (VF) to be trusted using the Intel extended
   esxcli tool (intnetcli) allows the VF to request unicast/multicast
   promiscuous mode. Additionally, a trusted VF can request more MAC
   addresses and VLANs, subject to hardware limitations. When using
   intnetcli, the VF must be set to the desired mode again after every VM
   or host reboot, because the ESXi kernel may assign a different VF to the
   VM after a reboot. To make all VFs trusted persistently across VM or
   host reboots and power cycles, set the 'trust_all_vfs' module parameter.

   To enable trusted virtual functions, use:
   esxcfg-module -s -a trust_all_vfs=1 ixgben, or
   esxcli system module parameters set -a -m ixgben -p trust_all_vfs=1

   To disable trusted virtual functions (default setting), use:
   esxcfg-module -s -a trust_all_vfs=0 ixgben, or
   esxcli system module parameters set -a -m ixgben -p trust_all_vfs=0

   NOTE1: The above commands replace the current module parameter settings.
   Refer to the "Command Line Parameters" section for how to append a new
   value to the current settings.

   NOTE2: Using this feature may impact performance.
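
   To confirm which mode is currently configured, the parameter can be read
   back with the list command already shown in these notes (a simple sketch):

   ```shell
   # Show the current trust_all_vfs value among the ixgben parameters
   esxcli system module parameters list -m ixgben | grep trust_all_vfs
   ```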

Supported Features:
-------------------

- Rx, Tx, TSO checksum offload,
- Netqueue (VMDQ),
- Default Queue, NetQueue and Device RSS,
- Hardware VLAN filtering,
- Rx Hardware VLAN stripping,
- Tx Hardware VLAN inserting,
- Interrupt moderation,
- SR-IOV (supports one queue per VF, VF MTU, and VF VLAN),
   Valid range for max_vfs:
   1-61 (VMDQ default)
   1-63 (VMDQ set to 0 or 1)
   When VMDQ is set to 2 or more, the maximum number of VFs supported is 63 minus the VMDQ value,
- VMDQ and SR-IOV co-existence,
- Link Auto-negotiation,
- Flow Control,
- Management APIs for CIM Provider, OCSD/OCBB,
- Firmware recovery mode
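
As a sketch of the max_vfs ranges above (one value per port, as with VMDQ;
the counts are illustrative and assume an SR-IOV capable adapter):

```shell
# Request 8 VFs on the first port and 16 on the second
esxcfg-module -s "max_vfs=8,16" ixgben

# With VMDQ set to 4 on a port, at most 63 - 4 = 59 VFs fit on that port
```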

New Features:
-------------

- Added Wake on LAN (WoL) support
- Added Linkville E610 Recovery Mode and Rollback Support
- Added support for Linkville E610 TX Hang detection due to an unhandled MDD Event
- Added Linkville E610 Thermal Sensor Support
- Added Linkville E610 Firmware Logging Support
- Added Linkville E610 Firmware Version Compatibility (any-to-any)
- Added Linkville E610 Debug Dump support

New Hardware Supported:
-----------------------
- E610

Bug Fixes:
----------

- None

Known Issues and Workarounds:
----------------------------

- On 82599 adapters emulated interfaces in a VLAN portgroup (VST or VGT) may not be able to communicate with VF interfaces of
  the same PF. 82599 adapters lack a HW feature that allows the driver to configure VLAN mode for loopback
  traffic in the emulated data path.
   Workaround: none

- Incoming VLAN traffic is dropped after enabling software emulation of VLAN tagging and untagging for given PF. This
  has been introduced intentionally to drop VLAN tagged packets which do not have an active destination endpoint.
   Workaround: none

- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
   Workaround: Please look at the VMware Knowledge Base 2057874

- The VF VM using the ixgbevf driver may crash if the user constantly changes the MTU size on an adapter that is
  carrying heavy traffic. The issue was observed when decreasing the MTU from a higher value to the default of 1500 bytes.
   Workaround: Disable the "large buffers" feature on the VF driver by issuing the below command on the guest OS:
               "ethtool --set-priv-flags eth0 legacy-rx on"

- In certain conditions the VF driver may receive an incorrect state of the PF. It may also result in a lost frame before
  executing a command or a lost response to the command. The issue is visible in stress test setups with many operations
  on the VMs. In such environments the VF reset procedure may fail, causing the traffic to break for some amount of time
  or until the next VF reset is performed. The user may see information about TX hanging in the system logs.
   Workaround: none

- Lack of communication between VFs after changing the RX and TX ring sizes on the PF a few times
   Workaround: Do not set both RX and TX at the same time, or wait 10-15 seconds between reconfigurations

- Packet loss between the PF and a VF when more than 52 VFs are used but only one VF is up on a VM at a given time.
   Workaround: Bring up another VF


Command Line Parameters:
------------------------

Ethtool is not supported for the native driver.
Use esxcli, vsish, or esxcfg-* to set or get driver information, for example:

Setting driver module parameter:

- Setting a new driver module parameter while clearing other driver module parameters:
  esxcli system module parameters set -m ixgben -p VMDQ=4,16

- Appending a driver module parameter while leaving other driver module parameters unchanged:
  esxcli system module parameters set -m ixgben -a -p VMDQ=4,16

Get commands:

- Get the driver supported module parameters
  esxcli system module parameters list -m ixgben

- Get the driver info
  esxcli network nic get -n vmnic1

- Get uplink statistics
  esxcli network nic stats -n vmnic1

Other:

- Disable kernel VLAN issue workaround
  esxcli system module parameters set -m ixgben -a -p "VlanRemoveWorkaround=0,0"

- Set VMDQ on 1st port, disable on the 2nd port
  esxcfg-module -s "VMDQ=4,0" ixgben


The extended esxcli tool allows users to set device specific configurations, for example:

- Dump Optical Module Information
  esxcli intnet module read -n vmnic1


Features Supported in the Intnetcli Tool:
-----------------------------------------

- Link privilege
- Dump Optical Module Information
- FEC Configuration
- Enable/Disable Firmware LLDP Engine
- RSS Configuration

The tool is available at the following link: https://downloadcenter.intel.com/download/28479

=================================================================================

Previously Released Versions:
-----------------------------

- Driver Version: 1.18.2.0
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 8.0
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug Fixes:
      - Re-trigger link setup when a fiber module is present, but the link is down. 

- Driver Version: 1.15.1.0
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 8.0
   New Features Supported:
      - Added support for turning link flapping on and off based on NVM
      - Added support for shimming layer
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed incorrect packet segmentation for TCP over IPv6 with extension headers


- Driver Version: 1.14.1.0
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed PSOD when trying to boot a system with more than the maximum number of PFs
      - Fixed PSOD when a wrong queue was passed by the OS to the driver
      - Changed behavior of interrupt throttling parameters setting to be unified with 40G driver.
      - Removed wrong warning message when enabling ENS Interrupt Mode without SR-IOV
      - Extended log about RSS and DRSS usage

- Driver Version: 1.13.1.0
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Removed link flapping messages from vobd logs on the unused vmnic ports of X520 adapter
      - Restored an option to set maximum number of TX/RX descriptors through TxDesc/RxDesc module parameters
      - Fixed serial number not being displayed on multiple adapters when using Intel(R) Ethernet NVM Update Tool

- Driver Version: 1.12.3
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - Added access to flash memory for NVM update and QV tools
   New Hardware Supported:
      - None
   Bug fixes:
      - Corrected maximum number of RSS and DRSS queues being printed in system log
      - Added "Detaching the ixgben driver" to system log during driver's detach phase
      - Fixed validation of driver module parameters

- Driver Version: 1.12.2
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed issue with RSS traffic not being spread equally on all RSS queues
      - Fixed problem with VLAN tagged VF interface not working after rebooting the Virtual Machine
      - Removed 'Software semaphore SMBI between device drivers not granted' warning from system log
      - Fixed communication issues after setting adapter down/up in VGT configuration
      - Removed link flapping messages on the unused vmnic ports of X520 adapter
      - Corrected wrong number of DRSS queues being printed in system log

- Driver Version: 1.11.4
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - Added ESXi support of simplified hardware access for NVM update.
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed issue with administrative link down on a vmnic being enabled after reset.
      - Fixed issue with inconsistent allocation of DRSS queues with low number of CPU cores.
      - Removed warning message 'invalid interrupt' while reloading the driver.
      - Fixed VLAN headers missing from IPv6 packet body after turning off VLAN offloads.
      - Removed VF's ability to send LFC pause frames which could cause denial of service.

- Driver Version: 1.10.3
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 7.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed TX hang detected sporadically during bidirectional traffic.
      - Fixed speed settings issues on back-to-back connected hosts.
      - Fixed incorrect values in VF statistics counters.
      - Fixed 'esxcli network nic get' command output for Sage Pond adapter.
      - Fixed MAC anti-spoofing operation.
      - Enabled VLAN anti-spoofing by default.
      - Fixed unexpected PCI exception log.
      - Fixed NetQ RSS being enabled while VMDQ is disabled.
      - Fixed link going down after a few cycles of ejecting/inserting SFP adapter.
      - Fixed RSS configuration when VMDQ is set to 0.
      - Fixed incorrect cable type shown in esxcli.
      - Fixed issues with bringing up VF interfaces.
      - Fixed Rx queue allocation when number of VMs is greater than number of VMDQs.
      - Fixed problems with MTU setting when SRIOV is enabled in DPDK environment.
      - Fixed issue with no VLAN header in packet body after turning off the VLAN offload.

- Driver Version: 1.9.12
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed VLAN setting issues on VF.

- Driver Version: 1.9.11
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - 2.5Gbps and 5Gbps link speed support for Sageville
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed failed communication while setting VGT VLAN twice on the same portgroup.
      - Fixed an issue where the user could not create 63 VFs for one port on the adapter (only 62 were available).
      - Fixed X550 PHY not powering down
      - Fixed Tx hang on VF while resetting PF interface
      - Fixed VF dropping untagged traffic when configured in VLAN trunk mode.
      - Fixed link state propagation to VFs
      - Fixed VF connection issues when VGT is active and a VF reset occurs

- Driver Version: 1.8.9
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed MTU value shown in vmkernel.log

- Driver Version: 1.8.7
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Added priority field to VLAN header
      - Added notification of link partner when Link Flow Control settings are changed
      - Reduced initialization time after uplink is set down then up


- Driver Version: 1.7.20
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug fixes:
      - Fixed high CPU usage when SFP+ module is not inserted in the NIC.


- Driver Version: 1.7.17
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - Added Wake on LAN (WoL) support.
   New Hardware Supported:
      - None
   Bug Fixes:
      - Fixed driver version reporting as unavailable in iDRAC.
      - Fixed incorrect branding strings for specific supported devices.
   Known Issues:
      - If the VF's guest VLAN interface and the VF's portgroup have the same VLAN ID, packets appear on the VF's guest VLAN
        interface for 82599 and x540 adapters due to the HW limitation.
         Workaround: none
      - On 82599 adapters emulated interfaces in a VLAN portgroup (VST or VGT) may not be able to communicate with VF interfaces of
        the same PF. 82599 adapters lack a HW feature that allows the driver to configure VLAN mode for loopback
        traffic in the emulated data path.
         Workaround: none
      - Incoming VLAN traffic is dropped after enabling software emulation of VLAN tagging and untagging for given PF. This
        has been introduced intentionally to drop VLAN tagged packets which do not have an active destination endpoint.
         Workaround: none
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874


- Driver Version: 1.7.15
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - None
   New Hardware Supported:
      - None
   Bug Fixes:
      - Fixed VLAN tagged packets accepted after turning off last VM
      - Fixed VF connection issues when MAC address is changed
      - Fixed missing OROM version in 'esxcli network nic get -n <vmnic>' output
      - Fixed untagged packets being received by a VF in a VLAN-tagged portgroup
      - Fixed VF being able to send VLAN-tagged packets despite being in an untagged portgroup
      - Fixed intermittent Tx hang due to a race condition
   Known issues:
      - If the VF's guest VLAN interface and the VF's portgroup have the same VLAN ID, packets appear on the VF's guest VLAN
        interface for 82599 and x540 adapters due to the HW limitation.
         Workaround: none
      - On 82599 adapters emulated interfaces in a VLAN portgroup (VST or VGT) may not be able to communicate with VF interfaces of
        the same PF. 82599 adapters lack a HW feature that allows the driver to configure VLAN mode for loopback
        traffic in the emulated data path.
         Workaround: none
      - Incoming VLAN traffic is dropped after enabling software emulation of VLAN tagging and untagging for given PF. This
        has been introduced intentionally to drop VLAN tagged packets which do not have an active destination endpoint.
         Workaround: none
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874


- Driver Version: 1.7.10
   Hardware Supported: Intel(R) Ethernet Controllers 82599, x520, x540, x550, and x552 family
   Supported ESXi releases: 6.0 and 6.7
   Compatible ESXi version: 6.5
   New Features Supported:
      - Added firmware recovery mode.
   New Hardware Supported:
      - None
   Bug Fixes:
      - Fixed traffic hang between VM with VF adapter and VM with vmxnet3 after disabling/enabling vmnic.
      - Fixed an issue with VM's being able to communicate over VLANs in VGT mode when Port Group had VLANs set to None(0)
      - Fix for silicon errata #26: TX hang observed on some queues during regular traffic with VFLR on the fly.
        Please see the X550 specification update for more information.
      - Fix for dropped tagged loopback traffic originated from VF.
      - Fix for lost VLAN connectivity once the last VM in a port group has been shut down.
      - Reduced driver's memory footprint
      - Fix for duplicate multicast / broadcast packets during heavy traffic.
      - Fix for X552/X557-AT adapters linking at 1G speed after a NIC down/up cycle.
      - Fix for the NIC down procedure hanging when heavy traffic is running.
      - Fix for lost connectivity when VMDQ is disabled and SR-IOV is enabled
   Known Issues:
      - Unable to reload VF driver on SLES 12SP2 and ESXi 6.0 update 3.
         Workaround: upgrade to ESXi 6.0 update 3a or ESXi 6.5
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874
