A switch port using 802.1x authentication will send EAP
Request-Identity packets once the physical link is up, and will not
forward packets until the port identity has been established.
We do not currently support 802.1x authentication. However, a
reasonably common configuration involves using a preset list of
permitted MAC addresses, with the "authentication" taking place
between the switch and a RADIUS server. In this configuration, the
end device does not need to perform any authentication step, but does
need to be prepared for the switch port to fail to forward packets for
a substantial time after physical link-up. This exactly matches the
"blocked link" semantics already used when detecting a non-forwarding
switch port via LACP or STP.
Treat a received EAP Request-Identity as indicating a blocked link.
Unlike with LACP or STP, there is no way to determine the expected
time until the next EAP packet, and so we must choose a fixed timeout.
Erroneously assuming that the link is blocked is relatively harmless
since we will still attempt to transmit and receive data even over a
link that is marked as blocked, and so the net effect is merely to
prolong DHCP attempts. In contrast, erroneously assuming that the
link is unblocked will potentially cause DHCP to time out and give up,
resulting in a failed boot.
The default EAP Request-Identity interval in Cisco switches (where
this is most likely to be encountered in practice) is 30 seconds, so
choose 45 seconds as a timeout that is likely to avoid gaps during
which we falsely assume that the link is unblocked.
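A minimal sketch of this behaviour, assuming the existing
netdev_link_block() blocked-link mechanism (the handler and timeout
constant names here are purely illustrative):

  /* 45 seconds covers the default 30-second Request-Identity interval */
  #define EAP_BLOCK_TIMEOUT ( 45 * TICKS_PER_SEC )

  static void eap_rx_request_identity ( struct net_device *netdev ) {
          /* Mark link as blocked; we still attempt to transmit and
           * receive, so the net effect is only to prolong DHCP attempts.
           */
          netdev_link_block ( netdev, EAP_BLOCK_TIMEOUT );
  }
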
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Replace the GPL2+-only EAPoL code (currently used only for WPA) with
new code licensed under GPL2+-or-UBDL.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Continue to transmit DHCPDISCOVER while waiting for a blocked link, in
order to support mechanisms such as Cisco MAC Authentication Bypass
that require repeated transmission attempts to trigger the action that
will result in the link becoming unblocked.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Record the root of trust used at the point that a certificate is
validated, redefine validation as checking a certificate against a
specific root of trust, and pass an explicit root of trust when
creating a TLS connection.
This allows a custom TLS connection to be used with a custom root of
trust, without causing any validated certificates to be treated as
valid for normal purposes.
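As an illustrative sketch only (assuming the fingerprint-based
struct x509_root used for the standard root of trust; the fingerprint
value below is a placeholder, not a real certificate hash):

  static const uint8_t custom_fingerprint[SHA256_DIGEST_SIZE] = {
          /* Placeholder: SHA-256 fingerprint of the custom root CA */
          0x00,
  };

  static struct x509_root custom_root = {
          .digest = &sha256_algorithm,
          .count = 1,
          .fingerprints = custom_fingerprint,
  };
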
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Use the existing certificate store to automatically append any
available issuing certificates to the selected client certificate.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Restructure the use of add_tls() to insert a TLS filter onto an
existing interface. This allows for the possibility of using
add_tls() to start TLS on an existing connection (as used in several
protocols which will negotiate the choice to use TLS before the
ClientHello is sent).
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Unlike netdev_rx_err(), there is no valid circumstance under which
netdev_rx() may be called with a null I/O buffer, since a call to
netdev_rx() represents the successful reception of a packet. Fix the
code comment to reflect this.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
netdev_tx_err() may be called with a null I/O buffer (e.g. to record a
transmit error with no associated buffer). Avoid a potential null
pointer dereference in the DMA unmapping code path.
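A minimal sketch of the check on the unmapping path (assuming the
iob_unmap() helper):

  /* No I/O buffer may be associated with the reported error */
  if ( iobuf )
          iob_unmap ( iobuf );
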
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Include a potential DMA mapping within the definition of an I/O
buffer, and move all I/O buffer DMA mapping functions from dma.h to
iobuf.h. This avoids the need for drivers to maintain a separate list
of DMA mappings for each I/O buffer that they may handle.
Network device drivers typically do not keep track of transmit I/O
buffers, since the network device core already maintains a transmit
queue. Drivers will typically call netdev_tx_complete_next() to
complete a transmission without first obtaining the relevant I/O
buffer pointer (and will rely on the network device core automatically
cancelling any pending transmissions when the device is closed).
To allow this driver design approach to be retained, update the
netdev_tx_complete() family of functions to automatically perform the
DMA unmapping operation if required. For symmetry, also update the
netdev_rx() family of functions to behave the same way.
As a further convenience for drivers, allow the network device core to
automatically perform DMA mapping on the transmit datapath before
calling the driver's transmit() method. This avoids the need to
introduce a mapping error handling code path into the typically
error-free transmit methods.
With these changes, the modifications required to update a typical
network device driver to use the new DMA API are fairly minimal (see
the sketch after this list):
- Allocate and free descriptor rings and similar coherent structures
  using dma_alloc()/dma_free() rather than malloc_phys()/free_phys()
- Allocate and free receive buffers using alloc_rx_iob()/free_rx_iob()
  rather than alloc_iob()/free_iob()
- Calculate DMA addresses using dma() or iob_dma() rather than
  virt_to_bus()
- Set a 64-bit DMA mask if needed using dma_set_mask_64bit() and
  thereafter eliminate checks on DMA address ranges
- Either record the DMA device in netdev->dma, or call iob_map_tx() as
  part of the transmit() method
- Ensure that debug messages use virt_to_phys() when displaying
  "hardware" addresses
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The UEFI specification provides a partial definition of an Infiniband
device path structure. Use this structure to construct what may be a
plausible path containing at least some of the information required to
identify an SRP target device.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
There is no standard defined for AoE device paths in the UEFI
specification, and it seems unlikely that any standard will be adopted
in future.
Choose to construct an AoE device path using a concatenation of the
network device path and a SATA device path, treating the AoE major and
minor numbers as the HBA port number and port multiplier port number
respectively.
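As an illustration of this mapping (the SATA_DEVICE_PATH field names
are from the UEFI specification; the aoe field names are assumptions):

  /* sata points at the SATA portion of the constructed device path */
  sata->Header.Type = MESSAGING_DEVICE_PATH;
  sata->Header.SubType = MSG_SATA_DP;
  sata->HBAPortNumber = aoe->major;              /* AoE major number */
  sata->PortMultiplierPortNumber = aoe->minor;   /* AoE minor number */
  sata->Lun = 0;                                 /* AoE has no LUN */
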
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Now that IPv6 is enabled by default for UEFI builds, it is important
that iPXE does not delay unnecessarily in the (still relatively
common) case of a network that lacks IPv6 routers.
Apply the timeout values used for neighbour discovery to the router
discovery process.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The LACP responder reuses the received I/O buffer to construct the
response LACP (or marker) packet. Any received padding will therefore
be unintentionally included within the response.
Truncate the received I/O buffer to the expected length (which is
already defined in a way that allows for future protocol expansion)
before reusing it to construct the response.
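A minimal sketch of the truncation (assuming the usual
iob_len()/iob_unput() helpers and a pointer to the expected LACP
structure):

  /* Strip any received padding so that it is not echoed back */
  if ( iob_len ( iobuf ) > sizeof ( *lacp ) )
          iob_unput ( iobuf, ( iob_len ( iobuf ) - sizeof ( *lacp ) ) );
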
Reported-by: Tore Anderson <tore@fud.no>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Some external drivers (observed with the UEFI NII driver provided by
an HPE-branded Mellanox ConnectX-3 Pro) seem to cause LACP packets
transmitted by iPXE to be looped back as received packets. Since
iPXE's trivial LACP responder will send one response per received
packet, this results in an immediate LACP packet storm.
Detect looped back LACP packets (based on the received LACP actor MAC
address), and refuse to respond to such packets.
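Sketched roughly (the field holding the received actor MAC address is
an assumption for illustration):

  /* Refuse to respond to looped-back copies of our own packets */
  if ( memcmp ( lacp->actor.system, netdev->ll_addr, ETH_ALEN ) == 0 )
          return;
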
Reported-by: Tore Anderson <tore@fud.no>
Tested-by: Tore Anderson <tore@fud.no>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Split debug message since eth_ntoa() uses a static result buffer.
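For illustration only: formatting two addresses in a single message
would print the same string twice, since both eth_ntoa() calls return
the same static buffer, whereas splitting the message formats each
address correctly (variable names and message text are placeholders):

  /* Broken: both conversions share eth_ntoa()'s static buffer */
  DBGC ( netdev, "%s sent to %s\n",
         eth_ntoa ( first ), eth_ntoa ( second ) );

  /* Split: each address is formatted by its own call */
  DBGC ( netdev, "%s sent to ", eth_ntoa ( first ) );
  DBGC ( netdev, "%s\n", eth_ntoa ( second ) );
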
Originally-fixed-by: Michael Bazzinotti <bazz@bazz1.com>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
When no response is obtained from the first configured DNS server,
fall back to attempting the other configured servers.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
All implemented socket openers provide definitions for both IPv4 and
IPv6 using exactly the same opener method. Simplify the logic by
omitting the address family from the definition.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The supported ciphers and digest algorithms may already be specified
via config/crypto.h. Extend this to allow a minimum TLS protocol
version to be specified.
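For example, config/crypto.h might then contain something along the
following lines (the exact macro and constant names are assumptions
based on this description):

  /* Require at least TLS v1.2 */
  #define TLS_VERSION_MIN TLS_VERSION_TLS_1_2
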
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Eliminate an unnecessary variable-length stack allocation and memory
copy by allowing TFTP option processors to modify the option string
in-place.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Allow a PeerDist hosted cache server to be specified via the
${peerhost} setting, e.g.:
  # Use 192.168.0.1 as hosted cache server
  set peerhost 192.168.0.1
Note that this simply treats the hosted cache server as a permanently
discovered peer for all segments.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Allow the use of PeerDist content encoding to be enabled or disabled
via the ${peerdist} setting, e.g.:
  # Disable PeerDist
  set peerdist 0
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The restart of negotiation triggered by a HelloRequest currently does
not call tls_tx_resume() and so may end up leaving the connection in
an idle state in which the pending ClientHello is never sent.
Fix by calling tls_tx_resume() as part of tls_restart(), since the
call to tls_tx_resume() logically belongs alongside the code that sets
bits in tls->tx_pending.
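In outline (the pending-transmission flag name is illustrative), the
restart path now resumes the transmit data path itself:

  static int tls_restart ( struct tls_connection *tls ) {
          /* ... reset handshake state ... */

          /* Schedule and actually trigger transmission of ClientHello */
          tls->tx_pending |= TLS_TX_CLIENT_HELLO;
          tls_tx_resume ( tls );

          return 0;
  }
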
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Raw block downloads are expensive if the origin server uses HTTPS,
since each concurrent download will require local TLS resources
(including potentially large received encrypted data buffers).
Raw block downloads may also be prohibitively slow to initiate when
the origin server is using HTTPS and client certificates. Origin
servers for PeerDist downloads are likely to be running IIS, which has
a bug that breaks session resumption and requires each connection to
go through the full client certificate verification process.
Limit the total number of concurrent raw block downloads to ameliorate
these problems.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Move the responsibility for starting the block download timers from
peerblk_expired() to peerblk_raw_open() and peerblk_retrieval_open(),
in preparation for adding the ability to defer calls to
peerblk_raw_open() via a block download queue.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
The Hermon driver uses vlan_find() to identify the appropriate VLAN
device for packets that are received with the VLAN tag already
stripped out by the hardware. Generalise this capability and expose
it for use by other network card drivers.
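A rough sketch of how a driver's receive path might use this when the
hardware has already stripped the tag (variable names are
illustrative):

  struct net_device *vlan;

  /* Deliver to the VLAN device corresponding to the stripped tag,
   * if any such device exists; otherwise fall back to the trunk.
   */
  vlan = vlan_find ( netdev, tag );
  netdev_rx ( ( vlan ? vlan : netdev ), iobuf );
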
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Provide increased visibility into the progress of TCP connections by
displaying an explicit "connecting" status message while waiting for
the TCP handshake to complete.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
TLS connections will almost always create background connections to
perform cross-signed certificate downloads and OCSP checks. There is
currently no direct visibility into which checks are taking place,
which makes troubleshooting difficult in the absence of either a
packet capture or a debug build.
Use the job progress message buffer to report the current cross-signed
certificate download or OCSP status check, where applicable.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Record the session ID (if any) provided by the server and attempt to
reuse it for any concurrent connections to the same server.
If multiple connections are initiated concurrently (e.g. when using
PeerDist) then defer sending the ClientHello for all but the first
connection, to allow time for the first connection to potentially
obtain a session ID (and thereby speed up the negotiation for all
remaining connections).
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Devices that support jumbo frames will currently default to the
largest possible MTU. This assumption is valid for virtual adapters
such as virtio-net, where the MTU must have been configured by a
system administrator, but is unsafe in the general case of a physical
adapter.
Default to the standard Ethernet MTU, unless explicitly overridden
either by the driver or via the ${netX/mtu} setting.
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Avoid calling rndis_halt() and rndis->op->close() twice if the call to
register_netdev() fails.
Reported-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
register_netdev() expects ->hw_addr and ->ll_addr to be filled in
already, so move the call towards the end of register_rndis(), after
the respective fields have been successfully queried from the
underlying device.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Modified-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>