VMware ESXi: Connecting iSCSI Storage

Basic iSCSI Storage Configuration

VMware features such as HA, DRS, vMotion, and Storage vMotion require shared SAN storage, and iSCSI SAN is one of the network technologies for providing this type of storage.

This article covers the basic settings required to connect and use an iSCSI SAN on your ESX host.

If you plan to use CHAP authentication, the advanced iSCSI storage settings are covered in the article "Configuring iSCSI Storage (Advanced CHAP)".

1. First, make sure you have a VMkernel port. If your virtual switch (vSwitch) does not have one, create it; see the article "Creating a VMkernel Port".

By default, two port groups are created on an ESX host: one for virtual machines (Virtual Machine) and one for the service console (Service Console).

2. Next, configure the storage adapter. This is the software iSCSI adapter; note that using software iSCSI adds extra CPU overhead on the server.

3. In the "Storage Adapters" section, click the iSCSI Software Adapter and open its "Properties".

4. Click the "Configure" button in the iSCSI initiator properties dialog.

5. In the status section, check "Enabled" and click OK.

6. The iSCSI initiator name and its alias will also be generated automatically.

7. If the target discovery method in use is "Send Targets", go to the "Dynamic Discovery" tab.

8. Click "Add". Enter the IP address and port of the iSCSI server.

9. Click OK, then Close.

10. A prompt to perform a rescan appears. Click Yes.

11. As a result, you will see the iSCSI adapter settings and a list of all LUNs available from the SAN.

12. All that remains is to create a VMFS datastore (see "Creating a VMFS Datastore") or an RDM (see "Attaching a Raw Device Mapping (RDM) Disk").
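
For reference, the same procedure can also be scripted from the command line with esxcli on a current ESXi host. This is only a minimal sketch, not part of the original walkthrough; it assumes the software iSCSI adapter enumerates as vmhba33 and that your target portal is 192.168.1.100:3260 (substitute your own values).

Enable the software iSCSI initiator:

# esxcli iscsi software set --enabled=true

Add the send-targets (dynamic discovery) address:

# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.100:3260

Rescan the adapter so the discovered LUNs appear:

# esxcli storage core adapter rescan --adapter=vmhba33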

Source

VMware: How to Attach an iSCSI LUN from a Storage Array to an ESXi Host

This article describes how to attach an iSCSI LUN from a storage array to an ESXi host.

So, I assume you already have a LUN on the storage array that we will attach to the ESXi host. I have already described how to create a LUN on a NetApp array.

Before attaching an iSCSI LUN to an ESXi host, you need to create a software iSCSI adapter on the host. To do this, on the host's "Storage Adapters" tab, click "Add" to add the software iSCSI adapter.

After that, select the newly created adapter and click "Properties". Here you can see the adapter's iSCSI name (IQN, shown in the WWN column), which you will need when configuring access to the LUN on the storage array.

In the window that opens, on the "General Properties" tab, you can specify a friendly alias for our initiator.

On the "Network Configuration" tab, add the network adapter that will be used to carry iSCSI traffic. Click "Add"

and select the required interface.

On the "Dynamic Discovery" tab, specify the IP address of the storage array.

After that, rescan all storage adapters on the host.

After the rescan, you will see your LUN in the list of available devices.
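
If you prefer to check this from the ESXi shell, the following commands (a sketch, not part of the original walkthrough) list the active iSCSI sessions and the discovered block devices:

# esxcli iscsi session list

# esxcli storage core device list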

Now the storage needs to be added to the host. On the "Storage" tab, click "Add Storage".

In the window that opens, select "Disk/LUN" and click "Next".

Select our LUN.

Select the file system type.

On the next screen, click "Next".

Then choose a name for the new datastore.

Select how much of the available space to use.

On the final screen, click "Finish" and wait for the datastore creation to complete.

After that, the new datastore appears in the list of storage available for placing virtual machines.

That is how you attach an iSCSI LUN to an ESXi host.

How to properly detach a datastore (LUN) from a host (or cluster) is described here.

Source

Connecting an iSCSI Storage LUN in VMware ESXi

In VMware vSphere you can use iSCSI disks as shared storage for your ESXi hosts. An ESXi host accesses such disks over your local network using the iSCSI protocol on top of TCP. In this article we will look at how to connect an iSCSI LUN from your storage array (or server) to a VMware ESXi host and create a shared VMFS datastore on it.

We assume that you have already created, configured, and published an iSCSI target (disk) on your storage array (on Windows Server you can use a virtual vhdx disk as the iSCSI target).

In this example we use a standalone host running ESXi 6.7 (the free ESXi Hypervisor can also be used). It can be a physical host or a virtual machine (for example, ESXi running under nested virtualization in Hyper-V). The host has two network interfaces (one will be used for management, the second for traffic to the iSCSI LUN).

Log in to the ESXi host web management interface ( https://192.168.13.50/ui/#/login ).

Configuring the Network for iSCSI Traffic in VMware ESXi

First you need to create a separate VMkernel network interface that the ESXi host will use to access the iSCSI storage. Go to Networking -> VMkernel NICs -> Add VMkernel NIC.

In addition to the vmk port, you also need to create a new port group (New port group). Specify a name for this group – iSCSI – and assign a static IP address to your VMkernel interface.
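
The same VMkernel interface can also be created from the ESXi shell. A sketch, assuming the port group is named iSCSI, it lives on vSwitch0, and the address 192.168.13.60/24 from this example is used:

# esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch0

# esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI

# esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.13.60 --netmask=255.255.255.0 --type=static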

Now go to the settings of your standard switch vSwitch0 (Networking -> Virtual Switches). Check that the server's second physical interface, vmnic1, is added to the configuration and active (if not, click the Add uplink button and add it).

In the NIC teaming section, check that both physical network interfaces are in the Active state.

Now, in the settings of the iSCSI port group, you need to allow only the second interface to be used for iSCSI traffic. Go to Networking -> Port groups -> iSCSI -> Edit settings. Expand the NIC teaming section and set Override failover order = Yes. Leave only vmnic1 active and move vmnic0 to the Unused state.
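
The same override can be applied from the ESXi shell (a sketch; uplinks that are not listed as active or standby become unused for this port group):

# esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI --active-uplinks=vmnic1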

As a result, your ESXi host will use only this one server interface to access your iSCSI LUN.

Configuring the Software iSCSI Adapter in VMware ESXi

The software iSCSI adapter is disabled by default in ESXi. To enable it, go to Storage -> Adapters and click the Software iSCSI button.

Change iSCSI enabled to Enabled.

Then, in the Dynamic targets section, add the IP address of your iSCSI storage and the connection port (by default, iSCSI traffic uses TCP port 3260). ESXi will scan all iSCSI targets on that server and list them under Static targets.

Save the settings. Note that a new HBA, vmhba65, of type iSCSI Software Adapter has appeared on the Storage -> Adapters tab.

If you do not see the list of iSCSI targets on the storage array, you can troubleshoot access to the iSCSI disk from the ESXi console.

Enable SSH on the VMware ESXi host and connect to it with any SSH client (I use the built-in Windows 10 SSH client).

The following command checks whether your iSCSI storage (192.168.13.10) is reachable from the specified VMkernel port (vmk1):

# vmkping -I vmk1 192.168.13.10

In this example, the iSCSI storage responds to ping.

Now check that iSCSI port TCP 3260 is reachable on the storage (in this example 192.168.13.60 is the IP address of the vmk1 interface):

# nc -s 192.168.13.60 -z 192.168.13.10 3260

Check that software iSCSI is enabled on the host:

# esxcli iscsi software get

If necessary, enable it:

# esxcli iscsi software set -e true

You can also view the current parameters of the software iSCSI HBA adapter:

# esxcli iscsi adapter get -A vmhba65

Creating a VMFS Datastore on an iSCSI LUN in VMware ESXi

Now you can create a VMFS (Virtual Machine File System) datastore on the available iSCSI disk to store virtual machine files.

Go to Storage -> Datastores -> New datastore.

Specify a name for the VMFS datastore and select the iSCSI LUN on which to create it.

Select the VMFS 6 file system type and specify that the datastore should use the full capacity of the iSCSI disk. A few seconds later the new VMFS datastore becomes available in ESXi.
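
To confirm from the ESXi shell that the new VMFS volume is mounted (a quick check, not part of the original steps):

# esxcli storage filesystem list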

So, you have connected an iSCSI disk to your ESXi host and created a VMFS datastore on it. Several ESXi servers can use this datastore at the same time. Now that you have shared storage, if you set up VMware vCenter Server you will be able to use vMotion to move running VMs between hosts.


Source

Best Practices For Running VMware vSphere On iSCSI

Introduction

VMware offers and supports a number of different storage technologies and protocols for presenting external storage devices to VMware vSphere hosts. In recent years, the iSCSI protocol has gained popularity as a method for presenting block storage devices over a network to vSphere hosts. VMware has provided support for iSCSI storage since Virtual Infrastructure 3. This paper can help you understand the design considerations and deployment options for deploying vSphere infrastructures using iSCSI storage. It highlights trade-offs and factors to consider when deploying iSCSI storage to support vSphere environments. It is a complement to, not a replacement for, VMware product documentation.

iSCSI Overview

iSCSI is a protocol that uses TCP/IP to transport SCSI commands, enabling the use of the existing TCP/IP networking infrastructure as a SAN. As with SCSI over Fibre Channel (FC), iSCSI presents SCSI targets and devices to iSCSI initiators (requesters). Unlike NAS, which presents devices at the file level, iSCSI makes block devices available via the network. Block devices are presented across an IP network to your local system. These can be consumed in the same way as any other block storage device.

iSCSI Considerations

For datacenters with centralized storage, iSCSI offers customers many benefits. It is comparatively inexpensive and it is based on familiar SCSI and TCP/IP standards. In comparison to FC and Fibre Channel over Ethernet (FCoE) SAN deployments, iSCSI requires less hardware, it uses lower-cost hardware, and more IT staff members might be familiar with the technology. These factors contribute to lower-cost implementations.

One major difference between iSCSI and FC relates to I/O congestion. When an iSCSI path is overloaded, the TCP/IP protocol drops packets and requires them to be resent. FC communication over a dedicated path has a built-in pause mechanism when congestion occurs. When a network path carrying iSCSI storage traffic is substantially oversubscribed, a bad situation quickly grows worse and performance further degrades as dropped packets must be resent. There can be multiple reasons for an iSCSI path being overloaded, ranging from oversubscription (too much traffic), to network switches that have a low port buffer. Although some iSCSI storage vendors have implemented Delayed Ack and Congestion Avoidance as part of their TCP/IP stack, not all have. Various iSCSI array vendors even recommend disabling DelayedAck for the iSCSI adapter. VMware recommends consulting the iSCSI array vendor for specific recommendations around DelayedAck. For more details on this issue please refer to: https://kb.vmware.com/s/article/1002598
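
As an illustration only (not taken from this paper), the Delayed ACK setting of the software iSCSI adapter can be inspected and, if your array vendor advises it, changed with esxcli. The sketch below assumes the adapter is vmhba65; use the exact parameter name reported by the get command (for example DelayedAck or DelayedACK, depending on the release), and refer to the KB article above for the supported procedures.

List the current adapter parameters, including Delayed ACK:

# esxcli iscsi adapter param get --adapter=vmhba65

Disable Delayed ACK only on your vendor's recommendation:

# esxcli iscsi adapter param set --adapter=vmhba65 --key=DelayedACK --value=false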

Another consideration is the network bandwidth. Network bandwidth is dependent on the Ethernet standards used (1Gb or 10Gb). There are other mechanisms such as port aggregation and bonding links that deliver greater network bandwidth. When implementing software iSCSI that uses network interface cards rather than dedicated iSCSI adapters, gigabit Ethernet interfaces are required. These interfaces tend to consume a significant amount of CPU resources.

One way of overcoming this demand for CPU resources is to use a feature called a TOE (TCP/IP offload engine). TOEs shift TCP packet processing tasks from the server CPU to specialized TCP processors on the network adaptor or storage device. Most enterprise-level networking chipsets today offer TCP offload or checksum offload, which greatly reduces CPU overhead.

iSCSI was long considered a technology that did not work well over most shared wide-area networks and has predominantly been treated as a local-area-network technology. However, this is changing. For synchronous replication writes (in the case of high availability) or remote data writes, iSCSI might not be a good fit: added latency increases data-transfer delays and might impact application performance. Asynchronous replication, which is not latency sensitive, is a much better fit for iSCSI. For example, VMware vCenter™ Site Recovery Manager™ may build upon iSCSI asynchronous storage replication for simple, reliable site disaster protection.

iSCSI Architecture

iSCSI initiators must manage multiple, parallel communication links to multiple targets. Similarly, iSCSI targets must manage multiple, parallel communications links to multiple initiators. Several identifiers exist in iSCSI to make this happen, including iSCSI Name, ISID (iSCSI session identifiers), TSID (target session identifier), CID (iSCSI connection identifier), and iSCSI portals. These will be examined in the next section.

iSCSI Names

iSCSI nodes have globally unique names that do not change when Ethernet adapters or IP addresses change. iSCSI supports two name formats as well as aliases. The first name format is the Extended Unique Identifier (EUI). An example of an EUI name might be eui.02004567A425678D. The second name format is the iSCSI Qualified Name (IQN). An example of an IQN name might be iqn.1998-01.com.vmware:tm-pod04-esx01-6129571c.

iSCSI Initiators and Targets

A storage network consists of two types of equipment: initiators and targets. Initiators, such as hosts, are data consumers. Targets, such as disk arrays or tape libraries, are data providers. In the context of vSphere, iSCSI initiators fall into three distinct categories. They can be software, hardware dependent, or hardware independent.

Software iSCSI Adapter

A software iSCSI adapter is VMware code built into the VMkernel. It enables your host to connect to the iSCSI storage device through standard network adapters. The software iSCSI adapter handles iSCSI processing while communicating with the network adapter. With the software iSCSI adapter, you can use iSCSI technology without purchasing specialized hardware.

Dependent Hardware iSCSI Adapter

This hardware iSCSI adapter depends on VMware networking and iSCSI configuration and management interfaces provided by VMware. This type of adapter can be a card that presents a standard network adapter and iSCSI offload functionality for the same port. The iSCSI offload functionality depends on the host’s network configuration to obtain the IP and MAC addresses, as well as other parameters used for iSCSI sessions. An example of a dependent adapter is the iSCSI licensed Broadcom 5709 NIC.

Independent Hardware iSCSI Adapter

This type of adapter implements its own networking and iSCSI configuration and management interfaces. An example of an independent hardware iSCSI adapter is a card that presents either iSCSI offload functionality only or iSCSI offload functionality and standard NIC functionality. The iSCSI offload functionality has independent configuration management that assigns the IP address, MAC address, and other parameters used for the iSCSI sessions. This section examines the features and issues connected with each of these technologies.

iSCSI Sessions and Connections

iSCSI initiators and targets use TCP to create relationships called sessions. These sessions are identified by iSCSI session IDs (ISIDs). Session IDs are not tied to the hardware and can persist across hardware swaps. The initiator sees one logical connection to the target, as shown in Figure 1.

Figure 1 — iSCSI Session

An iSCSI session might also contain multiple logical connections. From a vSphere host perspective, the sessions might also be thought of in terms of paths between the initiator and target. Having multiple connections per session enables the aggregation of bandwidth and can also provide load balancing. An example of multiple logical connections to the target (identified by connection IDs, or CIDs) is shown in Figure 2.

Figure 2 — Multiple connections per session

However, a vSphere host does not support multiple connections per session at this time.

iSCSI Portals — iSCSI nodes keep track of connections via portals, enabling separation between names and IP addresses. A portal manages an IP address and a TCP port number. Therefore, from an architectural perspective, sessions can be made up of multiple logical connections, and portals track connections via TCP/IP port/address, as shown in Figure 3.

Figure 3 — iSCSI Portals

In earlier versions of vSphere, the VMware iSCSI driver sent I/O over one portal only (a single session per connection), and only when that failed did the vSphere host try to use other portals in a Round Robin fashion.

In more recent versions, this behavior changed so that the driver now logs in to all the portals that are returned in the SendTarget discovery response. The reason for this enhancement was to enable support for new active/passive iSCSI arrays that required support. With active/passive arrays, the vSphere host storage stack was required to recognize each of the portals as different paths (targets) to effectively do multipath failovers.


NOTE: Not all iSCSI arrays behave like this. Some arrays still require an administrator to add additional paths manually.

iSCSI Implementation Options

VMware supports iSCSI with both software initiator and hardware initiator implementations. The software initiator iSCSI plugs into the vSphere host storage stack as a device driver in just the same way as other SCSI and FC drivers. This means that it implicitly supports the flagship file system of VMware, VMware vSphere VMFS, and also Raw Device Mappings (RDMs).

As previously mentioned, hardware iSCSI adapters fall into two categories – hardware dependent and hardware independent. Booting from iSCSI is also supported for both software and hardware iSCSI. Figure 4 shows the basic differences between an iSCSI hardware and iSCSI software implementation.

As of vSphere 6.5 iSCSI boot is also supported under UEFI boot mode. Note that the UEFI BIOS must have iSCSI support and that IPv6 is not supported at the time of writing.

Figure 4 — Software and hardware iSCSI initiators

With the hardware-initiator iSCSI implementation, the iSCSI HBA provides the translation from SCSI commands to an encapsulated format that can be sent over the network. A TCP offload engine (TOE) does this translation on the adapter.

The software-initiator iSCSI implementation leverages the VMkernel to perform the SCSI to IP translation and requires extra CPU cycles to perform this work. As mentioned previously, most enterprise-level networking chip sets offer TCP offload or checksum offloads, which greatly reduce CPU overhead.

Mixing iSCSI Options

Having both software iSCSI and hardware iSCSI enabled on the same host is supported. However, use of both software and hardware adapters on the same vSphere host to access the same target is not supported. One cannot have the host access the same target via hardware-dependent/hardware-independent/software iSCSI adapters for multi-pathing purposes. The reason for this support statement is that the different adapter types relate primarily to performance and management. For example, each adapter can generate different speeds.

Also, vSphere manages the software iSCSI adapters, but the hardware adapters have different management interfaces.

Finally, there can be differences in the offloading mechanism whereby the hardware adapters can offload by default, but for software iSCSI it will depend on the NIC. You might or might not have offload capabilities.

It’s similar in many ways to presenting the same LUN from the same array via iSCSI and FC. You can see it over multiple paths and you can send I/O to it over multiple paths, but it would not be supported due to the differences highlighted previously.

However, different hosts might access the same iSCSI LUN via different methods. For example, host 1 might access the LUN using the software iSCSI adapter of VMware, host 2 might access it via a hardware-dependent iSCSI adapter and host 3 might access it via a hardware-independent iSCSI adapter.

Networking Settings

Network design is the key to making sure iSCSI works. In a production environment, gigabit Ethernet is essential for software iSCSI. Hardware iSCSI, in a VMware Infrastructure environment, is implemented with dedicated HBAs.

iSCSI should be considered a local-area technology, not a wide-area technology, because of latency issues and security concerns. You should also separate iSCSI traffic from general traffic. Layer-2 VLANs are a particularly good way to implement this separation.
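
For example, on a standard vSwitch the iSCSI port group can be tagged with a dedicated VLAN (a sketch, assuming a port group named iSCSI and VLAN ID 100; the same VLAN must of course be carried on the physical switch ports and on the array side):

# esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=100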

Beware of oversubscription. Oversubscription occurs when more users are connected to a system than can be fully supported at the same time. Networks and servers are almost always designed with some amount of oversubscription, assuming that users do not all need the service simultaneously. If they do, delays are certain and outages are possible. Oversubscription is permissible on general-purpose LANs, but you should not use an oversubscribed configuration for iSCSI.

Best practice is to have a dedicated LAN for iSCSI traffic and not share the network with other network traffic. It is also best practice not to oversubscribe the dedicated LAN.

Finally, because iSCSI leverages the IP network, VMkernel NICs can be placed into teaming configurations. VMware’s recommendation however is to use port binding rather than NIC teaming. Port binding will be explained in detail later in this paper but suffice to say that with port binding, iSCSI can leverage VMkernel multipath capabilities such as failover on SCSI errors and Round Robin path policy for performance.

In the interest of completeness, both methods will be discussed. However, port binding is the recommended best practice.

VMkernel Network Configuration

A VMkernel network is required for IP storage and thus is required for iSCSI. A best practice would be to keep the iSCSI traffic separate from other networks, including the management and virtual machine networks.

IPv6 Supportability Statements

Starting with vSphere 6.0 support for IPv6 was introduced for both hardware iSCSI and software iSCSI adapters leveraging static and automatic assignment of IP addresses.

Throughput Options

There are a number of options available to improve iSCSI performance.

  1. 10GbE – This is an obvious option to begin with. If you can provide a larger pipe, the likelihood is that you will achieve greater throughput. Of course, if there is not enough I/O to fill a 1GbE connection, then a larger connection isn’t going to help you. But let’s assume that there are enough virtual machines and enough datastores for 10GbE to be beneficial.
  2. Jumbo frames – This feature can deliver additional throughput by increasing the size of the payload in each frame from a default MTU of 1,500 to an MTU of 9,000. However, great care and consideration must be used if you decide to implement it. All devices sitting in the I/O path (iSCSI target, physical switches, network interface cards and VMkernel ports) must be able to implement jumbo frames for this option to provide the full benefits. For example, if the MTU is not correctly set on the switches, the datastores might mount but I/O will fail. A common issue with jumbo-frame configurations is that the MTU value on the switch isn’t set correctly. In most cases, this must be higher than that of the hosts and storage, which are typically set to 9,000. Switches must be set higher, to 9,198 or 9,216 for example, to account for IP overhead. Refer to switch-vendor documentation as well as storage-vendor documentation before attempting to configure jumbo frames.
  3. Round Robin path policy – Round Robin uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths. This path policy can help improve I/O throughput. For active/passive storage arrays, only the paths to the active controller will be used in the Round Robin policy. For active/active storage arrays, all paths will be used in the Round Robin policy. For ALUA arrays (Asymmetric Logical Unit Access), Round Robin uses only the active/optimized (AO) paths. These are the paths to the disk through the managing controller. Active/non-optimized (ANO) paths to the disk through the non-managing controller are not used. Not all arrays support the Round Robin path policy. Refer to your storage-array vendor’s documentation for recommendations on using this Path Selection Policy (PSP). (A configuration sketch for options 2 and 3 follows this list.)
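
A hedged configuration sketch for options 2 and 3 above, assuming a standard vSwitch0, a VMkernel port vmk1, an iSCSI target at 192.168.13.10, and a device ID of the form naa.xxxxxxxx taken from the output of "esxcli storage nmp device list" (substitute your own values, and check the end-to-end MTU and your array vendor's PSP recommendation first):

Raise the MTU on the vSwitch and on the VMkernel port:

# esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# esxcli network ip interface set --interface-name=vmk1 --mtu=9000

Verify that a jumbo frame passes end to end without fragmentation (8972 bytes of payload plus 28 bytes of IP/ICMP headers equals 9000):

# vmkping -I vmk1 -d -s 8972 192.168.13.10

Set the Round Robin path selection policy on the iSCSI device:

# esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR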

Minimizing Latency

Because iSCSI on VMware uses TCP/IP to transfer I/O, latency can be a concern. To decrease latency, one should always try to minimize the number of hops between the storage and the vSphere host. Ideally, one would not route traffic between the vSphere host and the storage array, and both would coexist on the same subnet.

NOTE: If iSCSI port bindings are implemented for the purposes of multipathing, you could not route your iSCSI traffic prior to vSphere 6.5. With vSphere 6.5, routing of iSCSI traffic with port binding is supported.

Routing

A vSphere host has a single routing table for each TCP/IP stack. This imposes some limits on network communication for VMkernel interfaces using the same TCP/IP Stack. Consider a configuration that uses two Ethernet adapters with one VMkernel TCP/IP stack. One adapter is on the 10.17.1.1/24 IP network and the other on the 192.168.1.1/24 network. Assume that 10.17.1.253 is the address of the default gateway. The VMkernel can communicate with any servers reachable by routers that use the 10.17.1.253 gateway. It might not be able to talk to all servers on the 192.168 network unless both networks are on the same broadcast domain.


Another consequence of the single routing table affects one approach you might otherwise consider for balancing I/O. Consider a configuration in which you want to connect to iSCSI storage and also want to enable NFS mounts. It might seem that you can use one Ethernet adapter for iSCSI and a separate Ethernet adapter for NFS traffic to spread the I/O load. This approach does not work because of the way the VMkernel TCP/IP stack handles entries in the routing table.

For example, you might assign an IP address of 10.16.156.66 to the VMkernel adapter you want to use for NFS. The routing table then contains an entry for the 10.16.156.x network for this adapter. If you then set up a second adapter for iSCSI and assign it an IP address of 10.16.156.25, the routing table contains a new entry for the 10.16.156.x network for the second adapter. However, when the TCP/IP stack reads the routing table, it never reaches the second entry, because the first entry satisfies all routes to both adapters. Therefore, no traffic ever goes out on the iSCSI network, and all IP storage traffic goes out on the NFS network.

The fact that all 10.16.156.x traffic is routed on the NFS network causes two types of problems. First, you do not see any traffic on the second Ethernet adapter. Second, if you try to add trusted IP addresses both to iSCSI arrays and NFS servers, traffic to one or the other comes from the wrong IP address.

Using Static Routes or Setting a Gateway

As mentioned before, for vSphere hosts, the management network is on a VMkernel port and therefore uses the default VMkernel gateway. Only one VMkernel default gateway can be configured on a vSphere host per TCP/IP Stack. You can, however, add static routes from the command line or configure a gateway for each individual VMkernel port.

Setting a gateway on a per VMkernel port granular level has been introduced in vSphere 6.5 and allows for a bit more flexibility. The gateway for a VMkernel port can simply be defined using the vSphere Web Client during the creation of the VMkernel interface. It is also possible to configure it using esxcli.
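
As an illustration (a sketch with placeholder addresses, not taken from this paper), a static route and a per-VMkernel-port gateway can be configured with esxcli; the --gateway option on the interface command is available starting with vSphere 6.5:

Add a static route for a specific remote network:

# esxcli network ip route ipv4 add --gateway=192.168.100.1 --network=10.10.20.0/24

Assign a dedicated gateway to an individual VMkernel port (vSphere 6.5 and later):

# esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.100.20 --netmask=255.255.255.0 --type=static --gateway=192.168.100.1

List the resulting routing table:

# esxcli network ip route ipv4 list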

Note: At the time of writing the use of a custom TCP/IP Stack is not supported for iSCSI!

Availability Options – Multipathing or NIC Teaming

To achieve high availability, the local-area network (LAN) on which the iSCSI traffic runs must be designed with availability, downtime avoidance, isolation and no single point of failure (SPOF) in mind. Multiple administrators must be involved in designing for high availability. These are the virtualization administrator and the network administrator (and maybe the storage administrator). This section outlines these steps and investigates several options, which can be utilized to make your iSCSI datastores highly available.

In both cases that follow, at least two network interface cards are required. Whereas 1Gb interfaces will meet the requirements for a highly available network, 10Gb network adaptors will also improve performance.

NIC Teaming for Availability

A best practice for iSCSI is to avoid the vSphere feature called teaming (on the network interface cards) and instead use port binding. Port binding introduces multipathing for availability of access to the iSCSI targets and LUNs. If for some reason this is not suitable, then teaming might be an alternative.

If you plan to use teaming to increase the availability of your network access to the iSCSI storage array, you must turn off port security on the switch for the two ports on which the virtual IP address is shared. The purpose of this port security setting is to prevent spoofing of IP addresses. Thus, many network administrators enable this setting. However, if you do not change it, the port security setting prevents failover of the virtual IP from one switch port to another and teaming cannot fail over from one path to another. For most LAN switches, the port security is enabled on a port level and thus can be set on or off for each port.

iSCSI Multipathing via Port Binding for Availability

Another way to achieve availability is to create a multipath configuration. This is preferred over NIC teaming because it fails over I/O to alternate paths based on SCSI sense codes, not just network failures. Also, port binding gives administrators the opportunity to load-balance I/O over multiple paths to the storage device. Additional advantages of port binding will be discussed later in this paper.

Error Correction Digests

iSCSI header and data digests check the end-to-end, non-cryptographic data integrity beyond the integrity checks that other networking layers provide, such as TCP and Ethernet. They check the entire communication path, including all elements that can change the network-level traffic, such as routers, switches and proxies.

Enabling header and data digests does require additional processing for both the initiator and the target and can affect throughput and CPU usage.

Some systems can offload the iSCSI digest calculations to the network processor, thus reducing the impact on performance.

Flow Control

The general consensus from our storage partners is that hardware-based flow control is recommended for all network interfaces and switches.

iSCSI Port Binding Best Practices

With port binding, the iSCSI stack will not only load balance across all bound ports and fail over to other bound ports on link failure, but will also use SCSI sense code errors to trigger failover. When not using port binding, you are relying on vSphere and the network stack to determine the best path to use for iSCSI traffic. If paths are not clearly defined, other issues can arise, such as longer scan times and inconsistent connectivity. This isn't always the case but is something to consider.

Reliable storage should always be your priority. This overview describes one of the most reliable and common configurations for iSCSI. The key points of this setup are that the VMkernel ports used for iSCSI storage are on the same VLAN/subnet as your storage array, and that the storage array controllers are also on that same subnet/VLAN.

For simplicity, let’s say each vSphere host has two 25Gb NICs (NIC0, NIC1) and you are using the software iSCSI adapter. Software iSCSI adapters are the most common adapters used in vSphere environments and are capable of achieving near line rate. With modern CPUs, the minimal overhead of software iSCSI is easily handled. This also reduces complexity, as you no longer have to maintain HBAs and their firmware.

First, you must configure your virtual switches and VMkernel ports. There should be a port group and VMkernel port for every NIC to be used for iSCSI. The configuration is similar for a standard or distributed vSwitch; the difference is whether you configure it on each host or in vCenter. For this example, you will need to set up two port groups with specific teaming configurations, one per NIC. Each port group on a dVS, or each VMkernel port group on a standard vSwitch, must be set up as follows to support port binding.

For a distributed switch: You first create the portgroups, and then you create a VMkernel and associate it with a specific portgroup.

For a standard switch: You create the VMkernel first, the Portgroup is automatically created, then you go into each VMkernel’s portgroup properties and change the Teaming and failover settings.

When configuring Teaming and failover, you may have to check Override to change the setting from the vSwitch default. To be able to use port binding, there must be only one NIC active in the VMkernel/port group configuration. If another NIC is in Standby instead of Unused, you will not be able to bind that VMkernel port to iSCSI.

Example port groups: iSCSI-P1 and iSCSI-P2, each with a single active uplink, the other uplink set to Unused, and load balancing set to "Use explicit failover order".
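
A minimal sketch of this configuration on a standard vSwitch, assuming the port groups iSCSI-P1 and iSCSI-P2 with VMkernel ports vmk1 and vmk2 have already been created on vSwitch0, the uplinks are vmnic0 and vmnic1, and the software iSCSI adapter enumerates as vmhba65 (all of these names are examples, not taken from the paper):

Pin each port group to a single active uplink (the other uplink becomes unused):

# esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-P1 --active-uplinks=vmnic0

# esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-P2 --active-uplinks=vmnic1

Bind both VMkernel ports to the software iSCSI adapter and verify the bindings:

# esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1

# esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2

# esxcli iscsi networkportal list --adapter=vmhba65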

Source
