DRBD (original)

The DRBD kernel driver presents virtual block devices to the system. It is an important building block of DRBD. It reads data from, and writes data to, optional local backing devices.
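For example, once a resource is up and the local node has been promoted to Primary, the virtual block device appears under /dev/drbdX and can be used like any other block device. The minor number and mount point below are placeholders:

    mkfs.ext4 /dev/drbd0        # create a filesystem on the DRBD device
    mount /dev/drbd0 /mnt/data  # use it like any other block device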

The DRBD kernel driver mirrors data writes to one or more peers. In synchronous mode it signals completion of a write request only after it has received completion events from the local backing storage device and from the peer(s).
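In DRBD's configuration language this synchronous mode corresponds to replication protocol C, selected in a resource's net section. A minimal sketch; the resource name r0 is only an example:

    resource r0 {
      net {
        protocol C;   # fully synchronous: a write completes only after the
                      # local disk and the peer(s) have acknowledged it
      }
      # device, disk, and host sections omitted
    }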

Within the DRBD kernel driver the data path is very efficient: no user space components are involved, and read requests can be served locally without causing any network traffic.

drbdadm processes declarative configuration files. These files are identical on all nodes of an installation; drbdadm extracts the information relevant to the host it is invoked on.
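A minimal resource file might look like the sketch below; host names, device paths, and addresses are placeholders. The same file is copied to every node, and drbdadm picks the "on" section that matches the local host name:

    resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;

      on alice {
        address 10.1.1.31:7789;
      }
      on bob {
        address 10.1.1.32:7789;
      }
    }

Typical drbdadm calls then operate on the resource by name, for example drbdadm up r0 to configure it on the local host and drbdadm status r0 to show its current state.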

drbdsetup is the low-level tool that interacts with the DRBD kernel driver. It manages the DRBD objects (resources, connections, devices, and paths), can modify all of their properties, and can dump the kernel driver's active configuration. It also displays status and status updates.
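A few common invocations illustrate this; in day-to-day use they are normally issued indirectly through drbdadm, and the resource name r0 is again just an example:

    drbdsetup show r0       # dump the active configuration known to the kernel driver
    drbdsetup status r0     # one-shot status of the resource
    drbdsetup events2 r0    # continuous stream of status updates, one line per change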

drbdmeta prepares the meta-data on block devices before they can be used for DRBD. You can also use it to dump and inspect this meta-data. It is comparable to mkfs or pvcreate.
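In practice drbdmeta is usually invoked through drbdadm, which passes the backing device and meta-data location along. A sketch with the hypothetical resource r0:

    drbdadm create-md r0    # initialize DRBD meta-data on the backing device (calls drbdmeta)
    drbdadm dump-md r0      # print the on-disk meta-data in a human-readable form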

DRBD® has a command line interface (CLI) and can be used to manage large DRBD clusters independently of any cloud, virtualization, or container platform.

DRBD® has an abstraction for network transport implementations.

TCP/IP is the natural choice. It is the protocol of the Internet, and it usually runs on top of Ethernet hardware (NICs and switches) in the data center. While it is the lingua franca of networking, it is aging and is not the best choice when the highest possible performance is required.

A younger alternative to TCP/IP is RDMA. It requires RDMA-capable NICs. It can run over InfiniBand networks, which come with their own cables and switches, over enhanced Ethernet (DCB), or on top of TCP/IP via an iWARP NIC. Its purpose is to increase performance while reducing the load on the CPUs of your machines.
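With DRBD 9 the transport is selected in the configuration. A minimal sketch, assuming the RDMA transport module is installed and the resource is again called r0:

    resource r0 {
      net {
        transport "rdma";   # default is "tcp"
      }
      # device, disk, and host sections omitted
    }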

Long-distance links often exhibit varying bandwidth, due to the side effects of other traffic sharing parts of the path, and they usually have higher latency than LANs.

Whether because of peaks in DRBD's write load or a temporary drop in the available link bandwidth, the link bandwidth can fall below what is needed to mirror the data stream.

Disaster Recovery's main task is to mitigate these issues; otherwise DRBD would slow down the writing application by delivering I/O completion events later. Disaster Recovery does that by buffering the data.
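For comparison, DRBD's own asynchronous replication protocol (protocol A) already decouples the application from the link to some degree: a write is reported as complete once it has reached the local disk and the local TCP send buffer. The buffering described above extends this with a much larger buffer (in LINBIT's product line this is typically DRBD Proxy, which is an assumption, not stated here). A sketch of protocol A, with r0 again a placeholder:

    resource r0 {
      net {
        protocol A;   # asynchronous: a write completes once it has reached the
                      # local disk and the local TCP send buffer
      }
      # device, disk, and host sections omitted
    }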