MPI carriers

The MPI carriers are intended as a high-performance alternative to YARP's standard carriers (shared memory, TCP, mcast), while also adding the possibility of using Infiniband (a high-speed node interconnect).

However, they do not allow process-local connections.

Note
They are still a work in progress and should be considered experimental. Please report any problems.

Using the MPI carriers

If you have compiled one or both MPI carriers (see Compiling YARP with MPI support), you can connect as usual:

yarp connect /port1 /port2 mpi
# or
yarp connect /port1 /port2 bcast
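The same connections can also be requested from C++ code by passing the carrier name to the standard yarp::os::Network API. A minimal sketch (the port names /port1 and /port2 are placeholders and are assumed to be open elsewhere):

#include <yarp/os/Network.h>

int main()
{
    // initialize the YARP network
    yarp::os::Network yarp;

    // point-to-point MPI connection
    yarp::os::Network::connect("/port1", "/port2", "mpi");
    // or: collective broadcast connection
    yarp::os::Network::connect("/port1", "/port2", "bcast");

    return 0;
}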
mpi
  • The 'mpi' carrier uses point-to-point communication (like tcp), i.e. for every receiver there is a sender thread which sends a copy of the data. This leads to a linear transmission time, O(n).
  • It supports replies (see the sketch after this list).
  • The disconnect works as usual
    yarp disconnect /port1 /port2
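Since replies are supported, a standard request/reply exchange works over an 'mpi' connection. A minimal sketch (port names and message content are placeholders; it assumes /port2 is open on the receiving side and answers the request):

#include <yarp/os/Bottle.h>
#include <yarp/os/Network.h>
#include <yarp/os/Port.h>

int main()
{
    yarp::os::Network yarp;

    yarp::os::Port port;
    port.open("/port1");
    // assumes 'yarp connect /port1 /port2 mpi' has been issued

    yarp::os::Bottle request;
    yarp::os::Bottle reply;
    request.addString("ping");
    port.write(request, reply);   // blocks until the receiver replies

    port.close();
    return 0;
}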
bcast
  • The 'bcast' carrier uses collective broadcasting (similar to mcast, but reliable), i.e. the data is sent only once by the sender and passed on by the receivers as needed. This leads to a decreased transmission time, roughly O(log_2(n)); a generic MPI sketch of this idea follows the list.
  • It does not support replies.
  • As of SVN rev. 8752, the disconnect should work as usual.
  • In older versions the disconnect does not work properly: it is not possible to disconnect only one receiver, and in order to disconnect all receivers at once you need to disconnect the connection that was created first.
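For illustration, the collective operation behind this idea is MPI_Bcast: the root process hands its buffer to the library once, and the library distributes it to all other ranks (typically along a tree, hence the logarithmic cost). This is a generic MPI sketch, not the carrier's actual source:

#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<char> buffer(1024);
    if (rank == 0) {
        buffer.assign(buffer.size(), 'x');   // the sender fills the buffer once
    }

    // after this single call every rank holds the same data
    MPI_Bcast(buffer.data(), static_cast<int>(buffer.size()), MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}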

What is MPI?

Quote from the MPI-1 standard: MPI-1 provides an interface that allows processes in a parallel program to communicate with one another. MPI-1 specifies neither how the processes are created, nor how they establish communication. Moreover, an MPI-1 application is static; that is, no processes can be added to or deleted from an application after it has been started.

In the specification of MPI-2 (http://www.mpi-forum.org/docs/mpi-20-html/node89.htm#Node89), these issues were addressed with the process management model. So MPI-2 can be used as an interface for port communication in YARP, where processes need to establish a connection at runtime.
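The relevant MPI-2 primitives are MPI_Open_port, MPI_Comm_accept and MPI_Comm_connect. The following is an illustrative sketch (not the carrier's actual code) of two independently started processes establishing a connection at runtime; the port name has to be exchanged out of band, which in YARP's case is presumably handled by the normal connection handshake:

#include <mpi.h>
#include <cstring>
#include <iostream>

int main(int argc, char* argv[])
{
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm = MPI_COMM_NULL;

    if (argc > 1 && std::strcmp(argv[1], "server") == 0) {
        // receiver side: open a port and wait for a peer
        MPI_Open_port(MPI_INFO_NULL, port_name);
        std::cout << "port name: " << port_name << std::endl;
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    } else if (argc > 2) {
        // sender side: connect using the port name received out of band
        std::strncpy(port_name, argv[2], MPI_MAX_PORT_NAME);
        MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    }

    // ... communicate over 'intercomm', then MPI_Comm_disconnect(&intercomm) ...

    MPI_Finalize();
    return 0;
}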

Benefits of using MPI

  • It has become a de facto standard for communication among processes that model a parallel program running on a distributed memory system
  • It is widely used and has many open-source implementations
  • Due to the tight binding between processes, it allows for high-performance communication. According to initial bandwidth tests, it is more efficient than YARP's standard TCP implementation using ACE
  • It offers different channels/protocols for communication: shared memory, Ethernet sockets, Infiniband (implementation dependent)
  • Broadcast functionality: increased efficiency in the one-sender-multiple-receivers scenario

Side effects of using MPI

  • MPI creates a tight binding between processes. Therefore, terminating a process without a proper 'yarp disconnect' can (and probably will) crash all other processes to which it was connected.
  • It induces a high load on the processor due to busy polling (at least for broadcasting). This can hurt performance in the oversubscription case (more compute processes than cores).

Compiling YARP with MPI support

Enable the creation of one or both carriers in CMake:

ENABLE_yarpcar_mpi_carrier => ON
ENABLE_yarpcar_mpibcast_carrier => ON

If you have MPI installed in a standard location, the CMake configure step should find all the relevant information. Otherwise, explicitly specify the location of your mpiexec:

MPIEXEC => /path/to/mpi_binaries/mpiexec

and the other CMake variables should be deduced from it. But in this case you will probably either need to start your program explicitly with MPI (i.e. mpiexec your_program) or change the $PATH variable so that your custom mpiexec is the first one found.

Finally, configure and recompile.

Requirements
There are two requirements for the MPI implementation you would like to use:
  • It needs to support the MPI-2 standard. This should be fulfilled by any recent version of the major implementations.
    If not, it will probably fail with an 'unknown function' error for MPI_Open_port (this has not been tested).
  • It needs to provide the highest level of thread-safety support, i.e. MPI_THREAD_MULTIPLE. This may not be available in all implementations, or it may need to be enabled when the MPI library is compiled.
    If it is not available, the connection attempt will fail with 'MpiComm: MPI implementation doesn't provide required thread safety'. A standalone check is sketched after this list.
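A quick standalone check (not part of YARP) of the thread level your MPI library actually grants at runtime:

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    // the MPI carriers require MPI_THREAD_MULTIPLE
    std::cout << "requested level " << MPI_THREAD_MULTIPLE
              << ", provided level " << provided << std::endl;

    MPI_Finalize();
    return (provided == MPI_THREAD_MULTIPLE) ? 0 : 1;
}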

Open source MPI implementations

OpenMPI
http://www.open-mpi.org/
If you use precompiled binaries (usually available for all major Linux distributions), please check availability of thread safety with
ompi_info | grep -i thread
# => Thread support: posix (mpi: yes/no)
Compiler flag:
--enable-mpi-threads    Enable threads for MPI applications (default: disabled)
MPICH2
http://www.mcs.anl.gov/research/projects/mpich2/index.php
Compiler flag:
--enable-threads    Build MPICH2 with support for multi-threaded applications. Only the sock and nemesis channels support MPI_THREAD_MULTIPLE.
MVAPICH2
http://mvapich.cse.ohio-state.edu/overview/mvapich2/
Compiler flag (use the default):
--enable-threads=level    Control the level of thread support in the MPICH implementation. The following levels are supported:
    default - choose the thread level at runtime based on the parameters passed to MPI_Init_thread (default)
But again, only the ch3:sock and ch3:nemesis channels support MPI_THREAD_MULTIPLE.

When running an application that should use MPI connections with MVAPICH2, you need to enable dynamic process management with an environment variable:
export MV2_SUPPORT_DPM=1