# Consensus Labeled Random Finite Set Filtering for Distributed Multi-Object Tracking

###### Abstract

This paper addresses distributed multi-object tracking over a network of heterogeneous and geographically dispersed nodes with sensing, communication and processing capabilities. The main contribution is an approach to distributed multi-object estimation based on labeled Random Finite Sets (RFSs) and dynamic Bayesian inference, which enables the development of two novel consensus tracking filters, namely a Consensus Marginalized δ-Generalized Labeled Multi-Bernoulli (Mδ-GLMB) and a Consensus Labeled Multi-Bernoulli (LMB) tracking filter. The proposed algorithms provide fully distributed, scalable and computationally efficient solutions for multi-object tracking. Simulation experiments via Gaussian mixture implementations confirm the effectiveness of the proposed approach on challenging scenarios.

## I Introduction

Multi-Object Tracking (MOT) involves the on-line estimation of an unknown and time-varying number of objects and their individual trajectories from sensor data [1, 2, 3, 4, 5, 6, 7, 8]. In a multiple object scenario, the sensor observations are affected by misdetection (e.g., occlusions, low radar cross section, etc.) and false alarms (e.g., observations from the environment, clutter, etc.), which is further compounded by association uncertainty, i.e. it is not known which object generated which measurement. Detection uncertainty, clutter, and data association are thus the key challenges in MOT. Numerous multi-object tracking algorithms have been developed in the literature and most of these fall under the three major paradigms of Multiple Hypothesis Tracking (MHT) [9, 6], Joint Probabilistic Data Association (JPDA) [4], and Random Finite Set (RFS) [7].

Recent advances in wireless sensor technology inspired the development of large sensor networks consisting of radio-interconnected nodes (or agents) with sensing, communication and processing capabilities [10]. The main goal of such a net-centric sensing paradigm is to provide a more complete picture of the environment by combining information from many individual nodes (usually with limited observability) using a suitable information fusion procedure, in a way that is scalable (with the number of nodes), flexible and reliable (i.e. resilient to failures) [10]. Reaping the benefits of a sensor network calls for distributed architectures and algorithms in which individual agents can operate with neither central fusion node nor knowledge of the information flow in the network [11].

The wide applicability of MOT together with the emergence of net-centric sensing motivate the investigation of Distributed Multi-Object Tracking (DMOT). Scalability with respect to network size, lack of a fusion center as well as knowledge of the network topology call for a consensus approach to achieve a collective information fusion over the network [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. In fact, consensus has recently emerged as a powerful tool for distributed computation over networks [12, 11], including parameter/state estimation [13, 14, 15, 16, 17, 18, 19, 20]. Furthermore, a robust (possibly suboptimal) information fusion procedure is needed to combat the data incest problem that causes double counting of information. To this end, Chernoff fusion [21, 22], also known as Generalized Covariance Intersection [23, 24] (that encompasses Covariance Intersection [25, 26]) or Kullback-Leibler average [20, 27], is adopted to fuse multi-object densities computed by various nodes of the network. Furthermore, it was proven in [26] for the single-object case, and subsequently in [28] for the multi-object case, that Chernoff fusion is inherently immune to the double counting of information, thereby justifying its use in a distributed setting wherein the nodes operate without knowledge about their common information.

While the challenges in MOT are further compounded in a distributed architecture, the notion of multi-object probability density in the RFS formulation enables consensus for distributed state estimation to be directly applied to multi-object systems [29, 27, 28, 30, 31]. Indeed, a robust and tractable multi-object fusion solution based on Kullback-Leibler averaging, together with the Consensus Cardinalized Probability Hypothesis Density (CPHD) filter have been proposed in [27]. However, this RFS-based filtering solution does not provide estimates of the object trajectories and suffers from the so-called “spooky effect” [32]. Note that one of the original intents of the RFS formulation is to propagate the distribution of the set of tracks via the use of labels, see [5, p. 135, pp. 196-197], [7, p. 506]. However, this capability was overshadowed by the popularity of unlabeled RFS-based filters such as PHD, CPHD, and multi-Bernoulli [35, 36, 7, 33, 34].

This paper proposes the first consensus DMOT algorithms based on the recently introduced labeled RFS formulation [37]. This formulation admits a tractable analytical MOT solution called the δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter [38] that does not suffer from the “spooky effect” and, more importantly, outputs trajectories of objects. Furthermore, efficient approximations that preserve key summary statistics, such as the Marginalized δ-GLMB (Mδ-GLMB) and the Labeled Multi-Bernoulli (LMB) filters, have also been developed [39, 40]. In this paper, it is shown that the Mδ-GLMB and LMB densities are algebraically closed under Kullback-Leibler averaging, and novel consensus DMOT Mδ-GLMB and LMB filters are developed.

The rest of the paper is organized as follows. Section II presents notation, the network model, and background on Bayesian filtering, RFSs, and distributed estimation. Section III presents the Kullback-Leibler average based fusion rules for Mδ-GLMB and LMB densities. Section IV describes the multi-object Bayesian recursion with labeled RFSs and presents the novel Consensus Mδ-GLMB and Consensus LMB filters with Gaussian Mixture (GM) implementation. Section V provides a performance evaluation of the proposed DMOT filters via simulated case studies. Concluding remarks and perspectives for future work are given in Section VI.

## II Background and Problem Formulation

### II-A Notation

Throughout the paper, we use the standard inner product notation $\langle f,g\rangle \triangleq \int f(x)\,g(x)\,dx$, and the multi-object exponential notation $h^X \triangleq \prod_{x\in X} h(x)$, where $h$ is a real-valued function, with $h^{\emptyset}=1$ by convention [7]. The cardinality (or number of elements) of a finite set $X$ is denoted by $|X|$. Given a set $S$, $1_S(\cdot)$ denotes the indicator function of $S$, $\mathcal{F}(S)$ the class of finite subsets of $S$, and $S^n$ the $n$-fold Cartesian product of $S$ with the convention $S^0 = \{\emptyset\}$. We introduce a generalization of the Kronecker delta that takes arguments such as sets, vectors, etc., i.e.

$$\delta_Y(X) \triangleq \begin{cases} 1, & \text{if } X = Y, \\ 0, & \text{otherwise.} \end{cases}$$

A Gaussian Probability Density Function (PDF) with mean $m$ and covariance $P$ is denoted by $\mathcal{N}(\cdot\,; m, P)$. Vectors are represented by lowercase letters, e.g. $x$, $\mathbf{x}$, while finite sets are represented by uppercase letters, e.g. $X$, $\mathbf{X}$; spaces are represented by blackboard bold letters, e.g. $\mathbb{X}$, $\mathbb{L}$, $\mathbb{N}$.
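As a concrete illustration of these conventions, a minimal Python sketch (the helper names are ours, not the paper's):

```python
import numpy as np

# Multi-object exponential h^X = prod over x in X of h(x); h^emptyset = 1.
def multiobject_exp(h, X):
    result = 1.0
    for x in X:
        result *= h(x)
    return result

# Generalized Kronecker delta: 1 if the two arguments (sets, vectors, ...) match.
def kron_delta(Y, X):
    return 1.0 if X == Y else 0.0

h = lambda x: np.exp(-x)                        # an example real-valued function
X = frozenset([0.0, 1.0, 2.0])

empty_case = multiobject_exp(h, frozenset())    # 1.0 by the h^emptyset convention
full_case = multiobject_exp(h, X)               # exp(-(0 + 1 + 2)) = exp(-3)
```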

### II-B Network Model

The network considered in this work consists of heterogeneous and geographically dispersed nodes having processing, communication and sensing capabilities as depicted in Fig. 1. From a mathematical viewpoint, the network is described by a directed graph $\mathcal{G} = (\mathcal{N}, \mathcal{A})$, where $\mathcal{N}$ is the set of nodes and $\mathcal{A} \subseteq \mathcal{N}\times\mathcal{N}$ the set of arcs, representing links (or connections). In particular, $(j,i)\in\mathcal{A}$ if node $i$ can receive data from node $j$. For each node $i\in\mathcal{N}$, $\mathcal{N}^{(i)} \triangleq \{j : (j,i)\in\mathcal{A}\}\cup\{i\}$ denotes the set of in-neighbours of node $i$ (including $i$ itself), i.e. the set of nodes from which node $i$ can receive data.

Each node performs local computations, exchanges data with the neighbors and gathers measurements (e.g., angles, distances, Doppler shifts, etc.) of objects present in the surrounding environment (or surveillance area). The network of interest has no central fusion node and its agents operate without knowledge of the network topology.

We are interested in networked estimation algorithms that are scalable with respect to network size, and permit each node to operate without knowledge of the dependence between its own information and the information from other nodes.

### II-C Distributed Single-Object Filtering and Fusion

For single-object filtering, the problem of propagating information throughout a sensor network with neither central fusion node nor knowledge of the network topology can be formalized as follows.

The system model is described by the following Markov transition density and measurement likelihood functions:

$$x_k \sim \varphi_{k|k-1}(\,\cdot\,|\,x_{k-1}), \tag{3}$$

$$y_k^{(i)} \sim g_k^{(i)}(\,\cdot\,|\,x_k), \qquad i\in\mathcal{N}. \tag{4}$$

The measurement at time $k$, $y_k \triangleq (y_k^{(i)})_{i\in\mathcal{N}}$, is a vector of measurements from all sensors, which are assumed to be conditionally independent given the state. Hence the likelihood function of the measurement is given by

$$g_k(y_k|x_k) = \prod_{i\in\mathcal{N}} g_k^{(i)}(y_k^{(i)}|x_k). \tag{5}$$

Let $p_{k|k-1}(x_k|y_{1:k-1})$ denote the prediction density of the state at time $k$ given the measurements up to time $k-1$, and similarly $p_k(x_k|y_{1:k})$ the posterior density of the state at time $k$ given the measurements up to time $k$. For simplicity we omit the dependence on the measurements and write the prediction and posterior densities respectively as $p_{k|k-1}$ and $p_k$.

In a centralized setting, i.e. when the central node has access to all measurements, the solution of the state estimation problem is given by the Bayesian filtering recursion starting from a suitable initial prior $p_0$:

$$p_{k|k-1}(x) = \int \varphi_{k|k-1}(x|\zeta)\, p_{k-1}(\zeta)\, d\zeta, \tag{6}$$

$$p_k(x) = \frac{g_k(y_k|x)\, p_{k|k-1}(x)}{\int g_k(y_k|\zeta)\, p_{k|k-1}(\zeta)\, d\zeta}. \tag{7}$$
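On a discretized state space, (6)-(7) reduce to matrix-vector operations; a minimal numerical sketch, with illustrative model parameters:

```python
import numpy as np

# Discretized state space: the densities p_{k-1}, p_{k|k-1}, p_k become
# probability vectors, and (6)-(7) become matrix-vector operations.
xs = np.linspace(-5.0, 5.0, 201)
dx = xs[1] - xs[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Markov transition density phi(x | zeta): a random walk with std 0.5.
Phi = gauss(xs[:, None], xs[None, :], 0.5)    # Phi[i, j] = phi(xs[i] | xs[j])

prior = gauss(xs, 0.0, 1.0)                   # p_{k-1}
pred = Phi @ prior * dx                       # (6): Chapman-Kolmogorov prediction
lik = gauss(1.2, xs, 0.8)                     # g(y_k = 1.2 | x)
post = lik * pred / np.sum(lik * pred * dx)   # (7): Bayes update
```

The posterior concentrates between the prior mean (0) and the measurement (1.2), as expected from the relative variances.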

On the other hand, in a distributed setting each agent $i$ updates its own posterior density by appropriately fusing the available information provided by the subnetwork $\mathcal{N}^{(i)}$ (including node $i$ itself). Thus, central to networked estimation is the capability to fuse the posterior densities provided by different nodes in a mathematically consistent manner. In this respect, the information-theoretic notion of Kullback-Leibler Average (KLA) provides a consistent way of fusing PDFs [20].

Given the PDFs $p^{(i)}$, $i\in\mathcal{N}$, and normalized non-negative weights $\omega^{(i)}$, $i\in\mathcal{N}$ (i.e. non-negative weights that sum up to $1$), the weighted Kullback-Leibler Average (KLA) $\bar{p}$ is defined as

$$\bar{p} \triangleq \arg\min_{p} \sum_{i\in\mathcal{N}} \omega^{(i)}\, D_{KL}\!\left(p\,\|\,p^{(i)}\right), \tag{8}$$

where

$$D_{KL}(p\,\|\,q) \triangleq \int p(x)\, \log\frac{p(x)}{q(x)}\, dx \tag{9}$$

is the Kullback-Leibler Divergence (KLD) between $p$ and $q$. In [20] it is shown that the weighted KLA in (8) is the normalized weighted geometric mean of the PDFs, i.e.

$$\bar{p}(x) = \frac{\prod_{i\in\mathcal{N}} \left[p^{(i)}(x)\right]^{\omega^{(i)}}}{\int \prod_{i\in\mathcal{N}} \left[p^{(i)}(\zeta)\right]^{\omega^{(i)}}\, d\zeta}. \tag{10}$$

Indeed, (10) defines the well-known Chernoff fusion rule [21, 22]. Note that in the unweighted KLA $\omega^{(i)} = 1/|\mathcal{N}|$, i.e.

$$\bar{p}(x) = \frac{\prod_{i\in\mathcal{N}} \left[p^{(i)}(x)\right]^{1/|\mathcal{N}|}}{\int \prod_{i\in\mathcal{N}} \left[p^{(i)}(\zeta)\right]^{1/|\mathcal{N}|}\, d\zeta}. \tag{11}$$

###### Remark 1.

The weighted KLA of Gaussians is also Gaussian [20]. More precisely, let $(\Omega^{(i)}, q^{(i)}) \triangleq \left((P^{(i)})^{-1}, (P^{(i)})^{-1} m^{(i)}\right)$ denote the information (matrix-vector) pair associated with $\mathcal{N}(\cdot\,; m^{(i)}, P^{(i)})$; then the information pair $(\bar{\Omega}, \bar{q})$ of the KLA is the weighted arithmetic mean of the information pairs of the individual Gaussians, i.e. $(\bar{\Omega}, \bar{q}) = \sum_{i\in\mathcal{N}} \omega^{(i)}\, (\Omega^{(i)}, q^{(i)})$. This is indeed the well-known Covariance Intersection fusion rule [25].
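A minimal sketch of this rule (function and variable names are ours):

```python
import numpy as np

# KLA of Gaussians = Covariance Intersection: the fused information pair
# (P^-1, P^-1 m) is the weighted arithmetic mean of the nodes' pairs.
def kla_gaussians(means, covs, weights):
    infos = [np.linalg.inv(P) for P in covs]              # information matrices
    info_vecs = [O @ m for O, m in zip(infos, means)]     # information vectors
    O_bar = sum(w * O for w, O in zip(weights, infos))    # weighted arithmetic means
    q_bar = sum(w * q for w, q in zip(weights, info_vecs))
    P_bar = np.linalg.inv(O_bar)
    return P_bar @ q_bar, P_bar                           # fused mean and covariance

# Fuse two 2-D Gaussians with equal weights.
m_bar, P_bar = kla_gaussians(
    [np.array([0.0, 0.0]), np.array([2.0, 0.0])],
    [np.eye(2), 4.0 * np.eye(2)],
    [0.5, 0.5],
)
```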

Having reviewed the fusion of PDFs via KLA, we next outline distributed computation of the KLA via consensus.

### II-D Consensus on PDFs

The idea behind consensus is to reach a collective agreement (over the entire network) by allowing each node to iteratively update and pass its local information to neighbouring nodes [11]. Such repeated local operations provide a mechanism for propagating information throughout the whole network. In the context of this paper, consensus is used (at each time step $k$) to perform a distributed computation of the collective unweighted KLA of the posterior densities $p_k^{(i)}$ over all nodes $i\in\mathcal{N}$.

Given the consensus weights $\omega^{(i,j)} \geq 0$ relating agent $i$ to its in-neighbour nodes $j\in\mathcal{N}^{(i)}$, satisfying $\sum_{j\in\mathcal{N}^{(i)}} \omega^{(i,j)} = 1$, suppose that, at time $k$, each agent $i$ starts with the posterior $p_k^{(i)}$ as the initial iterate $p_{k,0}^{(i)}$, and computes the $(n+1)$-th consensus iterate by

$$p_{k,n+1}^{(i)} = \bigoplus_{j\in\mathcal{N}^{(i)}} \left(\omega^{(i,j)} \odot p_{k,n}^{(j)}\right), \tag{12}$$

where $\omega \odot p \triangleq p^{\omega} / \int p^{\omega}(\zeta)\, d\zeta$ denotes the weighting (exponentiation and normalization) operator and $p \oplus q \triangleq p\,q / \int p(\zeta)\, q(\zeta)\, d\zeta$ the fusion (normalized product) operator [20]. Then, using the properties of the operators $\oplus$ and $\odot$, it can be shown that [20]

$$p_{k,n}^{(i)} = \bigoplus_{j\in\mathcal{N}} \left(\omega_n^{(i,j)} \odot p_{k,0}^{(j)}\right), \tag{13}$$

where $\omega_n^{(i,j)}$ is the $(i,j)$-th entry of the square matrix $\Pi^n$, and $\Pi$ is the consensus matrix with $(i,j)$-th entry given by $\omega^{(i,j)}$ (it is understood that $p_{k,0}^{(j)}$ is omitted from the fusion whenever $\omega_n^{(i,j)} = 0$). Notice that (13) expresses the local PDF in each node $i$ at consensus iteration $n$ as a weighted geometric mean of the initial local PDFs of all nodes. More importantly, it was shown in [11, 12] that if the consensus matrix $\Pi$ is primitive (i.e. with all non-negative entries and such that there exists an integer $m$ such that $\Pi^m$ has all positive entries) and doubly stochastic (all rows and columns sum up to 1), then for any $i,j\in\mathcal{N}$

$$\lim_{n\to\infty} \omega_n^{(i,j)} = \frac{1}{|\mathcal{N}|}. \tag{14}$$

In other words, if the consensus matrix is primitive and doubly stochastic, then the consensus iterate of each node approaches the collective unweighted KLA of the posterior densities over the entire network as the number of consensus steps tends to infinity [16, 20].

A necessary condition for $\Pi$ to be primitive [16] is that the associated network be strongly connected, i.e. for any pair of nodes $i,j\in\mathcal{N}$, there exists a directed path from $i$ to $j$ and vice versa. This condition is also sufficient when $\omega^{(i,j)} > 0$ for all $i\in\mathcal{N}$ and $j\in\mathcal{N}^{(i)}$. Further, when $\mathcal{G}$ is undirected (i.e. whenever node $i$ receives information from node $j$, it also sends information to $j$), choosing the Metropolis weights

$$\omega^{(i,j)} = \frac{1}{1 + \max\left\{|\mathcal{N}^{(i)}|, |\mathcal{N}^{(j)}|\right\}}, \quad j\in\mathcal{N}^{(i)},\ j\neq i; \qquad \omega^{(i,i)} = 1 - \sum_{j\in\mathcal{N}^{(i)},\, j\neq i} \omega^{(i,j)}, \tag{15}$$

guarantees that the resulting consensus matrix $\Pi$ is doubly stochastic.
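A minimal numerical check of this convergence (the 4-node ring topology and initial values are illustrative):

```python
import numpy as np

# Metropolis weights on an undirected graph yield a doubly stochastic,
# primitive consensus matrix; repeated averaging converges to the global mean.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-node ring (hypothetical topology)
N = 4
deg = np.zeros(N, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

Pi = np.zeros((N, N))
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))       # Metropolis weight for edge (i, j)
    Pi[i, j] = Pi[j, i] = w
for i in range(N):
    Pi[i, i] = 1.0 - Pi[i].sum()              # self-weight completes each row

theta = np.array([1.0, 5.0, 3.0, 7.0])        # local estimates, global mean = 4
for _ in range(100):
    theta = Pi @ theta                        # consensus iteration
```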

In most tracking applications, the number of objects is unknown and varies with time, while measurements are subject to misdetection, false alarms and association uncertainty. This more general setting can be conveniently addressed by a rigorous mathematical framework for dealing with multiple objects. Such a framework is reviewed next, followed by the extension of the consensus methodology to the multi-object realm.

### II-E Labeled Random Finite Sets

The RFS formulation of MOT provides the notion of multi-object probability density (for an unknown number of objects) [42] that conceptually allows direct extension of the consensus methodology to multi-object systems. Such a notion of multi-object probability density is not available in the MHT or JPDA approaches [9, 1, 2, 4, 6].

From a Bayesian estimation viewpoint the multi-object state is naturally represented as a finite set, and subsequently modeled as an RFS [34]. In this paper, unless otherwise stated we use the Finite Set STatistics (FISST) notion of integration/density to characterize RFSs [7]. While not a probability density [7], the FISST density is equivalent to a probability density relative to an unnormalized distribution of a Poisson RFS [42].

Let $\mathbb{L}$ be a discrete space, and $\mathcal{L}: \mathbb{X}\times\mathbb{L} \to \mathbb{L}$ be the projection defined by $\mathcal{L}(x,\ell) = \ell$. Then $\mathcal{L}(\mathbf{x})$ is called the *label* of the point $\mathbf{x}\in\mathbb{X}\times\mathbb{L}$, and a finite subset $\mathbf{X}$ of $\mathbb{X}\times\mathbb{L}$ is said to have *distinct labels* if and only if $\mathbf{X}$ and its set of labels $\mathcal{L}(\mathbf{X}) \triangleq \{\mathcal{L}(\mathbf{x}) : \mathbf{x}\in\mathbf{X}\}$ have the same cardinality. We define the *distinct label indicator* of $\mathbf{X}$ as $\Delta(\mathbf{X}) \triangleq \delta_{|\mathbf{X}|}\!\left(|\mathcal{L}(\mathbf{X})|\right)$.

A *labeled RFS* is an RFS over $\mathbb{X}\times\mathbb{L}$ such that each realization has distinct labels. These distinct labels provide the means to identify trajectories or tracks of individual objects, since a track is a time-sequence of states with the same label [37]. The distinct label property ensures that at any time no two points can share the same label, and hence no two trajectories can share any common point in the extended space $\mathbb{X}\times\mathbb{L}$. Hereinafter, symbols for labeled states and their distributions are bolded to distinguish them from unlabeled ones, e.g. $\mathbf{x}$, $\mathbf{X}$, $\boldsymbol{\pi}$.

#### II-E1 Generalized Labeled Multi-Bernoulli (GLMB)

A GLMB [37] is a labeled RFS with state space $\mathbb{X}$ and (discrete) label space $\mathbb{L}$ distributed according to

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{c\in\mathbb{C}} w^{(c)}(\mathcal{L}(\mathbf{X}))\, \left[p^{(c)}\right]^{\mathbf{X}}, \tag{16}$$

where $\mathbb{C}$ is a given discrete index set, each $p^{(c)}(\cdot,\ell)$ is a PDF on $\mathbb{X}$, and each $w^{(c)}$ is non-negative with

$$\sum_{c\in\mathbb{C}}\ \sum_{L\subseteq\mathbb{L}} w^{(c)}(L) = 1. \tag{17}$$

Each term in the mixture (16) consists of: a weight $w^{(c)}(\mathcal{L}(\mathbf{X}))$ that depends only on the labels of the multi-object state $\mathbf{X}$, and a multi-object exponential $\left[p^{(c)}\right]^{\mathbf{X}}$ that depends on the entire multi-object state.

The cardinality distribution and intensity function (which is also the first moment) of a GLMB are respectively given by

$$\rho(n) = \sum_{c\in\mathbb{C}}\ \sum_{L\in\mathcal{F}_n(\mathbb{L})} w^{(c)}(L), \tag{18}$$

$$v(x,\ell) = \sum_{c\in\mathbb{C}} p^{(c)}(x,\ell) \sum_{L\subseteq\mathbb{L}} 1_L(\ell)\, w^{(c)}(L), \tag{19}$$

where $\mathcal{F}_n(\mathbb{L})$ denotes the class of finite subsets of $\mathbb{L}$ with exactly $n$ elements.

The GLMB is often written in the so-called δ-GLMB form, obtained by setting $\mathbb{C} = \mathcal{F}(\mathbb{L})\times\Xi$, with $\Xi$ a given discrete space, $w^{(c)}(L) = w^{(I,\xi)}\,\delta_I(L)$ and $p^{(c)} = p^{(I,\xi)} = p^{(\xi)}$, i.e.

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{(I,\xi)\in\mathcal{F}(\mathbb{L})\times\Xi} w^{(I,\xi)}\, \delta_I(\mathcal{L}(\mathbf{X}))\, \left[p^{(\xi)}\right]^{\mathbf{X}}. \tag{20}$$

For the standard multi-object system model that accounts for thinning, Markov shifts and superposition, the GLMB family is a conjugate prior, and is also closed under the Chapman-Kolmogorov equation [37]. Moreover, the GLMB posterior can be tractably computed to any desired accuracy in the sense that, given any $\epsilon > 0$, an approximate GLMB within $\epsilon$ from the actual GLMB in $L_1$ distance can be computed (in polynomial time) [38].

#### II-E2 Marginalized δ-GLMB (Mδ-GLMB)

An Mδ-GLMB [39] is a special case of a GLMB with $\mathbb{C} = \mathcal{F}(\mathbb{L})$ and density:

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I\in\mathcal{F}(\mathbb{L})} \delta_I(\mathcal{L}(\mathbf{X}))\, w^{(I)} \left[p^{(I)}\right]^{\mathbf{X}}, \tag{21}$$

$$\sum_{I\in\mathcal{F}(\mathbb{L})} w^{(I)} = 1. \tag{22}$$

An Mδ-GLMB is completely characterized by the parameter set $\left\{\left(w^{(I)}, p^{(I)}\right)\right\}_{I\in\mathcal{F}(\mathbb{L})}$, and for compactness we use the abbreviation $\boldsymbol{\pi} = \left\{\left(w^{(I)}, p^{(I)}\right)\right\}_{I\in\mathcal{F}(\mathbb{L})}$ for its density.

In [39], an Mδ-GLMB of the form (21) was proposed to approximate a δ-GLMB of the form (20), by marginalizing (summing) over the discrete space $\Xi$, i.e. setting

$$w^{(I)} = \sum_{\xi\in\Xi} w^{(I,\xi)}, \tag{23}$$

$$p^{(I)}(x,\ell) = \frac{1}{w^{(I)}} \sum_{\xi\in\Xi} w^{(I,\xi)}\, p^{(\xi)}(x,\ell). \tag{24}$$

Moreover, using a general result from [43], it was shown that this Mδ-GLMB approximation minimizes the KLD from the δ-GLMB while preserving the first moment and cardinality distribution [39]. The Mδ-GLMB approximation was used to develop a multi-sensor MOT filter that is scalable with the number of sensors [39].

#### II-E3 Labeled Multi-Bernoulli (LMB)

An LMB [37] is another special case of a GLMB, with density

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X})\, w(\mathcal{L}(\mathbf{X}))\, p^{\mathbf{X}}, \qquad w(L) = \prod_{\ell'\in\mathbb{L}} \left(1 - r^{(\ell')}\right) \prod_{\ell\in L} \frac{1_{\mathbb{L}}(\ell)\, r^{(\ell)}}{1 - r^{(\ell)}}. \tag{25}$$

An LMB is completely characterized by the (finite) parameter set $\left\{\left(r^{(\ell)}, p^{(\ell)}\right)\right\}_{\ell\in\mathbb{L}}$, where $r^{(\ell)}\in[0,1]$ is the existence probability of the object with label $\ell$, and $p^{(\ell)} = p(\cdot,\ell)$ is the PDF (on $\mathbb{X}$) of the object’s state. For convenience we use the abbreviation $\boldsymbol{\pi} = \left\{\left(r^{(\ell)}, p^{(\ell)}\right)\right\}_{\ell\in\mathbb{L}}$ for the density of an LMB. In [40], an approximation of a δ-GLMB by an LMB with matching unlabeled first moment was proposed, together with an efficient MOT filter known as the LMB filter.
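Since the LMB cardinality is a sum of independent Bernoulli trials (one per label), its distribution is a Poisson-binomial and can be computed by successive convolutions; a short sketch (existence probabilities are illustrative):

```python
import numpy as np

# Cardinality distribution of an LMB with existence probabilities r^(l):
# the number of objects is a sum of independent Bernoulli trials, so the
# distribution is obtained by convolving one [1-r, r] factor per label.
def lmb_cardinality(r):
    rho = np.array([1.0])                 # cardinality distribution of the empty LMB
    for r_l in r:
        rho = np.convolve(rho, [1.0 - r_l, r_l])
    return rho

r = [0.9, 0.5, 0.1]                       # hypothetical existence probabilities
rho = lmb_cardinality(r)                  # rho[n] = probability of n objects
```

The mean of this distribution equals the sum of the existence probabilities, a useful consistency check in implementations.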

## III Information Fusion with Labeled RFSs

In this section, it is shown that the Mδ-GLMB and LMB densities are algebraically closed under KL averaging, i.e. the KLAs of Mδ-GLMBs and LMBs are respectively Mδ-GLMB and LMB. In particular, we derive closed-form expressions for the KLAs of Mδ-GLMBs and LMBs, which are then used to develop consensus fusion of Mδ-GLMB and LMB posterior densities.

### III-A Multi-Object KLA

The concept of probability density for the multi-object state allows direct extension of the KLA notion to multi-object systems [27].

Given the labeled multi-object densities $\boldsymbol{\pi}^{(i)}$ on $\mathcal{F}(\mathbb{X}\times\mathbb{L})$, $i\in\mathcal{N}$, and the normalized non-negative weights $\omega^{(i)}$, $i\in\mathcal{N}$ (i.e. non-negative weights that sum up to $1$):

#### III-A1 The Weighted KLA

is defined by

$$\bar{\boldsymbol{\pi}} \triangleq \arg\min_{\boldsymbol{\pi}} \sum_{i\in\mathcal{N}} \omega^{(i)}\, D_{KL}\!\left(\boldsymbol{\pi}\,\|\,\boldsymbol{\pi}^{(i)}\right), \tag{26}$$

where

$$D_{KL}(\boldsymbol{\pi}\,\|\,\boldsymbol{\sigma}) \triangleq \int \boldsymbol{\pi}(\mathbf{X})\, \log\frac{\boldsymbol{\pi}(\mathbf{X})}{\boldsymbol{\sigma}(\mathbf{X})}\, \delta\mathbf{X} \tag{27}$$

is the KLD between multi-object densities [35, 7], and the integral is the FISST set integral defined for any function $\mathbf{f}$ on $\mathcal{F}(\mathbb{X}\times\mathbb{L})$ by

$$\int \mathbf{f}(\mathbf{X})\, \delta\mathbf{X} = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{(\ell_1,\dots,\ell_n)\in\mathbb{L}^n} \int_{\mathbb{X}^n} \mathbf{f}\!\left(\{(x_1,\ell_1),\dots,(x_n,\ell_n)\}\right) d(x_1,\dots,x_n). \tag{28}$$

Note that the integrand $\mathbf{f}(\mathbf{X})$ has unit of $K^{-|\mathbf{X}|}$, where $K$ is the unit of hyper-volume on $\mathbb{X}$. For compactness, the inner product notation $\langle \mathbf{f}, \mathbf{g}\rangle$ will be used also for the set integral $\int \mathbf{f}(\mathbf{X})\,\mathbf{g}(\mathbf{X})\, \delta\mathbf{X}$, when the product $\mathbf{f}\mathbf{g}$ has unit independent of $\mathbf{X}$.
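As a sanity check, for a labeled Bernoulli density with a single label, the set integral (28) truncates to the n = 0 and n = 1 terms and must equal 1; a numeric sketch on a 1-D grid:

```python
import numpy as np

# Set integral of a labeled Bernoulli density with one label:
# integral = f(emptyset) + sum over labels of \int f({(x, l)}) dx = 1.
r = 0.7                                          # existence probability
xs = np.linspace(-10.0, 10.0, 2001)
dx = xs[1] - xs[0]
p = np.exp(-0.5 * xs**2) / np.sqrt(2 * np.pi)    # location PDF on the grid

f_empty = 1.0 - r                                # n = 0 term: f(emptyset)
f_singletons = r * np.sum(p) * dx                # n = 1 term, single label
set_integral = f_empty + f_singletons
```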

#### III-A2 The Normalized Weighted Geometric Mean

is defined by

$$\bar{\boldsymbol{\pi}}(\mathbf{X}) \triangleq \frac{\prod_{i\in\mathcal{N}} \left[\boldsymbol{\pi}^{(i)}(\mathbf{X})\right]^{\omega^{(i)}}}{\int \prod_{i\in\mathcal{N}} \left[\boldsymbol{\pi}^{(i)}(\mathbf{Y})\right]^{\omega^{(i)}}\, \delta\mathbf{Y}}. \tag{29}$$

Note that since the exponents $\omega^{(i)}$, $i\in\mathcal{N}$, sum up to unity, the product in the numerator of (29) has unit of $K^{-|\mathbf{X}|}$, and the set integral in the denominator of (29) is well-defined and unitless. Hence, the normalized weighted geometric mean (29), originally proposed by Mahler in [23] as the multi-object Chernoff fusion rule, is well-defined.

Similar to the single object case, the weighted KLA is given by the normalized weighted geometric mean.

###### Theorem 1.

[27] Given multi-object densities $\boldsymbol{\pi}^{(i)}$, $i\in\mathcal{N}$, and normalized non-negative weights $\omega^{(i)}$, $i\in\mathcal{N}$, the weighted KLA (26) is given by the normalized weighted geometric mean, i.e.

$$\bar{\boldsymbol{\pi}}(\mathbf{X}) = \frac{\prod_{i\in\mathcal{N}} \left[\boldsymbol{\pi}^{(i)}(\mathbf{X})\right]^{\omega^{(i)}}}{\int \prod_{i\in\mathcal{N}} \left[\boldsymbol{\pi}^{(i)}(\mathbf{Y})\right]^{\omega^{(i)}}\, \delta\mathbf{Y}}. \tag{30}$$

Note that the label space $\mathbb{L}$ has to be the same for all the densities $\boldsymbol{\pi}^{(i)}$ for the KLA to be well-defined. In [28, Theorem 5.1], it has been mathematically proven that, due to the weight normalization $\sum_{i\in\mathcal{N}} \omega^{(i)} = 1$, the weighted geometric mean (30) ensures immunity to the double counting of information, irrespective of the unknown common information in the densities $\boldsymbol{\pi}^{(i)}$.

In [27], it was shown that Poisson and independent and identically distributed (IID) cluster RFSs are algebraically closed under KL averaging. While the GLMB family is algebraically closed under the Bayes recursion for the standard multi-object system model and enjoys a number of useful analytical properties, it is not algebraically closed under KL averaging. Nonetheless, there are versatile subfamilies of the GLMB family that are algebraically closed under KL averaging.

### III-B Weighted KLA of Mδ-GLMB Densities

The following result states that the KLA of Mδ-GLMB densities is also an Mδ-GLMB density. The proof is provided in Appendix A.

###### Proposition 1.

Given Mδ-GLMB densities $\boldsymbol{\pi}^{(i)} = \left\{\left(w^{(I,i)}, p^{(I,i)}\right)\right\}_{I\in\mathcal{F}(\mathbb{L})}$, $i\in\mathcal{N}$, and normalized non-negative weights $\omega^{(i)}$, $i\in\mathcal{N}$, the normalized weighted geometric mean (29), and hence the KLA, is an Mδ-GLMB given by:

$$\bar{\boldsymbol{\pi}}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I\in\mathcal{F}(\mathbb{L})} \delta_I(\mathcal{L}(\mathbf{X}))\, \bar{w}^{(I)} \left[\bar{p}^{(I)}\right]^{\mathbf{X}}, \tag{31}$$

where

$$\tilde{w}^{(I)} \triangleq \prod_{i\in\mathcal{N}} \left(w^{(I,i)}\right)^{\omega^{(i)}}, \tag{32}$$

$$\eta^{(I)}(\ell) \triangleq \int \prod_{i\in\mathcal{N}} \left(p^{(I,i)}(x,\ell)\right)^{\omega^{(i)}} dx, \tag{33}$$

$$\bar{w}^{(I)} = \frac{\tilde{w}^{(I)} \prod_{\ell\in I} \eta^{(I)}(\ell)}{\sum_{J\in\mathcal{F}(\mathbb{L})} \tilde{w}^{(J)} \prod_{\ell\in J} \eta^{(J)}(\ell)}, \tag{34}$$

$$\bar{p}^{(I)}(x,\ell) = \frac{1}{\eta^{(I)}(\ell)} \prod_{i\in\mathcal{N}} \left(p^{(I,i)}(x,\ell)\right)^{\omega^{(i)}}. \tag{35}$$

### III-C Weighted KLA of LMB Densities

The following result states that the KLA of LMB densities is also an LMB density. The proof is provided in Appendix B.

###### Proposition 2.

Given LMB densities $\boldsymbol{\pi}^{(i)} = \left\{\left(r^{(\ell,i)}, p^{(\ell,i)}\right)\right\}_{\ell\in\mathbb{L}}$, $i\in\mathcal{N}$, and normalized non-negative weights $\omega^{(i)}$, $i\in\mathcal{N}$, the normalized weighted geometric mean (29), and hence the KLA, is an LMB $\bar{\boldsymbol{\pi}} = \left\{\left(\bar{r}^{(\ell)}, \bar{p}^{(\ell)}\right)\right\}_{\ell\in\mathbb{L}}$ given by:

$$\bar{\boldsymbol{\pi}}(\mathbf{X}) = \Delta(\mathbf{X})\, \bar{w}(\mathcal{L}(\mathbf{X}))\, \bar{p}^{\mathbf{X}}, \tag{36}$$

where

$$\bar{r}^{(\ell)} = \frac{\eta^{(\ell)} \prod_{i\in\mathcal{N}} \left(r^{(\ell,i)}\right)^{\omega^{(i)}}}{\prod_{i\in\mathcal{N}} \left(1 - r^{(\ell,i)}\right)^{\omega^{(i)}} + \eta^{(\ell)} \prod_{i\in\mathcal{N}} \left(r^{(\ell,i)}\right)^{\omega^{(i)}}}, \tag{37}$$

$$\bar{p}^{(\ell)}(x) = \frac{1}{\eta^{(\ell)}} \prod_{i\in\mathcal{N}} \left(p^{(\ell,i)}(x)\right)^{\omega^{(i)}}, \qquad \eta^{(\ell)} \triangleq \int \prod_{i\in\mathcal{N}} \left(p^{(\ell,i)}(x)\right)^{\omega^{(i)}} dx. \tag{38}$$
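A numeric sketch of the fusion of a single Bernoulli (one-label LMB) component from two nodes, with the location PDFs held on a 1-D grid; the formulas follow the normalized weighted geometric mean, and the function and variable names are ours:

```python
import numpy as np

# Fusion of two Bernoulli (single-label LMB) components with weights w, 1-w:
#   p_bar = p1^w * p2^(1-w) / eta,   eta = \int p1^w p2^(1-w) dx
#   r_bar = eta * r1^w * r2^(1-w) / ((1-r1)^w (1-r2)^(1-w) + eta * r1^w r2^(1-w))
xs = np.linspace(-10.0, 10.0, 2001)
dx = xs[1] - xs[0]
gauss = lambda m, s: np.exp(-0.5 * ((xs - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def fuse_bernoulli(r1, p1, r2, p2, w):
    g = p1**w * p2**(1.0 - w)          # unnormalized geometric mean of locations
    eta = np.sum(g) * dx
    num = eta * r1**w * r2**(1.0 - w)
    den = (1.0 - r1)**w * (1.0 - r2)**(1.0 - w) + num
    return num / den, g / eta          # fused existence probability and PDF

# Identical inputs: the fusion must return the original component unchanged.
r_bar, p_bar = fuse_bernoulli(0.9, gauss(0.0, 1.0), 0.9, gauss(0.0, 1.0), 0.5)
```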

### III-D Consensus Fusion for Labeled RFSs

Consider a sensor network with multi-object density $\boldsymbol{\pi}_k^{(i)}$ at each node $i\in\mathcal{N}$, and non-negative consensus weights $\omega^{(i,j)}$ relating node $i$ to nodes $j\in\mathcal{N}^{(i)}$, such that $\sum_{j\in\mathcal{N}^{(i)}} \omega^{(i,j)} = 1$. The global KLA over the entire network can be computed in a distributed and scalable way by using the consensus algorithm [27], [20, Section III.A]. Starting with $\boldsymbol{\pi}_{k,0}^{(i)} = \boldsymbol{\pi}_k^{(i)}$, each node $i$ carries out the consensus iteration

$$\boldsymbol{\pi}_{k,n+1}^{(i)} = \bigoplus_{j\in\mathcal{N}^{(i)}} \left(\omega^{(i,j)} \odot \boldsymbol{\pi}_{k,n}^{(j)}\right). \tag{39}$$

As shown in [27, Section III-B], the consensus iteration (39), which is the multi-object counterpart of (12), enjoys some nice convergence properties. In particular, if the consensus matrix $\Pi$ is primitive and doubly stochastic, the consensus iterate of each node in the network converges to the global unweighted KLA of the multi-object posterior densities as $n$ tends to infinity. Convergence analysis for the multi-object case follows along the same lines as in [16, 20], since the space of multi-object densities is a metric space [7]. In practice, the iteration is stopped at some finite $n$. Further, as pointed out in [28, Remark 1], the consensus iterations (39) always generate multi-object densities that mitigate double counting, irrespective of the number of iterations.

Starting with Mδ-GLMBs, the consensus iteration (39) always returns Mδ-GLMBs; moreover, the fused Mδ-GLMB parameter set can be computed by the Mδ-GLMB fusion rules (34) and (35). Similarly, for LMBs the consensus iteration (39) always returns LMBs, whose parameter set can be computed by the LMB fusion rules (37) and (38). The fusion rules (35) and (38) involve consensus of single-object PDFs.

A typical choice for representing each single-object density is a Gaussian Mixture (GM) [44, 45]. In this case, the fusion rules (35) and (38) involve exponentiation and multiplication of GMs, where the former, in general, does not yield a GM. Hence, in order to preserve the GM form, a suitable approximation of the GM exponentiation has to be devised. The in-depth discussion and efficient implementation proposed in [27, Section III.D] for generic GMs can also be applied to the location PDF fusions (35) and (38). Considering, for the sake of simplicity, the case of two GMs

$$p^{(i)}(x) = \sum_{j=1}^{N^{(i)}} \alpha_j^{(i)}\, \mathcal{N}\!\left(x;\, m_j^{(i)}, P_j^{(i)}\right), \qquad i = 1, 2,$$

to be fused with weights $\omega$ and $1-\omega$, (35) and (38) can be approximated as follows:

$$\bar{p}(x) \cong \frac{\sum_{j=1}^{N^{(1)}} \sum_{l=1}^{N^{(2)}} \alpha_{jl}\, \mathcal{N}\!\left(x;\, m_{jl}, P_{jl}\right)}{\sum_{j=1}^{N^{(1)}} \sum_{l=1}^{N^{(2)}} \alpha_{jl}}, \tag{40}$$

where

$$P_{jl} = \left[\omega \left(P_j^{(1)}\right)^{-1} + (1-\omega)\left(P_l^{(2)}\right)^{-1}\right]^{-1}, \tag{41}$$

$$m_{jl} = P_{jl}\left[\omega \left(P_j^{(1)}\right)^{-1} m_j^{(1)} + (1-\omega)\left(P_l^{(2)}\right)^{-1} m_l^{(2)}\right], \tag{42}$$

$$\alpha_{jl} = \left(\alpha_j^{(1)}\right)^{\omega} \left(\alpha_l^{(2)}\right)^{1-\omega} \beta_{jl}(\omega), \tag{43}$$

$$\beta_{jl}(\omega) = \mathcal{N}\!\left(m_j^{(1)} - m_l^{(2)};\, 0,\, \frac{P_j^{(1)}}{\omega} + \frac{P_l^{(2)}}{1-\omega}\right). \tag{44}$$

The fusion (40) can be extended to $N > 2$ agents by sequentially applying the pairwise fusion rule (40)-(44) $N-1$ times. By the associative and commutative properties of multiplication, the ordering of the pairwise fusions is irrelevant. Notice that (40)-(44) amount to performing a Chernoff fusion on any possible pair formed by a Gaussian component of agent 1 and a Gaussian component of agent 2. Moreover, the coefficient $\alpha_{jl}$ of the resulting (fused) component includes a factor $\beta_{jl}(\omega)$ that measures the separation of the two fusing components $\mathcal{N}(\cdot\,; m_j^{(1)}, P_j^{(1)})$ and $\mathcal{N}(\cdot\,; m_l^{(2)}, P_l^{(2)})$. The approximation (40)-(44) is reasonably accurate for well-separated Gaussian components but might easily deteriorate in the presence of closely located components. In this respect, merging of nearby components before fusion has been exploited in [27] to mitigate the problem. Further, a more accurate, but also more computationally demanding, approximation has been proposed in [46].
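A scalar-state sketch of the pairwise rule (40)-(44) (function names are ours):

```python
import numpy as np

# Pairwise Chernoff fusion of two scalar Gaussian mixtures: each cross pair
# (j, l) is fused by Covariance Intersection, and its weight carries a
# separation factor beta_{jl} (a Gaussian in the mean difference).
def norm_pdf(x, m, var):
    return np.exp(-0.5 * (x - m) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def fuse_gm(alpha1, m1, v1, alpha2, m2, v2, w=0.5):
    out = []
    for a1, mu1, P1 in zip(alpha1, m1, v1):
        for a2, mu2, P2 in zip(alpha2, m2, v2):
            P = 1.0 / (w / P1 + (1.0 - w) / P2)               # (41), scalar CI
            m = P * (w * mu1 / P1 + (1.0 - w) * mu2 / P2)     # (42)
            beta = norm_pdf(mu1 - mu2, 0.0, P1 / w + P2 / (1.0 - w))   # (44)
            out.append((a1**w * a2**(1.0 - w) * beta, m, P))  # (43)
    alpha = np.array([c[0] for c in out])
    alpha /= alpha.sum()                                      # normalization in (40)
    return alpha, np.array([c[1] for c in out]), np.array([c[2] for c in out])

# Single-component mixtures reduce to plain Covariance Intersection.
alpha, m, P = fuse_gm([1.0], [0.0], [1.0], [1.0], [2.0], [1.0], w=0.5)
```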

The other common approach for approximating a single object PDF is via particles, i.e. weighted sums of Dirac delta functions, which can address non-linear, non-Gaussian dynamics and measurements as well as non-uniform field of view. However, computing the KLA requires multiplying together powers of relevant PDFs, which cannot be performed directly on weighted sums of Dirac delta functions. While this problem can be addressed by further approximating the particle PDFs by continuous PDFs (e.g. GMs) using techniques such as kernel density estimation [30], least square estimation [47, 48] or parametric model approaches [49], such approximations increase the in-node computational burden. Moreover, the local filtering steps are also more resource demanding compared to a GM implementation. Hence, at this developmental stage, it is more efficient to work with GM approximations.

## IV Consensus DMOT

In this section, we present two novel fully distributed and scalable multi-object tracking algorithms based on Propositions 1 and 2 along with consensus [11, 12, 16, 20] to propagate information throughout the network.

### IV-A Bayesian Multi-Object Filtering

We begin this section with the Bayes MOT filter that propagates the multi-object posterior/filtering density. In this formulation the multi-object state is modeled as a labeled RFS in which a label is an ordered pair of integers $\ell = (k, i)$, where $k$ is the time of birth and $i$ is a unique index to distinguish objects born at the same time. The label space for objects born at time $k$ is $\mathbb{B}_k \triangleq \{k\}\times\mathbb{N}$. An object born at time $k$ has, therefore, state $\mathbf{x}\in\mathbb{X}\times\mathbb{B}_k$. Hence, the label space for objects at time $k$ (including those born prior to $k$), denoted as $\mathbb{L}_k$, is constructed recursively by $\mathbb{L}_k = \mathbb{L}_{k-1}\cup\mathbb{B}_k$ (note that $\mathbb{L}_{k-1}$ and $\mathbb{B}_k$ are disjoint). A multi-object state $\mathbf{X}$ at time $k$ is a finite subset of $\mathbb{X}\times\mathbb{L}_k$. For convenience, where no ambiguity arises we drop the time subscripts and denote $\mathbb{L} \triangleq \mathbb{L}_{k-1}$, $\mathbb{B} \triangleq \mathbb{B}_k$, and $\mathbb{L}_{+} \triangleq \mathbb{L}\cup\mathbb{B}$.

Let $\boldsymbol{\pi}_k$ denote the multi-object filtering density at time $k$, and $\boldsymbol{\pi}_{k+1|k}$ the multi-object prediction density (for compactness, the dependence on the measurements is omitted). Then, starting from $\boldsymbol{\pi}_0$, the multi-object Bayes recursion propagates $\boldsymbol{\pi}_k$ in time according to the following update and prediction [35, 7]:

$$\boldsymbol{\pi}_k(\mathbf{X}) = \frac{g_k(Z_k|\mathbf{X})\, \boldsymbol{\pi}_{k|k-1}(\mathbf{X})}{\int g_k(Z_k|\mathbf{Y})\, \boldsymbol{\pi}_{k|k-1}(\mathbf{Y})\, \delta\mathbf{Y}}, \tag{45}$$

$$\boldsymbol{\pi}_{k+1|k}(\mathbf{X}) = \int \mathbf{f}_{k+1|k}(\mathbf{X}|\mathbf{Y})\, \boldsymbol{\pi}_k(\mathbf{Y})\, \delta\mathbf{Y}, \tag{46}$$

where $\mathbf{f}_{k+1|k}(\cdot|\cdot)$ is the multi-object transition density from time $k$ to time $k+1$, and $g_k(\cdot|\cdot)$ is the multi-object likelihood function at time $k$. The multi-object likelihood function encapsulates the underlying models for detections and false alarms, while the multi-object transition density encapsulates the underlying models of motion, birth and death. The multi-object filtering (or posterior) density captures all information on the number of objects, and their states [7].

Note that the recursions (45)-(46) are the multi-object counterpart of (6)-(7), and admit a closed-form solution under the standard multi-object system model, known as the GLMB filter [37] (see also [38] for implementation details). However, the GLMB family is not closed under KL averaging. Consequently, we look towards approximations such as the Mδ-GLMB and LMB filters for analytic solutions to DMOT.

### IV-B The Mδ-GLMB Filter

In the following we outline the prediction and update steps for the Mδ-GLMB filter. Additional details can be found in [38].

#### IV-B1 Mδ-GLMB Prediction

Given the previous multi-object state $\mathbf{X}$, each state $(x,\ell)\in\mathbf{X}$ either continues to exist at the next time step with probability $p_S(x,\ell)$ and evolves to a new state $(x_+,\ell)$ with probability density $f(x_+|x,\ell)$, or dies with probability $1 - p_S(x,\ell)$. The set of new objects born at the next time step is distributed according to the LMB

$$\boldsymbol{\pi}_B = \left\{\left(r_B^{(\ell)}, p_B^{(\ell)}\right)\right\}_{\ell\in\mathbb{B}}. \tag{47}$$

It is assumed that $r_B^{(\ell)} = 0$ when $\ell\notin\mathbb{B}$. Note that $\boldsymbol{\pi}_B(\mathbf{X}) = 0$ if $\mathbf{X}$ contains any element $(x,\ell)$ with $\ell\notin\mathbb{B}$. The multi-object state at the next time is the superposition of surviving objects and new born objects, and the multi-object transition density can be found in [37, Subsection IV.D].

###### Remark 4.

The LMB birth model assigns unique labels to objects in the following sense. Consider two objects born at time $k$ with kinematic states $x_1$ and $x_2$. In birth models such as the labeled Poisson [37], $x_1$ could be assigned label $(k,1)$ and $x_2$ label $(k,2)$, i.e. the multi-object state is $\{(x_1,(k,1)), (x_2,(k,2))\}$, or conversely $x_1$ assigned label $(k,2)$ and $x_2$ label $(k,1)$, i.e. the multi-object state is $\{(x_1,(k,2)), (x_2,(k,1))\}$. Such non-uniqueness arises because the kinematic state of an object is generated independently of the label. This does not occur in the LMB model because an object with label $\ell$ has kinematic state generated from $p_B^{(\ell)}$. If kinematic states $x_1$ and $x_2$ are drawn respectively from $p_B^{((k,1))}$ and $p_B^{((k,2))}$, then the labeled multi-object state is uniquely $\{(x_1,(k,1)), (x_2,(k,2))\}$.

Given the Mδ-GLMB multi-object posterior density $\boldsymbol{\pi} = \left\{\left(w^{(I)}, p^{(I)}\right)\right\}_{I\in\mathcal{F}(\mathbb{L})}$, the multi-object prediction density is the Mδ-GLMB $\boldsymbol{\pi}_{+} = \left\{\left(w_{+}^{(I_{+})}, p_{+}^{(I_{+})}\right)\right\}_{I_{+}\in\mathcal{F}(\mathbb{L}_{+})}$, where

$$w_{+}^{(I_{+})} = w_B(I_{+}\cap\mathbb{B})\, w_S(I_{+}\cap\mathbb{L}), \tag{48}$$

$$w_S(L) = \sum_{I\supseteq L} \left[\eta_S^{(I)}\right]^{L} \left[q_S^{(I)}\right]^{I\setminus L} w^{(I)}, \tag{49}$$

$$p_{+}^{(I_{+})}(x,\ell) = 1_{\mathbb{L}}(\ell)\, p_S^{(I_{+}\cap\mathbb{L})}(x,\ell) + 1_{\mathbb{B}}(\ell)\, p_B^{(\ell)}(x), \tag{50}$$

$$p_S^{(I)}(x,\ell) = \frac{\left\langle p_S(\cdot,\ell)\, f(x|\cdot,\ell),\, p^{(I)}(\cdot,\ell)\right\rangle}{\eta_S^{(I)}(\ell)}, \tag{51}$$

$$\eta_S^{(I)}(\ell) = \int \left\langle p_S(\cdot,\ell)\, f(x|\cdot,\ell),\, p^{(I)}(\cdot,\ell)\right\rangle dx, \qquad q_S^{(I)}(\ell) = \left\langle 1 - p_S(\cdot,\ell),\, p^{(I)}(\cdot,\ell)\right\rangle, \tag{52}$$

with $w_B$ the weight of the birth LMB (47), of the form given in (25).

#### IV-B2 Mδ-GLMB Update

Given a multi-object state $\mathbf{X}$, each state $(x,\ell)\in\mathbf{X}$ is either detected with probability $p_D(x,\ell)$ and generates a measurement $z$ with likelihood $g(z|x,\ell)$, or missed with probability $1 - p_D(x,\ell)$. The multi-object observation $Z = \{z_1, \dots, z_{|Z|}\}$ is the superposition of the detected points and Poisson clutter with intensity function $\kappa(\cdot)$. Assuming that, conditional on $\mathbf{X}$, detections are independent, and that clutter is independent of the detections, the multi-object likelihood is given by [37, Subsection IV.D]

$$g(Z|\mathbf{X}) = e^{-\langle\kappa,1\rangle}\, \kappa^{Z} \sum_{\theta\in\Theta(\mathcal{L}(\mathbf{X}))} \left[\psi_Z(\cdot;\theta)\right]^{\mathbf{X}}, \tag{53}$$

where $\Theta(L)$ is the set of mappings $\theta: L \to \{0, 1, \dots, |Z|\}$ such that $\theta(\ell) = \theta(\ell') > 0$ implies $\ell = \ell'$, and

$$\psi_Z(x,\ell;\theta) = \begin{cases} \dfrac{p_D(x,\ell)\, g(z_{\theta(\ell)}|x,\ell)}{\kappa(z_{\theta(\ell)})}, & \text{if } \theta(\ell) > 0, \\[2mm] 1 - p_D(x,\ell), & \text{if } \theta(\ell) = 0. \end{cases}$$

Note that an association map $\theta$ specifies which tracks generated which measurements, i.e. track $\ell$ generates measurement $z_{\theta(\ell)}$, with undetected tracks assigned to $0$. The condition “$\theta(\ell) = \theta(\ell') > 0$ implies $\ell = \ell'$” means that, at any time, a track can generate at most one measurement, and a measurement can be assigned to at most one track.
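The association maps for a small example can be enumerated directly; a sketch (the helper name is ours):

```python
from itertools import combinations, permutations

# Enumerate association maps theta: each track label is assigned either 0
# (misdetection) or a distinct measurement index in {1, ..., num_meas}.
def association_maps(labels, num_meas):
    maps = []
    for k in range(min(len(labels), num_meas) + 1):
        for detected in combinations(labels, k):          # which tracks are detected
            for meas in permutations(range(1, num_meas + 1), k):
                theta = {l: 0 for l in labels}            # default: missed
                theta.update(dict(zip(detected, meas)))   # distinct measurements
                maps.append(theta)
    return maps

maps = association_maps(['t1', 't2'], 2)   # 2 tracks, 2 measurements: 7 maps
```

The count grows combinatorially with the number of tracks and measurements, which is why practical δ-GLMB/Mδ-GLMB implementations truncate to the highest-weighted maps.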

Given the Mδ-GLMB multi-object prediction density $\boldsymbol{\pi}_{+} = \left\{\left(w_{+}^{(I_{+})}, p_{+}^{(I_{+})}\right)\right\}_{I_{+}\in\mathcal{F}(\mathbb{L}_{+})}$, the Mδ-GLMB updated density is given by $\boldsymbol{\pi}(\cdot|Z) = \left\{\left(w^{(I_{+})}(Z), p^{(I_{+})}(\cdot|Z)\right)\right\}_{I_{+}\in\mathcal{F}(\mathbb{L}_{+})}$, where

$$w^{(I_{+})}(Z) \propto w_{+}^{(I_{+})} \sum_{\theta\in\Theta(I_{+})} \left[\eta_Z^{(I_{+},\theta)}\right]^{I_{+}}, \tag{54}$$

$$p^{(I_{+})}(x,\ell|Z) = \frac{\sum_{\theta\in\Theta(I_{+})} \left[\eta_Z^{(I_{+},\theta)}\right]^{I_{+}} p^{(I_{+},\theta)}(x,\ell|Z)}{\sum_{\theta\in\Theta(I_{+})} \left[\eta_Z^{(I_{+},\theta)}\right]^{I_{+}}}, \tag{55}$$

$$p^{(I_{+},\theta)}(x,\ell|Z) = \frac{p_{+}^{(I_{+})}(x,\ell)\, \psi_Z(x,\ell;\theta)}{\eta_Z^{(I_{+},\theta)}(\ell)}, \tag{56}$$

$$\eta_Z^{(I_{+},\theta)}(\ell) = \left\langle p_{+}^{(I_{+})}(\cdot,\ell),\, \psi_Z(\cdot,\ell;\theta)\right\rangle. \tag{57}$$