On posterior contraction of parameters and
interpretability in Bayesian mixture modeling

Aritra Guha  Nhat Ho  XuanLong Nguyen
Department of Statistics, University of Michigan
Department of EECS, University of California, Berkeley

February 10, 2021

Abstract
We study posterior contraction behaviors for parameters of interest in the context of Bayesian mixture modeling, where the number of mixing components is unknown while the model itself may or may not be correctly specified. Two representative types of prior specification are considered: one requires explicitly a prior distribution on the number of mixture components, while the other places a nonparametric prior on the space of mixing distributions. The former is shown to yield an optimal rate of posterior contraction on the model parameters under minimal conditions, while the latter can be utilized to consistently recover the unknown number of mixture components, with the help of a fast probabilistic post-processing procedure. We then turn to the study of these Bayesian procedures in the realistic settings of model misspecification. It will be shown that the modeling choice of kernel density functions plays perhaps the most impactful role in determining the posterior contraction rates in the misspecified situations. Drawing on the concrete posterior contraction rates established in this paper, we wish to highlight some aspects of the interesting tradeoffs between model expressiveness and interpretability that a statistical modeler must negotiate in the rich world of mixture modeling.
1 Introduction
Mixture models are one of the most useful tools in a statistician’s toolbox for analyzing heterogeneous data populations. They can be a powerful black-box modeling device to approximate the most complex forms of density functions. Perhaps more importantly, they help the statistician express the data population’s heterogeneous patterns and interpret them in a useful way [33, 30, 34]. The following are common, generic and meaningful questions a practitioner of mixture modeling may ask: (I) how many mixture components are needed to express the underlying latent subpopulations, (II) how efficiently can one estimate the parameters representing these components and, (III) what happens to a mixture-model-based statistical procedure when the model is actually misspecified?
How to determine the number of mixture components is a question that has long fascinated mixture modelers. Many proposed solutions approach this as a model selection problem. The number of model parameters, hence the number of mixture components, may be selected by optimizing with respect to some regularized loss function; see, e.g., [30, 26, 6] and the references therein. A Bayesian approach to regularization is to place explicitly a prior distribution on the number of mixture components [39, 41, 40, 36]. A convenient aspect of separating out the modeling and inference questions considered in (I) and (II) is that once the number of parameters is determined, the model parameters in question (II) can be estimated and assessed via any standard parametric estimation method.
In a number of modern applications of mixture modeling to heterogeneous data, such as in topic modeling, the number of mixture components (the topics) may be very large and not necessarily a meaningful quantity [3, 49]. In such situations, it may be appealing for the modeler to consider a nonparametric approach, where both (I) and (II) are considered concurrently. The object of inference is now the mixing measure which encapsulates all unknowns about the mixture density function. There were numerous works exemplifying this approach [29, 13, 24]. In particular, the field of Bayesian nonparametrics (BNP) has offered a wealth of prior distributions on the mixing measure based on which one can arrive at the posterior distribution of any quantity of interest related to the mixing measure [21].
A common choice of such priors is the Dirichlet process [12, 2, 45], resulting in the famous Dirichlet process mixture models [1, 31, 9]. The Dirichlet process (DP) and its variants have also been adopted as a building block for more sophisticated hierarchical modeling, thanks to the ease with which computational procedures for posterior inference via Markov chain Monte Carlo can be implemented [50, 42]. Moreover, there is a well-established asymptotic theory on how such Bayesian nonparametric mixture models result in asymptotically optimal estimation procedures for the population density. See, for instance, [15, 18, 46] for theoretical results specifically on DP mixtures, and [16, 47, 52] for general BNP models. The rich development in both algorithms and theory in the past decades has contributed to the widespread adoption of these models in a vast array of application domains.
For quite some time there was a misconception among quite a few practitioners in various application domains, a misconception that may have initially contributed to their enthusiasm for Bayesian nonparametric modeling, that the use of such nonparametric models eliminates altogether the need for determining the number of mixture components, because the learning of such a quantity is “automatic” from the posterior samples of the mixing measure. The implicit presumption here is that a consistent estimate of the mixing measure may be equated with a consistent estimate of the number of mixture components. This is not correct, as has been noted, for instance, by [29] in the context of mixing measure estimation. More recently, [35] explicitly demonstrated that the common practice of drawing inference about the number of mixture components via the DP mixture, specifically by reading off the number of support points in a posterior sample of the mixing measure, leads to an asymptotically inconsistent estimate.
Despite this inconsistency result, it will be shown in this paper that it is still possible to obtain a consistent estimate of the number of mixture components using samples from a Dirichlet process mixture, or any Bayesian nonparametric mixture, by applying a simple and fast post-processing procedure to samples drawn from the posterior. On the other hand, the parametric approach of placing an explicit prior on the number of components yields both a consistent estimate of the number of mixture components and, more notably, an optimal posterior contraction rate for component parameters, under a minimal set of conditions. It is worth emphasizing that all these results are possible only under the assumption that the model is well-specified, i.e., the true but unknown population density lies in the support of the induced prior distribution on the mixture densities.
As George Box has said, “all models are wrong”, but more relevant to us, all mixture models are misspecified in some way. The statistician has a number of modeling decisions to make when it comes to mixture models, including the selection of the class of kernel densities and the support of the space of mixing measures. The significance of question (III) comes to the fore, because if the posterior contraction of model parameters is very slow due to specific modeling choices, one has to be cautious about the interpretability of the parameters of interest. A very slow posterior contraction rate in theory implies that a given data set has relatively little influence on the movement of mass from the prior to the posterior distribution.
In this paper we study Bayesian estimation of model parameters with both well-specified and misspecified mixture models. There are two sets of results. The first set of results resolves several outstanding gaps that remain in the existing theory and current practice of Bayesian parameter estimation, given that the mixture model is well-specified. The second set of results describes posterior contraction properties of such procedures when the mixture model is misspecified. We proceed to describe these results, related works and implications for the mixture modeling practice.
1.1 Well-specified regimes
Consider discrete mixing measures $G = \sum_{i=1}^{k} p_i \delta_{\theta_i}$. Here, $p = (p_1,\ldots,p_k)$ is a vector of mixing weights, while the atoms $\theta_1,\ldots,\theta_k$ are elements in a given compact space $\Theta \subset \mathbb{R}^d$. Mixing measure $G$ is combined with a likelihood function $f(x|\theta)$ with respect to Lebesgue measure to yield a mixture density: $p_G(x) = \int f(x|\theta)\,dG(\theta) = \sum_{i=1}^{k} p_i f(x|\theta_i)$. When $k < \infty$, we call this a finite mixture model with $k$ components. We write $k = \infty$ to denote an infinite mixture model. The atoms $\theta_i$’s are representatives of the underlying subpopulations.
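For concreteness, the following minimal numerical sketch evaluates the mixture density induced by a discrete mixing measure; the univariate Gaussian location kernel and all variable names are illustrative assumptions, not choices made in this paper.

```python
import numpy as np

def gaussian_kernel(x, theta, sigma=1.0):
    # Kernel density f(x | theta): here an N(theta, sigma^2) location family.
    return np.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_density(x, weights, atoms, sigma=1.0):
    # p_G(x) = sum_i p_i * f(x | theta_i) for G = sum_i p_i * delta_{theta_i}.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return sum(p * gaussian_kernel(x, t, sigma) for p, t in zip(weights, atoms))

# A 3-component mixing measure: atoms (subpopulation locations) and weights.
weights = [0.5, 0.3, 0.2]
atoms = [-2.0, 0.0, 3.0]
grid = np.linspace(-10.0, 10.0, 4001)
p_G = mixture_density(grid, weights, atoms)
# Riemann-sum check: the mixture density integrates to (approximately) one.
total_mass = p_G.sum() * (grid[1] - grid[0])
```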
Assume that $X_1,\ldots,X_n$ are i.i.d. samples from a mixture density $p_{G_0}$, where $G_0$ is a discrete mixing measure with an unknown number $k_0$ of support points residing in $\Theta$. In the overfitted setting, i.e., an upper bound $\bar{k} \geq k_0$ is given so that one may work with an overfitted mixture with $\bar{k}$ mixture components, Chen [5] showed that the mixing measure can be estimated at a rate $n^{-1/4}$ under the $W_1$ metric, provided that the kernel $f$ satisfies a second-order identifiability condition – this is a linear independence property on the collection of the kernel function and its first and second order derivatives with respect to $\theta$.
Asymptotic analysis of Bayesian estimation of the mixing measure that arises in both finite and infinite mixtures, where the convergence is assessed under Wasserstein distance metrics, was first investigated by Nguyen [37]. Convergence rates of the mixing measure under a Wasserstein distance can be directly translated to convergence rates of the parameters in the mixture model. Under the same (second-order) identifiability condition, it can be shown that either the maximum likelihood estimation method or a Bayesian method with a non-informative (e.g., uniform) prior yields the $n^{-1/4}$ rate of convergence [22, 37, 24]. Note, however, that $n^{-1/4}$ is not the optimal pointwise rate of convergence. Heinrich and Kahn [20] showed that a distance-based estimation method can achieve the $n^{-1/2}$ rate of convergence under the $W_1$ metric, even though their method may not be easy to implement in practice. [23] described a minimum Hellinger distance estimator that achieves the same optimal rate of parameter estimation.
An important question in the Bayesian analysis is whether there exists a suitable prior specification for mixture models according to which the posterior distribution on the mixing measure can be shown to contract toward the true mixing measure at the same fast rate $n^{-1/2}$. Rousseau and Mengersen [43] provided an interesting result in this regard. It states that for overfitted mixtures with a suitable Dirichlet prior on the mixing weights, assuming that an upper bound $\bar{k}$ on the number of mixture components is given, in addition to a second-order type identifiability condition, posterior contraction to the true mixing measure can be established via the fact that the mixing weights associated with all redundant atoms of the mixing measure vanish at a rate close to the optimal $n^{-1/2}$.
In our first main result given in Theorem 3.1, we show that an alternative and relatively common choice of prior also yields optimal rates of convergence of the mixing measure (up to a logarithmic term), in addition to correctly recovering the number of mixture components, under considerably weaker conditions. In particular, we study the mixture of finite mixtures (MFM) prior, which places an explicit prior distribution on the number of components $K$ and a (conditional) Dirichlet prior on the weights, given each value of $K$. This prior has been investigated by Miller and Harrison [36]. Compared to the method of [43], no upper bound on the true number of mixture components is needed. In addition, only a first-order identifiability condition is required for the kernel density $f$, allowing our results to apply to popular mixture models such as location-scale Gaussian mixtures. We also note that the MFM prior is one instance in a class of modeling proposals, e.g., [39, 41, 40], for which the established convergence behavior continues to hold. In other words, from an asymptotic standpoint, all is good on the parametric Bayesian front.
Our second main result, given in Theorem 3.2, is concerned with a Bayesian nonparametric modeling practice. A Bayesian nonparametric prior on mixing measures places zero mass on measures with finite support, so the BNP model is misspecified with respect to the number of mixture components. Indeed, when $G_0$ has only finite support, the true density $p_{G_0}$ lies on the boundary of the support of the class of densities produced by the BNP prior. Despite the inconsistency results mentioned earlier on the number of mixture components produced by Dirichlet process mixtures, we will show that this situation can be easily corrected by applying a post-processing procedure to the samples generated from the posterior distribution arising from the DP mixture, or any sufficiently well-behaved Bayesian nonparametric mixture model. By “well-behaved” we mean any BNP mixture for which the posterior contraction rate of the mixing measure can be guaranteed by an upper bound in a Wasserstein metric [37].
Our post-processing procedure is simple, and motivated by the observation that a posterior sample of the mixing measure tends to produce a large number of atoms with very small and vanishing weights [19, 35]. Such atoms can be ignored by a suitable truncation procedure. In addition, similar atoms in the metric space can also be merged in a systematic and probabilistic way. Our procedure, named the Merge-Truncate-Merge algorithm, is guaranteed not only to produce a consistent estimate of the number of mixture components but also to retain the posterior contraction rates of the original posterior samples for the mixing measure. Theorem 3.2 provides a theoretical basis for the heuristics employed in practice in dealing with mixtures with unknown numbers of components [19, 40].
1.2 Misspecified regimes
There are several ways a mixture model can be misspecified: either in the kernel density function, or the mixing measure, or both. Thus, in the misspecified setting, we assume that the data $X_1,\ldots,X_n$ are i.i.d. samples from a mixture density $p_{G_0, f_0}$, namely, $p_{G_0, f_0}(x) = \int f_0(x|\theta)\,dG_0(\theta)$, where both the kernel $f_0$ and the mixing measure $G_0$ are unknown. The statistician draws inference from a mixture model $p_{G, f}$, still denoted by $p_G$ for short, where $G$ is a mixing measure with support on a compact set $\Theta$, and $f$ is a chosen kernel density function. In particular, a Bayesian procedure proceeds by placing a prior on the mixing measure $G$ and obtaining the posterior distribution on $G$ given the data sample. In general, the true data-generating density $p_{G_0, f_0}$ lies outside the support of the induced prior on $p_G$. We study the posterior behavior of $G$ as the sample size $n$ tends to infinity.
The behavior of Bayesian procedures under model misspecification has been investigated in the foundational work of [27, 28]. This body of work focuses primarily on density estimation. In particular, assuming that the true data generating distribution’s density lies outside the support of a Bayesian prior, the posterior distribution on the model density can be shown to contract to an element of the prior’s support, which is obtained by a Kullback-Leibler (KL) projection of the true density onto the prior’s support [27].
It can be established that the posterior of $p_G$ contracts to a density $p_{G^*}$, where $G^*$ is a probability measure on $\Theta$ such that $p_{G^*}$ is the (unique) minimizer of the Kullback-Leibler divergence $K(p_{G_0, f_0}, p_G)$ among all probability measures $G$ on $\Theta$. This fact is readily deduced from the theory of [27], but the outstanding and relevant issue is whether the posterior contraction behavior carries over to that of $G$, and if so, at what rate. In general, $G^*$ may not be unique, so posterior contraction of $G$ cannot be established. Under identifiability, $G^*$ is unique, but still $G^* \neq G_0$ in general.
This leads to the question of interpretability when the model is misspecified. Specifically, when $G^* \neq G_0$, it may be unclear how one can interpret the parameters that represent the mixing measure $G^*$, unless $G^*$ can be assumed to be a reasonable approximation of $G_0$. The mixing measure, too, may be misspecified, since the true support of $G_0$ may not lie entirely in $\Theta$. In practice, it is a perennial challenge to explicate the relationship between $G^*$ and the unknown $G_0$. In theory, it is mathematically an interesting question to characterize this relationship, if some assumptions can be made on the true $f_0$ and $G_0$, but this is beyond the scope of this paper. Regardless of the truth about this relationship, it is important for the statistician to know how impactfully a particular modeling choice of $f$ and $\Theta$ can affect the posterior contraction rates of the parameters of interest.
The main results that we shall present in Theorem 4.1 and Theorem 4.2 are on the posterior contraction rates of the mixing measure $G$ toward the limit point $G^*$, under very mild conditions on the misspecification of $f$. In particular, we shall require that the tail behavior of the true kernel $f_0$ is not much heavier than that of the chosen kernel $f$ (cf. condition (P.5) in Section 4). Specific posterior contraction rates for $G$ are derived when $f$ is either the Gaussian or the Laplace density kernel, two representatives of the supersmooth and ordinary smooth classes of kernel densities [10]. A key step in our proofs lies in several inequalities which provide upper bounds of Wasserstein distances on mixing measures in terms of weighted Hellinger distances, a quantity that plays a fundamental role in the asymptotic characterization of misspecified Bayesian models [27].
It is interesting to highlight that the posterior contraction rate for the misspecified Gaussian location mixture is the same as that of the well-specified setting, which is nonetheless extremely slow: only a power of $1/\log n$. On the other hand, using a misspecified Laplace location mixture results in a polynomial rate, with some loss in the exponent relative to the well-specified case. Although the contrast in contraction rates for the two families of kernels is quite similar to what is obtained for well-specified deconvolution problems for both frequentist methods [10, 55] and Bayesian methods [37, 14], our results are given for misspecified models, and so can be seen in a new light: since the model is misspecified anyway, the statistician should be “free” to choose the kernel that yields the most favorable posterior contraction for the parameters of his or her model. In that regard, Laplace location mixtures should always be preferred to Gaussian location mixtures. Of course, it is not always advisable to use a heavy-tailed kernel density function, as dictated by condition (P.5).
The relatively slow posterior contraction rate for $G$ is due to the fact that the limiting measure $G^*$ in general may have infinite support, regardless of whether the true $G_0$ has finite support or not. From a practical standpoint, it is difficult to interpret the estimate of $G^*$ if $G^*$ has infinite support. However, if $G^*$ happens to have a finite number of support points, bounded by a known constant, say $\bar{k}$, then by placing a suitable prior on the space of mixing measures with at most $\bar{k}$ support points to reflect this knowledge, we show that the posterior of $G$ contracts to $G^*$ at the relatively fast rate $(\log n / n)^{1/4}$. This is the same rate obtained in the well-specified setting for overfitted mixtures.
1.3 Further remarks
The posterior contraction theorems in this paper provide an opportunity to reexamine several aspects of the fascinating picture about the tension between a model’s expressiveness and its interpretability. They remind us once again about the tradeoffs a modeler must negotiate for a given inferential goal and the information available at hand. We enumerate a few such insights:

“One size does not fit all”: Even though the family of mixture models as a whole can be excellent at inferring about population heterogeneity and at density estimation as a black-box device, a specific mixture model specification cannot do a good job at both. For instance, a Dirichlet process mixture of Gaussian kernels may yield an asymptotically optimal density estimation machine, but it performs poorly when it comes to the learning of parameters.

“Finite versus infinite”: If the number of mixture components is known to be small and an object of interest, then employing an explicit prior on this quantity results in the optimal posterior contraction rate for the model parameters and thus is a preferred method. When this quantity is known to be high or not a meaningful object of inference, Bayesian nonparametric mixtures provide a more attractive alternative, as they can flexibly adapt to complex forms of densities. Regardless, one can still consistently recover the true number of mixture components using a nonparametric approach.

“Some forms of misspecification are more useful than others”: When the mixture model is misspecified, careful design choices regarding the (misspecified) kernel density and the support of the mixing measure can significantly speed up the posterior contraction behavior of model parameters. For instance, a heavy-tailed and ordinary smooth kernel such as the Laplace, instead of the Gaussian kernel, is shown to be especially amenable to efficient parameter estimation.
The remainder of the paper is organized as follows. Section 2 provides necessary background about mixture models, Wasserstein distances and several key notions of strong identifiability. Section 3 presents posterior contraction theorems for well-specified mixture models for both parametric and nonparametric Bayesian models. Section 4 presents posterior contraction theorems when the mixture model is misspecified. In Section 5, we provide illustrations of the Merge-Truncate-Merge algorithm via a simulation study. Proofs of key results are provided in Section 6 while the remaining proofs are deferred to the Appendices.
Notation
Given two densities $p, q$ (with respect to the Lebesgue measure $\mu$), the total variation distance is given by $V(p, q) = \frac{1}{2} \int |p(x) - q(x)|\,d\mu(x)$. Additionally, the squared Hellinger distance is given by $h^2(p, q) = \frac{1}{2} \int \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2\,d\mu(x)$. Furthermore, the Kullback-Leibler (KL) divergence is given by $K(p, q) = \int p(x) \log \big(p(x)/q(x)\big)\,d\mu(x)$ and the squared KL divergence is given by $K_2(p, q) = \int p(x) \log^2 \big(p(x)/q(x)\big)\,d\mu(x)$. For a measurable function $f$ and a probability measure $P$, let $P f$ denote the integral $\int f\,dP$. For any $x \in \mathbb{R}^d$, we denote by $\|x\|$ its $\ell_2$ norm. For any metric $d$ on $\Theta$, we define the open ball of radius $\epsilon$ around $\theta \in \Theta$ as $B_d(\theta, \epsilon)$. We use $D(\epsilon, \Omega, d)$ to denote the maximal $\epsilon$-packing number for a general set $\Omega$ under a general metric $d$ on $\Omega$. Additionally, the expression $a_n \lesssim b_n$ will be used to denote the inequality up to a constant multiple where the value of the constant is independent of $n$. We also denote $a_n \asymp b_n$ if both $a_n \lesssim b_n$ and $b_n \lesssim a_n$ hold. Furthermore, we denote $A^c$ as the complement of set $A$ for any set $A$, while $B(x, r)$ denotes the ball, with respect to the $\ell_2$ norm, of radius $r > 0$ centered at $x \in \mathbb{R}^d$. Finally, we use $\mathrm{diam}(\Theta)$ to denote the diameter of a given parameter space $\Theta$ relative to the $\ell_2$ norm, i.e., $\mathrm{diam}(\Theta) = \sup \{\|\theta_1 - \theta_2\| : \theta_1, \theta_2 \in \Theta\}$.
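The grid-based sketch below approximates the distances just defined and checks them against closed forms for two normal densities; the grid and the Gaussian example are illustrative assumptions.

```python
import numpy as np

# Grid-based approximations of the distances between two densities p and q,
# evaluated on a common grid with spacing dx.
def tv(p, q, dx):
    # Total variation: (1/2) * integral of |p - q|.
    return 0.5 * np.sum(np.abs(p - q)) * dx

def hellinger2(p, q, dx):
    # Squared Hellinger: (1/2) * integral of (sqrt(p) - sqrt(q))^2.
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx

def kl(p, q, dx):
    # KL divergence: integral of p * log(p / q), restricted to where p > 0.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]
def normal_pdf(m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p, q = normal_pdf(0.0, 1.0), normal_pdf(1.0, 1.0)
# Closed forms for N(0,1) vs N(1,1): KL = 1/2, h^2 = 1 - exp(-1/8),
# TV = 2 * Phi(1/2) - 1 (approximately 0.3829).
```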
2 Preliminaries
We recall the notion of Wasserstein distance for mixing measures, along with the notions of strong identifiability and uniform Lipschitz continuity conditions that prove useful in Section 3.
Mixture model
Throughout the paper, we assume that $X_1,\ldots,X_n$ are i.i.d. samples from a true but unknown distribution $P_{G_0}$ with density function
$$p_{G_0}(x) = \int f(x|\theta)\,dG_0(\theta),$$
where $G_0$ is a true but unknown mixing distribution with exactly $k_0$ support points, for some unknown $k_0 \in \mathbb{N}$. Also, $\{f(x|\theta),\ \theta \in \Theta\}$ is a given family of probability densities (or equivalently, kernels) with respect to a sigma-finite measure $\mu$ on $\mathcal{X}$, where $\Theta \subset \mathbb{R}^d$. Furthermore, $\Theta$ is a chosen parameter space, in which we believe the true parameters lie. In a well-specified setting, all support points of $G_0$ reside in $\Theta$, but this may not be the case in a misspecified setting.
Regarding the space of mixing measures, let $\mathcal{E}_k(\Theta)$ and $\mathcal{O}_k(\Theta)$ respectively denote the space of all mixing measures with exactly and at most $k$ support points, all in $\Theta$. Additionally, denote by $\mathcal{G}(\Theta) = \bigcup_{k \in \mathbb{N}_+} \mathcal{E}_k(\Theta)$ the set of all discrete measures with finite supports on $\Theta$. Moreover, $\bar{\mathcal{G}}(\Theta)$ denotes the space of all discrete measures (including those with countably infinite supports) on $\Theta$. Finally, $\mathcal{P}(\Theta)$ stands for the space of all probability measures on $\Theta$.
Wasserstein distance
As in [37, 22], it is useful to analyze the identifiability and convergence of parameter estimation in mixture models using the notion of Wasserstein distance, which can be defined as the optimal cost of moving mass when transforming one probability measure into another [51]. Given two discrete measures $G = \sum_{i=1}^{k} p_i \delta_{\theta_i}$ and $G' = \sum_{j=1}^{k'} p'_j \delta_{\theta'_j}$, a coupling between $p = (p_1,\ldots,p_k)$ and $p' = (p'_1,\ldots,p'_{k'})$ is a joint distribution $q$ on $[k] \times [k']$, which is expressed as a matrix $q = (q_{ij})_{ij} \in [0,1]^{k \times k'}$ with marginal probabilities $\sum_{i=1}^{k} q_{ij} = p'_j$ and $\sum_{j=1}^{k'} q_{ij} = p_i$ for any $i \in [k]$ and $j \in [k']$. We use $\mathcal{Q}(p, p')$ to denote the space of all such couplings. For any $r \geq 1$, the $r$-th order Wasserstein distance between $G$ and $G'$ is given by
$$W_r(G, G') = \Big( \inf_{q \in \mathcal{Q}(p, p')} \sum_{i, j} q_{ij} \|\theta_i - \theta'_j\|^r \Big)^{1/r},$$
where $\|\cdot\|$ denotes the $\ell_2$ norm for elements in $\mathbb{R}^d$. It is simple to see that if a sequence of probability measures $G_n$ converges to $G_0 \in \mathcal{E}_{k_0}(\Theta)$ under the $W_r$ metric at a rate $\omega_n = o(1)$, then there exists a subsequence of $G_n$ such that the set of atoms of $G_n$ converges to the set of atoms of $G_0$, up to a permutation of the atoms, at the same rate $\omega_n$.
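In one dimension the coupling formulation above admits a closed-form solution through quantile functions: $W_r(G, G')^r = \int_0^1 |F^{-1}(u) - (F')^{-1}(u)|^r\,du$, where $F^{-1}$ is the quantile function of the mixing measure. The following self-contained sketch (restricted to $\Theta \subset \mathbb{R}$; names illustrative) computes $W_r$ between two discrete mixing measures this way.

```python
import numpy as np

def wasserstein_discrete_1d(atoms1, w1, atoms2, w2, r=1.0):
    # W_r(G, G')^r = integral over u in (0, 1] of |F^{-1}(u) - F'^{-1}(u)|^r,
    # computed exactly by splitting (0, 1] at the jumps of either quantile function.
    o1, o2 = np.argsort(atoms1), np.argsort(atoms2)
    a1, p1 = np.asarray(atoms1, float)[o1], np.asarray(w1, float)[o1]
    a2, p2 = np.asarray(atoms2, float)[o2], np.asarray(w2, float)[o2]
    c1, c2 = np.cumsum(p1), np.cumsum(p2)
    breaks = np.union1d(c1, c2)  # points where either quantile function jumps
    cost, prev = 0.0, 0.0
    for u in breaks:
        # On the interval (prev, u], both quantile functions are constant.
        j1 = min(np.searchsorted(c1, prev, side="right"), len(a1) - 1)
        j2 = min(np.searchsorted(c2, prev, side="right"), len(a2) - 1)
        cost += (u - prev) * abs(a1[j1] - a2[j2]) ** r
        prev = u
    return cost ** (1.0 / r)
```

For example, moving half the mass of $\delta_0$ to the atom at 2 costs $0.5 \cdot 2 = 1$ under $W_1$ and $(0.5 \cdot 2^2)^{1/2} = \sqrt{2}$ under $W_2$.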
Strong identifiability and uniform Lipschitz continuity
The key assumptions that will be used to analyze the posterior contraction of mixing measures include the uniform Lipschitz condition and the strong identifiability condition. The uniform Lipschitz condition can be formulated as follows [22].
Definition 2.1.
We say the family of densities $\{f(x|\theta),\ \theta \in \Theta\}$ is uniformly Lipschitz up to the order $r$, for some $r \geq 1$, if $f$ as a function of $\theta$ is differentiable up to the order $r$ and its partial derivatives with respect to $\theta$ satisfy the following inequality
$$\sum_{|\kappa| = r} \Big| \Big( \frac{\partial^{|\kappa|} f}{\partial \theta^{\kappa}}(x|\theta_1) - \frac{\partial^{|\kappa|} f}{\partial \theta^{\kappa}}(x|\theta_2) \Big) \gamma^{\kappa} \Big| \leq C \|\theta_1 - \theta_2\|^{\delta} \|\gamma\|^{r}$$
for any $x$ and any $\gamma \in \mathbb{R}^d$, and for some positive constants $\delta$ and $C$ independent of $x$ and $\theta_1, \theta_2 \in \Theta$. Here, $\gamma^{\kappa} = \prod_{i=1}^{d} \gamma_i^{\kappa_i}$, where $\kappa = (\kappa_1, \ldots, \kappa_d)$.
The first-order uniform Lipschitz condition is satisfied by many popular classes of density functions, including the Gaussian, Student’s t, and skew-normal families. Now, the strong identifiability condition of order $r$ is formulated as follows.
Definition 2.2.
For any $r \geq 1$, we say that the family $\{f(x|\theta),\ \theta \in \Theta\}$ (or in short, $f$) is identifiable in the order $r$ if $f(x|\theta)$ is differentiable up to the order $r$ in $\theta$ and the following holds:

For any $k \geq 1$, given $k$ distinct elements $\theta_1, \ldots, \theta_k \in \Theta$, if we have coefficients $\alpha_{\eta}^{(i)} \in \mathbb{R}$, for $|\eta| \leq r$ and $i = 1, \ldots, k$, such that for almost all $x$
$$\sum_{\ell = 0}^{r} \sum_{|\eta| = \ell} \sum_{i = 1}^{k} \alpha_{\eta}^{(i)} \frac{\partial^{|\eta|} f}{\partial \theta^{\eta}}(x|\theta_i) = 0,$$
then $\alpha_{\eta}^{(i)} = 0$ for all $1 \leq i \leq k$ and $|\eta| \leq r$.
Many commonly used families of density functions satisfy the first-order identifiability condition, including location-scale Gaussian distributions and location-scale Student’s t-distributions. Technically speaking, strong identifiability conditions are useful in guaranteeing lower bounds on the Hellinger distance between mixture densities in terms of the Wasserstein metric between mixing measures. For example, if $f$ is identifiable in the first order, we have the following inequality [22]:
$$h(p_G, p_{G_0}) \gtrsim W_1(G, G_0) \qquad (1)$$
for any $G \in \mathcal{E}_{k_0}(\Theta)$ sufficiently close to $G_0$ in $W_1$. It implies that for any estimation method that yields a convergence rate $\epsilon_n$ for the density $p_{G_0}$ under the Hellinger distance, the induced rate of convergence for the mixing measure $G_0$ is $\epsilon_n$ under the $W_1$ distance.
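A quick numerical illustration of this kind of bound, using Gaussian location mixtures and grid integration (all constants illustrative): perturbing one atom of the mixing measure shrinks the Hellinger distance between the mixture densities at the same linear order as the $W_1$ distance between the mixing measures, so their ratio stays bounded away from zero.

```python
import numpy as np

x = np.linspace(-15.0, 15.0, 30001)
dx = x[1] - x[0]

def mix_density(weights, atoms):
    # p_G(x) = sum_i p_i * N(x | theta_i, 1), a Gaussian location mixture.
    return sum(p * np.exp(-0.5 * (x - t) ** 2) / np.sqrt(2 * np.pi)
               for p, t in zip(weights, atoms))

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)

p0 = mix_density([0.5, 0.5], [-1.0, 1.0])  # density of the "true" G_0
ratios = []
for eps in [0.2, 0.1, 0.05]:
    p_eps = mix_density([0.5, 0.5], [-1.0, 1.0 + eps])  # perturbed G
    w1 = 0.5 * eps  # W_1(G, G_0): mass 0.5 moved a distance eps
    ratios.append(hellinger(p0, p_eps) / w1)
# ratios stays roughly constant and bounded away from zero as eps -> 0,
# consistent with a first-order identifiability bound h >= c * W_1.
```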
3 Posterior contraction under well-specified regimes
In this section, we assume that the mixture model is well-specified, i.e., the data $X_1,\ldots,X_n$ are i.i.d. samples from the mixture density $p_{G_0}$, where the mixing measure $G_0$ has $k_0$ support points in the compact parameter space $\Theta$. Within this section, we assume further that the true but unknown number of components $k_0$ is finite. A Bayesian modeler places a prior distribution $\Pi$ on a suitable subspace of $\mathcal{G}(\Theta)$. Then, the posterior distribution over $G$ is given by:
$$\Pi(G \in B \mid X_1, \ldots, X_n) = \frac{\int_B \prod_{i=1}^{n} p_G(X_i)\,\Pi(dG)}{\int \prod_{i=1}^{n} p_G(X_i)\,\Pi(dG)}. \qquad (2)$$
We are interested in the posterior contraction behavior of $G$ toward $G_0$, in addition to recovering the true number of mixture components $k_0$.
3.1 Prior results
The customary prior specification for a finite mixture is to use a Dirichlet distribution on the mixing weights and another standard prior distribution on the atoms of the mixing measure. Let $H$ be a distribution with full support on $\Theta$. Thus, for a mixture of $k$ components, the full Bayesian mixture model specification takes the form:
$$\theta_1, \ldots, \theta_k \overset{\text{i.i.d.}}{\sim} H, \quad (p_1, \ldots, p_k) \sim \mathrm{Dirichlet}(\gamma_1, \ldots, \gamma_k), \quad X_1, \ldots, X_n \mid (p, \theta) \overset{\text{i.i.d.}}{\sim} p_G, \ \text{where } G = \sum_{i=1}^{k} p_i \delta_{\theta_i}. \qquad (3)$$
Suppose for a moment that $k_0$ is known; then we can set $k = k_0$ in the above model specification, and we would be in an exact-fitted setting. Provided that $f$ satisfies both the first-order identifiability condition and the uniform Lipschitz continuity condition, and the density of $H$ is approximately uniform on $\Theta$, then according to [37, 22] it can be established that, as $n$ tends to infinity,
$$\Pi\big( G \in \mathcal{E}_{k_0}(\Theta) : W_1(G, G_0) \gtrsim (\log n / n)^{1/2} \,\big|\, X_1, \ldots, X_n \big) \longrightarrow 0 \ \text{in probability}. \qquad (4)$$
The rate of posterior contraction $(\log n / n)^{1/2}$ is optimal up to a logarithmic term.
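A generative sketch of specification (3); the uniform base measure $H$ on a compact interval, the unit-variance Gaussian kernel, and the symmetric Dirichlet hyperparameters are illustrative assumptions, not prescriptions of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_finite_mixture_model(k, n, gamma=1.0, theta_lo=-5.0, theta_hi=5.0):
    # One draw from the hierarchical specification:
    #   theta_1..theta_k ~ H           (here: uniform on a compact interval),
    #   (p_1..p_k)       ~ Dirichlet(gamma, ..., gamma),
    #   X_1..X_n | p, theta ~ p_G      (here: Gaussian kernel N(theta_i, 1)).
    atoms = rng.uniform(theta_lo, theta_hi, size=k)
    weights = rng.dirichlet(np.full(k, gamma))
    labels = rng.choice(k, size=n, p=weights)   # latent component assignments
    data = rng.normal(loc=atoms[labels], scale=1.0)
    return atoms, weights, data

atoms, weights, data = sample_finite_mixture_model(k=3, n=500)
```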
When $k_0$ is unknown, there may be a number of ways for the modeler to proceed. Suppose that an upper bound of $k_0$ is given, say $\bar{k} \geq k_0$. Then by setting $k = \bar{k}$ in the above model specification, we have a Bayesian overfitted mixture model. Provided that $f$ satisfies the second-order identifiability condition and the uniform Lipschitz continuity condition, and the density of $H$ is again approximately uniform on $\Theta$, then it can be established that [37, 22]:
$$\Pi\big( G \in \mathcal{O}_{\bar{k}}(\Theta) : W_2(G, G_0) \gtrsim (\log n / n)^{1/4} \,\big|\, X_1, \ldots, X_n \big) \longrightarrow 0 \ \text{in probability}. \qquad (5)$$
This result does not provide any guarantee about whether the true number of mixture components can be recovered. The rate (upper bound) $(\log n / n)^{1/4}$ under the $W_2$ metric implies that, under the posterior distribution, the redundant mixing weights of $G$ contract toward zero at the rate $(\log n / n)^{1/2}$, but the posterior contraction to each of the atoms of $G_0$ occurs at the rate $(\log n / n)^{1/4}$ only.
Interestingly, Rousseau and Mengersen [43] showed that with a more judicious choice of prior distribution on the mixing weights, one can achieve a near-optimal posterior contraction behavior. Specifically, they continued to employ the Dirichlet prior in (3), but required the Dirichlet hyperparameters to be set sufficiently small: $\max_{1 \leq i \leq \bar{k}} \gamma_i < d/2$, where $d$ is the dimension of the parameter space $\Theta$. Then, under some conditions on the kernel $f$ approximately comparable to the second-order identifiability and the uniform Lipschitz continuity conditions defined in the previous section, they showed that for any $\epsilon > 0$, as $n$ tends to infinity,
$$\Pi\big( \text{the total mass of the } \bar{k} - k_0 \text{ smallest mixing weights of } G \text{ exceeds } n^{-1/2 + \epsilon} \,\big|\, X_1, \ldots, X_n \big) \longrightarrow 0. \qquad (6)$$
For a more precise statement along with the complete list of sufficient conditions leading to claim (6), we refer the reader to the original theorem of [43]. Although their theorem is concerned with only the behavior of the redundant mixing weights, which vanish at a rate close to the optimal $n^{-1/2}$, it can be deduced from their proof that the posterior contraction for the true atoms of $G_0$ occurs at this near-optimal rate as well. [43] also showed that this performance may not hold if the Dirichlet hyperparameters are set to be sufficiently large. Along this line, concerning the recovery of the number of mixture components, [4] demonstrated the convergence of the posterior mode of the number of components to the true number of components, at a rate depending on $\bar{k} - k_0$, the number of redundant components forced upon by the model specification.
3.2 Optimal posterior contraction via a parametric Bayesian mixture
We will show that optimal posterior contraction rates for mixture model parameters can be achieved by a natural Bayesian extension of the prior specification, even when an upper bound on the number of mixture components is unknown. The modeling idea is simple and truly Bayesian in spirit: since $k_0$ is unknown, let $K$ be a natural-number-valued random variable representing the number of mixture components. We endow $K$ with a suitable prior distribution $q_K$ on the positive integers. Conditioning on $K = k$, for each $k$, the model is specified as before:
$$K \sim q_K, \qquad (p_1, \ldots, p_k) \mid K = k \ \sim\ \mathrm{Dirichlet}(\gamma_1, \ldots, \gamma_k), \qquad (7)$$
$$\theta_1, \ldots, \theta_k \mid K = k \ \overset{\text{i.i.d.}}{\sim}\ H, \qquad X_1, \ldots, X_n \mid (K, p, \theta) \ \overset{\text{i.i.d.}}{\sim}\ p_G, \quad G = \sum_{i=1}^{k} p_i \delta_{\theta_i}. \qquad (8)$$
This prior specification is called the mixture of finite mixtures (MFM) model [41, 48, 36]. In the sequel we show that the application of the MFM prior leads to optimal posterior contraction rates for the model parameters. Interestingly, such guarantees can be established under very mild conditions on the kernel density $f$: only the uniform Lipschitz continuity and the first-order identifiability conditions will be required. The first-order identifiability condition is the minimal condition under which the optimal posterior contraction rate can be established, since this condition is also necessary for exact-fitted mixture models to attain the $n^{-1/2}$ posterior contraction rate. We proceed to state such conditions.

(P.1) The parameter space $\Theta$ is compact, while the kernel density $f$ is first-order identifiable and admits the uniform Lipschitz property up to the first order.

(P.2) The base distribution $H$ is absolutely continuous with respect to the Lebesgue measure on $\Theta$ and admits a density function $h$. Additionally, $h$ is approximately uniform, i.e., $0 < \inf_{\theta \in \Theta} h(\theta) \leq \sup_{\theta \in \Theta} h(\theta) < \infty$.

(P.3) There exists $\epsilon_0 > 0$ such that $K\big(f(\cdot|\theta_1), f(\cdot|\theta_2)\big) \lesssim \|\theta_1 - \theta_2\|^2$ as long as $\|\theta_1 - \theta_2\| \leq \epsilon_0$, for any $\theta_1, \theta_2 \in \Theta$, where the constant in $\lesssim$ depends only on $\epsilon_0$, $\Theta$, and $f$.

(P.4) The prior $q_K$ places positive mass on the set of natural numbers, i.e., $q_K(k) > 0$ for all $k \in \mathbb{N}_+$.
Theorem 3.1.
Under assumptions (P.1), (P.2), (P.3), and (P.4) on the MFM, we have that

(a) $\Pi(K = k_0 \mid X_1, \ldots, X_n) \to 1$ a.s. under $P_{G_0}$.

(b) Moreover, $\Pi\big( G : W_1(G, G_0) \gtrsim (\log n / n)^{1/2} \,\big|\, X_1, \ldots, X_n \big) \to 0$ in probability.
The proof of Theorem 3.1 is deferred to Section 6.1. We make several remarks regarding the conditions required in the theorem. It is worth stating up front that these conditions are almost minimal in order for the optimal posterior contraction to be guaranteed, and are substantially weaker than those in previous works (as discussed above). Assumption (P.1) is crucial in establishing the Hellinger lower bound $h(p_G, p_{G_0}) \geq c\, W_1(G, G_0)$, where $c$ is some positive constant depending only on $G_0$ and $\Theta$. Assumptions (P.2) and (P.4) are standard conditions on the support of the prior so that posterior consistency can be guaranteed for any unknown $G_0$ with an unknown number of support atoms residing in $\Theta$. Finally, the role of (P.3) is to help control the growth rate of KL neighborhoods, which is central in the analysis of the posterior convergence rate of mixing measures. This assumption holds for various choices of kernel $f$, including location families and location-scale families. Therefore, assumptions (P.1), (P.2), (P.3) and (P.4) are fairly general and satisfied by most common choices of kernel densities.
Further remarks
Theorem 3.1 provides a positive endorsement for employing the MFM prior when the number of mixture components is unknown, but is otherwise believed to be finite and an important quantity of inferential interest. The papers of [41, 36] discuss additional favorable properties of this class of models. However, when the true number of mixture components is large, posterior inference with the MFM may still be inefficient in practice. This is because much of the computational effort needs to be expended for the model selection phase, so that the number of mixture components can be reliably ascertained. Only then does the fast asymptotic rate of parameter estimation come meaningfully into effect.
3.3 A posteriori processing for BNP mixtures
Instead of placing a prior distribution explicitly on the number of mixture components when this quantity is unknown, another predominant approach is to place a Bayesian nonparametric prior on the mixing measure, resulting in infinite mixture models. Bayesian nonparametric models such as Dirichlet process mixtures and their variants have remarkably extended the reach of mixture modeling into a vast array of applications, especially those areas where the number of mixture components in the modeling is very large and difficult to fathom, or where it is a quantity of only tangential interest. For instance, in topic modeling applications to web-based text corpora, one may be interested in the most "popular" topics, while the number of topics itself is totally meaningless [3, 50, 38, 54]. DP mixtures and variants can also serve as an asymptotically optimal device for estimating the population density, under standard conditions on the true density's smoothness; see, e.g., [17, 18, 46, 44].
Since a nonparametric Bayesian prior such as the Dirichlet process places zero probability on mixing measures with a finite number of supporting atoms, the Dirichlet process mixture's posterior is inconsistent for the number of mixture components whenever the true number of mixture components is finite [35]. It is well known in practice that Dirichlet process mixtures tend to produce many small extraneous components around the "true" clusters, making it challenging to draw conclusions about the true number of mixture components when this becomes a quantity of interest [32, 19]. In this section we describe a simple a posteriori processing algorithm that consistently estimates the number of components under a general Bayesian prior, even without exact knowledge of the prior's structure, as long as the posterior under that prior contracts to the truth at some known rate.
Our starting point is the availability of a mixing measure drawn from the posterior distribution, given i.i.d. samples from the true mixture density. Under certain conditions on the kernel density, it can be established that, for some Wasserstein metric, as
(9) 
for all constants, where the rate vanishes as the sample size grows. Thus, the rate can be taken to be (slightly) slower than the actual rate of posterior contraction of the mixing measure. Concrete examples of the posterior contraction rates in infinite and (overfitted) finite mixtures are given in [37, 14, 22].
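For intuition about the bound in (9), the Wasserstein distance between a posterior draw and the truth is easy to compute directly when the atoms are one-dimensional. The sketch below (atom locations and weights are illustrative assumptions) uses SciPy's weighted first-order Wasserstein implementation; the posterior draw carries small spurious atoms near the true clusters, yet remains close to the truth in W1.

```python
# W1 distance between two discrete mixing measures on the real line.
from scipy.stats import wasserstein_distance

# "true" mixing measure G0 with 3 atoms (illustrative)
atoms_G0 = [-2.0, 0.0, 2.0]
weights_G0 = [0.3, 0.4, 0.3]

# a posterior draw: slightly shifted atoms plus a small extraneous atom
atoms_G = [-2.05, -0.1, 0.05, 2.1]
weights_G = [0.29, 0.05, 0.36, 0.30]

w1 = wasserstein_distance(atoms_G0, atoms_G,
                          u_weights=weights_G0, v_weights=weights_G)
print(round(w1, 4))
```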
The posterior processing algorithm operates on an instance of the mixing measure by suitably merging and truncating the atoms that provide its support. The only inputs to the algorithm, which we call the Merge-Truncate-Merge (MTM) algorithm, are the mixing measure itself, the upper bound of the posterior contraction rate, and a tuning parameter. The tuning parameter is useful in practice, as we shall explain, but in theory the algorithm "works" for any constant. Thus, the method is almost "automatic", as it does not require any additional knowledge about the kernel density or the space of support for the atoms. It is also simple and fast. We shall show that the outcome of the algorithm is a consistent estimate of both the number of mixing components and the mixing measure. The latter admits an upper bound on its posterior contraction rate as well.
The detailed pseudocode of the MTM algorithm is summarized in Algorithm 1. At a high level, it consists of two main stages. The first stage involves a probabilistic procedure for merging atoms that may be clustered near one another. The second stage involves a deterministic procedure for truncating extraneous atoms and merging them suitably with the remaining ones in a systematic way. The driving force of the algorithm lies in the asymptotic bound on the Wasserstein distance, which holds with high probability. When the rate is sufficiently small, there may be many atoms concentrating around each of the supporting atoms of the true mixing measure. Although the truth is not known, such clustering atoms may be merged into one by the first stage's probabilistic merging scheme. The second stage (truncate-merge) is also necessary in order to obtain a consistent estimate of the number of components, because there remain distant atoms that carry a relatively small amount of mass. They need to be suitably truncated and merged with the other, more heavily supported atoms. In other words, our method can be viewed as a formalization of practices commonly employed by numerous practitioners.
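The two stages above can be sketched as follows. This is a simplified, deterministic caricature of Algorithm 1: the actual MTM merge is probabilistic, and the truncation rule here (dropping atoms with mass below a constant times the rate, then reassigning their mass to the nearest surviving atom) is an assumption for illustration only.

```python
import numpy as np

def mtm_sketch(atoms, weights, omega, c=1.0):
    """Simplified merge-truncate-merge on a 1D discrete mixing measure.

    omega: the (known) posterior contraction rate bound; c: tuning constant.
    Assumes at least one atom has mass above c * omega.
    """
    merged_atoms, merged_weights = [], []
    # Stage 1: merge atoms closer than omega, processing heaviest first.
    for i in np.argsort(weights)[::-1]:
        for j, a in enumerate(merged_atoms):
            if abs(atoms[i] - a) <= omega:
                # absorb atom i into atom j (mass-weighted average location)
                w = merged_weights[j] + weights[i]
                merged_atoms[j] = (merged_weights[j] * a
                                   + weights[i] * atoms[i]) / w
                merged_weights[j] = w
                break
        else:
            merged_atoms.append(atoms[i])
            merged_weights.append(weights[i])
    # Stage 2: truncate light atoms and merge their mass into the
    # nearest surviving heavy atom.
    heavy = [k for k, w in enumerate(merged_weights) if w > c * omega]
    for k in range(len(merged_weights)):
        if k not in heavy:
            j = min(heavy, key=lambda h: abs(merged_atoms[h] - merged_atoms[k]))
            merged_weights[j] += merged_weights[k]
    return ([merged_atoms[k] for k in heavy],
            [merged_weights[k] for k in heavy])

# posterior draw with spurious light atoms near three "true" clusters
atoms = [-2.0, -1.95, 0.0, 0.04, 1.0, 2.0]
wts   = [0.28, 0.02, 0.35, 0.03, 0.02, 0.30]
a, w = mtm_sketch(atoms, wts, omega=0.1, c=0.5)
print(len(a))  # estimated number of components
```

On this toy input the extraneous atoms are absorbed and three components survive, matching the intended truth.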
We proceed to present the theoretical guarantee for the outcome of Algorithm 1.
Theorem 3.2.
We add several comments concerning this theorem.

The proof of this theorem is deferred to Section 6.2, where we clarify carefully the roles played by each step of the MTM algorithm.

Although it is beyond the scope of this paper to study the practical viability of the MTM algorithm, for interested readers we present a brief illustration of the algorithm via simulations in Section 5.

In practice, one may not have a mixing measure sampled from the posterior, but rather a point estimate of it. Then one can apply the MTM algorithm to that estimate instead. Assuming that the estimate is sufficiently close to the posterior draw in a suitable sense, it is straightforward to extend the above theorem to cover this scenario.
Further remarks
At this point, one may look forward to some guidance regarding the modeling choice of parametrics versus nonparametrics. Even in the tight arena of Bayesian mixture modeling, the jury may still be out. The results in this section seem to provide stronger theoretical support for the former, when it comes to the efficiency of parameter estimation and the corresponding model interpretation.
However, as we will see in the next section, when the mixture model is misspecified, the fast posterior contraction rate offered by the use of the MFM prior is no longer valid. On the other hand, Bayesian nonparametric models are more versatile in adapting to complex forms of population densities. In many modern applications it is not meaningful to estimate the number of mixing components, only the most "significant" ones in a sense suitably defined. Perhaps a more meaningful question concerning a Bayesian nonparametric mixture model is whether it is capable of learning selected mixture components in an efficient way.
4 Posterior contraction under model misspecification
In this section, we study the posterior contraction behavior of the mixing measure under the realistic scenarios of model misspecification. There are several ways a mixture model can be misspecified, due to the misspecification of the kernel density function , or the support of the mixing measure , or both. From here on, we shall assume that the data population follows a mixture distribution composed of unknown kernel density and unknown mixing measure — thus, in this section the true density shall be denoted by to highlight the possibility of misspecification.
To avoid heavy subscripting, we continue to use the lighter notation for the density function of the mixture model that we operate on. The kernel density is selected by the modeler. Additionally, the mixing measure is endowed with a suitable prior on the space of mixing measures with support belonging to a compact parameter space. By Bayes' rule (Eq. (2)) one obtains the posterior distribution, where the i.i.d. samples are generated by the true density. It is possible that the chosen kernel differs from the true one. It is also possible that the support of the true mixing measure does not reside within the parameter space. In practice, the statistical modeler would hope that the chosen kernel is not too different from the true but unknown one; otherwise, it would be unclear how one could interpret the parameters that represent the mixing measure. Our goal is to investigate the posterior contraction of the mixing measure in such situations, as the sample size tends to infinity. The theory is applicable to a broad class of prior specifications on the mixing measures, including the MFM prior and a nonparametric Bayesian prior such as the Dirichlet process.
A fundamental quantity that arises in the theory of Bayesian misspecification for density estimation is the minimizer of the Kullback-Leibler (KL) divergence from the true population density to the density functions residing in the support of the induced prior on the space of densities, which we shall assume to exist (cf. [27]). Moreover, assume that the KL minimizer can be expressed as a mixture density, where the corresponding mixing measure is a probability measure on the parameter space. We may write
(10) 
We will see in the sequel that the existence of the KL-minimizing density entails its uniqueness. In general, however, the corresponding minimizing mixing measure may be non-unique. Thus, define
It is challenging to characterize the set of KL-minimizing mixing measures in general. However, a very useful technical property can be shown as follows:
Lemma 4.1.
For any and , it holds that .
By exploiting the fact that the class of mixture densities is a convex set, the proof of this lemma is similar to that of Lemma 2.3 of [27], so it is omitted. This leads quickly to the following fact.
Lemma 4.2.
For any two elements , for almost all .
In other words, the KL-minimizing mixture density is uniquely identifiable. Under a standard identifiability condition on the kernel, which is satisfied by the examples considered in this section, it follows that the minimizing mixing measure is unique. Due to the model misspecification, this limit point in general differs from the true mixing measure. The best we can hope for is that the posterior distribution of the mixing measure contracts toward it as the sample size tends to infinity. The goal of the remainder of this section is to study the posterior contraction behavior of the (misspecified) mixing measure toward this unique limit point.
Following the theoretical framework of [27] and [37], the posterior contraction behavior of the mixing measure can be obtained by studying the relationship between a weighted version of the Hellinger distance and corresponding Wasserstein distances between the mixing measure and the limiting point. In particular, for a fixed pair of mixture densities, the weighted Hellinger distance between two mixture densities is defined as follows [27].
Definition 4.1.
For ,
It is clear that in the well-specified situation the weighted Hellinger distance reduces to the standard Hellinger distance. In general the two are different due to misspecification. According to Lemma 4.1, the requisite bound holds for all mixing measures in the support of the prior.
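For concreteness, a weighted Hellinger distance of this kind can be written, in the spirit of [27], as follows, with $p_{G_0,f_0}$ the true density and $p_{G_*,f}$ the KL-minimizing density; the exact form of the weight is an assumption of this sketch:

```latex
\overline{h}^{2}\big(p_{G,f},\, p_{G',f}\big)
  \;=\; \frac{1}{2}\int
  \Big(\sqrt{p_{G,f}(x)} - \sqrt{p_{G',f}(x)}\Big)^{2}
  \,\frac{p_{G_0,f_0}(x)}{p_{G_*,f}(x)}\, dx .
```

When the model is well specified, so that $f = f_0$ and $G_* = G_0$, the weight is identically one and the standard Hellinger distance is recovered.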
Choices of prior on mixing measures
As in the previous section, we work with two representative priors on the mixing measure: the MFM prior and the Dirichlet process prior. Both prior choices may contribute to the model misspecification, if the true mixing measure lies outside of the support of the prior distribution.
Recall the MFM prior specification given in Eq. (7). We also need a stronger condition on the prior for the number of components:

The prior distribution on the number of components satisfies for some .
Note that this assumption on the prior for the number of components is mild and is satisfied by many distributions, such as the Poisson distribution. In order to obtain posterior contraction rates, one needs to make sure that the prior places sufficient mass on the (unknown) limiting point of interest. For the MFM prior, such a condition is guaranteed by the following lemma.
Lemma 4.3.
Let denote the prior for generating based on MFM (7), where admits condition (P.2) and admits (P.4’). Fix . Then the following holds, for any
(11) 
for all sufficiently small so that . Here, and stand for the maximal packing number for under norm and the prior weight , respectively.
The proof of Lemma 4.3 is provided in Section 6.3. Alternatively, for a Dirichlet process prior, the mixing measure is distributed a priori according to a Dirichlet measure with a concentration parameter and a base measure satisfying condition (P.2). An analogous concentration bound for such a prior is given in Lemma 5 of [37].
It is somewhat interesting to note that the difference in the choices of prior under misspecification does not affect the posterior contraction bounds that we can establish. In particular, as is evident from its definition, the limiting mixing measure does not depend on a specific choice of prior distribution (only on its support). Due to misspecification, the limiting mixing measure may have infinite support, even if the true mixing measure has a finite number of support points. When the limit has infinite support, the posterior contraction toward it becomes considerably slower compared to the well-specified setting. In addition to the structure of the limit, we will see in the sequel that the modeler's specific choice of kernel density proves to be especially impactful on the rate of posterior contraction.
4.1 Gaussian location mixtures
Consider a class of kernel densities that belong to the supersmooth location family of density functions. A particular example, on which we focus in this section, is the class of Gaussian distributions with a fixed covariance matrix. More precisely, the kernel has the following form:
(12) 
where the notation stands for the matrix determinant. Note that the Gaussian kernel is perhaps the most popular choice in mixture modeling.
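As a concrete illustration of Eq. (12), the sketch below evaluates a two-component Gaussian location mixture with a shared, fixed covariance matrix; the locations, weights, and covariance chosen here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])    # fixed kernel covariance
thetas = np.array([[-2.0, 0.0], [2.0, 1.0]])  # component locations (atoms)
weights = np.array([0.4, 0.6])                # mixing proportions

def mixture_density(x):
    # p_G(x) = sum_i  w_i * N(x | theta_i, Sigma)
    return sum(w * multivariate_normal.pdf(x, mean=t, cov=Sigma)
               for w, t in zip(weights, thetas))

x = np.array([0.0, 0.0])
print(round(float(mixture_density(x)), 6))
```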
With the Gaussian location kernel, it is possible to obtain a lower bound on the Hellinger distance between the mixture densities in terms of the Wasserstein distance between corresponding mixing measures [37]. More useful in the misspecified setting is a key lower bound for the weighted Hellinger distance in terms of the Wasserstein metric, which is given as follows. We shall require a technical condition relating to the true and :

The support of , namely, is a bounded subset of . Moreover, there are some constants such that for any ,
Proposition 4.1.
Let be a Gaussian kernel given by (12), a bounded subset of . Moreover, assume that satisfies condition (P.5) for . Then, there exists depending on and , such that for any , whenever , the following inequality holds
Here, and are respectively the maximum and minimum eigenvalue of . is a constant depending on the parameter space , the dimension , the covariance matrix , and in condition (P.5).
The proof of Proposition 4.1 is provided in Section 6.4. We are now ready to state the first main result of this section.
Theorem 4.1.
4.2 Laplace location mixtures
Next, we consider a class of multivariate Laplace kernels, a representative of the family of ordinary smooth density functions. It was shown by [37] that under a Dirichlet process location mixture with a Laplace kernel, assuming the model is well-specified, the posterior contraction rate of the mixing measures toward the truth is polynomial in the sample size. Under the current misspecification setting, we will be able to derive contraction rates toward the limit point that remain polynomial, with an exponent depending on the truth. The density of the location Laplace distributions is given by:
(13) 
where the two parameters are, respectively, a fixed covariance matrix and a fixed scale parameter. The density involves a Bessel function of the second kind. As discussed in [8], this Bessel function admits a standard asymptotic approximation for large arguments; therefore, there exists a threshold beyond which we have
where we use the shorthand notation . To ease the ensuing presentation, we denote
The following proposition provides a key lower bound of weighted Hellinger distance in terms of the Wasserstein metric.
Proposition 4.2.
Let be a Laplace kernel given by (13) for fixed and such that . Moreover, satisfies condition (P.5) for some . Then, there exists depending on , and , such that for any , whenever , the following inequality holds
for any positive constant . Here, and are respectively the maximum and minimum eigenvalue of . The constant depends on the parameter space , the dimension , the covariance matrix , the scale parameter , and in (P.5).
The proof of Proposition 4.2 is provided in Section 6.5. Given the above result, the posterior contraction rate for mixing measures in the location family of Laplace mixture distributions can be obtained from the following result:
Theorem 4.2.
The proof of Theorem 4.2 is straightforward using Proposition 4.2 and is analogous to the proof argument of Theorem 4.1; it is therefore omitted. Note that, just as in the Gaussian kernel case, a similar contraction behavior also holds for the Laplace kernel under the Dirichlet process prior. The proof is obtained similarly to the MFM case by invoking Lemma 5 of [37] instead of Lemma 4.3.
Remarks
(i) It is worth noting that, compared to the well-specified setting, the posterior contraction upper bound obtained for Gaussian location mixtures remains the same slow logarithmic rate. For Laplace mixtures, when the truth satisfies condition (P.5), the posterior contraction upper bound obtained under misspecification remains a polynomial rate modulo a logarithmic term. Due to misspecification there is a loss of a constant factor in the exponent, which depends on the shape of the kernel density.
(ii) Although Gaussian mixtures have proved to be an asymptotically optimal density estimation device under suitable and mild conditions (cf. [18]), the results obtained in this section suggest that the Gaussian is not a suitable kernel choice for mixture modeling under model misspecification, even if the true mixing measure has a finite number of support points, whenever the primary interest is in the quality of model parameter estimates. Mixtures of heavy-tailed, ordinary smooth kernel densities such as the Laplace prove to be more amenable to efficient parameter estimation. Thus, the modeler is advised to select, say, a Laplace kernel over a supersmooth kernel such as the Gaussian, provided that condition (P.5) is valid.
(iii) It is interesting to consider the scenario where the true kernel happens to be Gaussian: if we use either a well-specified or a misspecified Gaussian kernel to fit the data, the posterior contraction bound is extremely slow, according to Theorem 4.1. This rate may be too slow to be practical. If the statistician is too impatient to get to the truth, because the sample size is not sufficiently large, he may well decide to select a Laplace kernel instead. Despite the intentional misspecification, he might be comforted by the fact that the posterior distribution of the mixing measure then contracts at an exponentially faster rate to the limit point given by Theorem 4.2.
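The supersmooth-versus-ordinary-smooth dichotomy driving remarks (i) and (iii) is a statement about characteristic function tails: the Gaussian characteristic function decays exponentially fast, while the Laplace one decays only polynomially. A one-dimensional sketch with unit scale parameters (a simplification of the multivariate kernels in Eqs. (12) and (13)):

```python
import numpy as np

def cf_gaussian(t, sigma=1.0):
    # |phi(t)| = exp(-sigma^2 t^2 / 2): exponential decay -> "supersmooth"
    return np.exp(-0.5 * (sigma * t) ** 2)

def cf_laplace(t, b=1.0):
    # |phi(t)| = 1 / (1 + b^2 t^2): polynomial decay -> "ordinary smooth"
    return 1.0 / (1.0 + (b * t) ** 2)

for t in (1.0, 5.0, 10.0):
    print(t, cf_gaussian(t), cf_laplace(t))
```

The faster the characteristic function of the kernel vanishes, the harder it is to distinguish nearby mixing measures, which is why the Gaussian yields logarithmic contraction rates while the Laplace yields polynomial ones.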
4.3 When has finite support
The source of the deterioration in the statistical efficiency of parameter estimation under model misspecification is ultimately the increased complexity of the limiting point. Even if the true mixing measure has a finite number of support points, this is not the case for the limiting measure in general. Unfortunately, it is very difficult to gain concrete information about the limiting measure, both in practice and in theory, due to the lack of knowledge about the truth. Suppose, however, that some precious information about the limiting measure is available; specifically, suppose that we happen to know that its number of support points is bounded by a known constant. Then it is possible to devise a new prior specification on the mixing measure so that one gains a considerably improved posterior contraction rate toward the limit. We will show that it is possible to obtain a contraction rate, under the relevant Wasserstein metric, of the same order as one would get with overfitted mixtures in the well-specified regime.
In order to analyze the convergence rate of the mixing measure under this setting, we introduce a relevant notion of integral Lipschitz property, which generalizes the uniform Lipschitz property to the misspecification scenarios.
Definition 4.2.
For any given , we say that the family of densities admits the integral Lipschitz property up to the order with respect to two mixing measures