GDUI: Guided Diffusion Model for Unlabeled Images


Given an unlabeled image dataset $\mathcal{X} = \{x_i\}_{i=1}^{N}$ consisting of N images, where $x_i$ denotes the i-th unlabeled image, our objective is to associate $\mathcal{X}$ with the true labels of K categories. In other words, our goal is to match each image $x_i$ with its corresponding true label $y_i^{g}$, where $y_i^{g}$ denotes the true label of the i-th image. Subsequently, we employ diffusion models and classifiers to process the labeled images and generate high-quality images guided by their respective categories. To accomplish these objectives, we propose GDUI. The overall flow of the proposed GDUI for manipulating unlabeled images is illustrated in Figure 1. First, the input unlabeled images $\mathcal{X} = \{x_i\}_{i=1}^{N}$ are clustered into K classes, and the pseudo-label-matching algorithm transforms the image set with pseudo-labels into a set of images with true labels. Second, we fine-tune the labels of the images using the label-matching refinement algorithm. Third, we optimize guided diffusion using the labeled images matched by the label-matching refinement algorithm. In the following subsections, we first provide a brief background on diffusion models, followed by a detailed description of the individual modules.

3.1. Preliminary

Diffusion probabilistic models [40,41] are a class of latent variable models that involve both a forward diffusion process and a reverse diffusion process. The forward process of the diffusion model is a Markov chain in which the data are gradually corrupted with Gaussian noise according to a variance schedule $\beta_1, \ldots, \beta_T$:

$$q(x_{1:T} \mid x_0) := \prod_{t=1}^{T} q(x_t \mid x_{t-1})$$

$$q(x_t \mid x_{t-1}) := \mathcal{N}\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)$$

where $x_0, \ldots, x_T$ are latent variables of the same dimension, and $x_0$ follows the data distribution $q(x_0)$. The reverse process of the diffusion model, denoted $p_\theta(x_{0:T})$, is defined as a Markov chain with learned Gaussian transitions:

$$p_\theta(x_{0:T}) := p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)$$

$$p_\theta(x_{t-1} \mid x_t) := \mathcal{N}\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\, \mathbf{I}\right)$$

where $\mu_\theta(x_t, t)$ can be expressed as a linear combination of $x_t$ and a noise predictor $\epsilon_\theta(x_t, t)$, and the variance $\Sigma_\theta(x_t, t)$ is typically fixed to a known constant.

Sample quality can be improved by optimizing the following parameterized, simplified objective:

$$L_{simple}(\theta) := \mathbb{E}_{t, x_0, \epsilon}\left[ \left\| \epsilon - \epsilon_\theta\left( \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t \right) \right\|^2 \right]$$

Here, t is uniformly distributed between 1 and T, and we define $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$.
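As an illustration of this objective, the following minimal PyTorch sketch (not part of the original paper) samples a timestep, noises a clean image with the closed-form forward process, and regresses the injected noise; `eps_model` is an assumed noise-prediction network with the interface `eps_model(x_t, t)`.

```python
import torch

def simple_loss(eps_model, x0, alphas_bar):
    """L_simple: predict the noise added at a random timestep t.

    eps_model : callable (x_t, t) -> predicted noise, same shape as x_t (assumed interface)
    x0        : clean images, shape (B, C, H, W)
    alphas_bar: cumulative products of alpha_t = 1 - beta_t, shape (T,)
    """
    B = x0.shape[0]
    T = alphas_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)           # t ~ Uniform{1..T}
    a_bar = alphas_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)                                 # epsilon ~ N(0, I)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps       # forward process q(x_t | x_0)
    return ((eps - eps_model(x_t, t)) ** 2).mean()             # || eps - eps_theta ||^2
```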

Compared to unconditional diffusion models, conditional diffusion models can generate images specified by conditions. The classifier-guided [5,42] sampling method demonstrates that the gradient $\nabla_{x_t} \log p_\phi(y \mid x_t)$ of a classifier can guide conditional diffusion models to generate higher-quality samples of a specified class y.
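For intuition, this guidance gradient can be obtained by differentiating the classifier's log-probability of the target class with respect to the noisy input. The sketch below assumes a hypothetical noise-aware classifier `classifier(x_t, t)` that returns class logits.

```python
import torch
import torch.nn.functional as F

def classifier_gradient(classifier, x_t, t, y):
    """Compute g = grad_{x_t} log p_phi(y | x_t, t) for classifier guidance."""
    x_in = x_t.detach().requires_grad_(True)
    log_probs = F.log_softmax(classifier(x_in, t), dim=-1)     # log p_phi(. | x_t, t)
    selected = log_probs[torch.arange(x_in.shape[0]), y]       # log p_phi(y | x_t, t)
    grad = torch.autograd.grad(selected.sum(), x_in)[0]
    return grad
```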

3.2. Pseudo-Label Generation

To extend the classifier-guidance technique to unlabeled images, we adopt a deep clustering approach for the unsupervised learning of visual features [43] to cluster the samples and generate synthetic labels. We adopt the SPICE [31] framework, which divides network training into three stages. In the first stage, there are two branches that take two different random transformations of the same image as inputs. Each branch includes a feature model and a projection head. Given two transformations $x'$ and $x''$ of an image $x$, the outputs of the two branches are denoted $z^{+}$ and $z$, respectively. The parameters of the feature model F and the projection head P are optimized with the following loss function:

$$\mathcal{L}_{fea} = -\log \frac{\exp\left(z^{\top} z^{+} / \tau\right)}{\sum_{i=1}^{N_q} \exp\left(z^{\top} z_i / \tau\right) + \exp\left(z^{\top} z^{+} / \tau\right)}$$

where $z_i$ is a negative sample and $\tau$ is the temperature parameter. The final optimized feature-model parameters are denoted $\theta_F^{s}$.
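As a rough sketch of this contrastive loss for a single anchor, assuming L2-normalized embeddings and illustrative variable names (not taken from the SPICE implementation):

```python
import torch

def feature_loss(z, z_pos, z_neg, tau=0.1):
    """Contrastive loss L_fea for one anchor embedding.

    z     : (D,)     anchor embedding
    z_pos : (D,)     embedding of the other transformation of the same image
    z_neg : (Nq, D)  negative embeddings
    """
    pos = torch.exp(z @ z_pos / tau)            # exp(z^T z+ / tau)
    neg = torch.exp(z_neg @ z / tau).sum()      # sum_i exp(z^T z_i / tau)
    return -torch.log(pos / (neg + pos))
```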

In the second stage, given the feature-model parameters $\theta_F^{s}$ and the unlabeled images $\mathcal{X}$, the goal is to separately optimize the parameters $\theta_C$ of the clustering head C in order to predict cluster labels $y_i^{s}$. The optimization of $\theta_C$ is performed within an EM framework: in the expectation (E) step, the cluster labels $y_i^{s}$ are obtained given $\theta_C$; in the maximization (M) step, the parameters $\theta_C$ are optimized using the obtained cluster labels $y_i^{s}$.

In the third stage, the feature model F and the clustering head C are jointly optimized. After obtaining the embedding features $\{f_i\}_{i=1}^{N}$ and cluster labels $y_i^{s}$ corresponding to the images $\mathcal{X}$ in the first two stages, a subset of reliable samples $\mathcal{X}_r$ is selected as:

$$\mathcal{X}_r = \left\{ (x_i, y_i^{s}) \mid r_i > \sigma_r,\ i = 1, 2, \ldots, N \right\}$$

where $r_i$ is the semantic consistency ratio of sample $x_i$ and $\sigma_r$ denotes the confidence threshold for $\mathcal{X}_r$. The semantic consistency ratio $r_i$ of sample $x_i$ is defined as:

$$r_i = \frac{1}{N_s} \sum_{y \in \mathcal{L}_i} \mathbb{1}\left( y = y_i^{s} \right)$$

where $N_s$ denotes the number of samples closest to $x_i$ according to the cosine similarity between their embedding features, and $\mathcal{L}_i$ denotes the corresponding labels of these $N_s$ samples. The jointly trained network optimizes the parameters $\theta_F$ and $\theta_C$ using the following loss function:

$$\mathcal{L}_{joint} = \frac{1}{L} \sum_{i=1}^{L} \mathcal{L}_{ce}\left( y_i^{s},\ C\left( F\left( \alpha(x_i); \theta_F \right); \theta_C \right) \right) + \frac{1}{U} \sum_{j=1}^{U} \mathbb{1}\left( y_j^{u} \neq 0 \right) \mathcal{L}_{ce}\left( y_j^{u},\ C\left( F\left( \beta(x_j); \theta_F \right); \theta_C \right) \right)$$

where the first term is computed over the L reliable samples $(x_i, y_i^{s})$ from $\mathcal{X}_r$, and the second term is computed over the U samples $(x_j, y_j^{u})$ with pseudo-labels $y_j^{u}$. These pseudo-labels $y_j^{u}$ are assigned to the classes predicted by the network with the highest probability when that probability exceeds a certain threshold. $\alpha$ and $\beta$ denote the weak and strong transformation operators applied to the input image, respectively, and $\mathcal{L}_{ce}$ is the cross-entropy loss.
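The reliable-sample selection of Equations (7) and (8) can be sketched as follows; the nearest-neighbor computation is our own simplification, with `features` assumed to be L2-normalized embeddings and `n_s`, `sigma_r` corresponding to $N_s$ and $\sigma_r$ above.

```python
import numpy as np

def select_reliable(features, cluster_labels, n_s=10, sigma_r=0.9):
    """Return indices of reliable samples whose neighbors agree on the cluster label.

    features       : (N, D) array of L2-normalized embedding features f_i
    cluster_labels : (N,)   array of cluster labels y_i^s
    """
    sims = features @ features.T                      # cosine similarity (normalized features)
    np.fill_diagonal(sims, -np.inf)                   # exclude the sample itself
    neighbors = np.argsort(-sims, axis=1)[:, :n_s]    # N_s nearest neighbors per sample
    neighbor_labels = cluster_labels[neighbors]       # labels L_i of the neighbors
    r = (neighbor_labels == cluster_labels[:, None]).mean(axis=1)   # consistency ratio r_i
    return np.where(r > sigma_r)[0]                   # X_r: samples with r_i > sigma_r
```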

After the three clustering stages, the input unlabeled images $\mathcal{X} = \{x_i\}_{i=1}^{N}$ are divided into K clusters with cluster labels $y_i^{s}$. A probability matrix $\mathbf{P} = [p_1, p_2, \ldots, p_N]^{\top} \in \mathbb{R}^{N \times K}$ is generated for the image set over the K clusters, where $\mathbf{P}$ represents the probability of each image belonging to each cluster.
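As a sketch, the probability matrix $\mathbf{P}$ can be assembled by running the trained clustering network over the dataset and stacking its softmax outputs; `cluster_model` below is a placeholder for the composed feature model and clustering head, and the loader is assumed to yield plain image batches.

```python
import torch

@torch.no_grad()
def build_probability_matrix(cluster_model, loader, device="cpu"):
    """Stack per-image cluster probabilities into P of shape (N, K)."""
    rows = []
    for images in loader:                                 # loader yields image batches
        logits = cluster_model(images.to(device))         # (B, K) clustering-head logits
        rows.append(torch.softmax(logits, dim=-1).cpu())  # p_i: probabilities over K clusters
    return torch.cat(rows, dim=0)                          # P in R^{N x K}
```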

3.3. Diffusion Pseudo-Label Matching

Based on the cluster labels $y_i^{s}$ obtained for $\mathcal{X}$ over the K clusters, our goal is to match them with the ground-truth labels so that they can guide target generation in the GDUI model. In unsupervised settings, we do not have ground truth to match against. To address the challenge of matching the ground-truth labels with the obtained cluster labels while ensuring a globally attentive alignment, we adopt the principles of the Stable Marriage Algorithm (SMA) [44] for the overall matching strategy, which emphasizes the importance of considering global information in the matching process. Concretely, we propose the pseudo-label-matching algorithm, which leverages the zero-shot capability of CLIP to achieve bilateral matching between the cluster labels and the ground truth. Given the clustering probability matrix $\mathbf{P}$ and the K clusters with cluster labels $y_i^{s}$, the most confident samples are selected as the clustering prototypes for each cluster.
To illustrate the process, we take the m-th cluster as an example and define it as:

$$\mathcal{X}_m = \left\{ (x_i, y_i^{s}) \mid y_i^{s} = m,\ i = 1, 2, \ldots, N \right\}$$

The top samples are selected as:

$$\mathcal{X}_m^{top} = \left\{ (x_i, y_i^{s}) \mid i \in \operatorname{argtopk}\left( \mathbf{P}_{:,m},\ N_{top} \right),\ x_i \in \mathcal{X}_m \right\}$$

where $\operatorname{argtopk}(\mathbf{P}_{:,m}, N_{top})$ returns the indices of the $N_{top}$ highest-scoring samples of the m-th cluster in the m-th column of $\mathbf{P}$.

Using CLIP, zero-shot classification is performed on the samples in $\mathcal{X}_m^{top}$ with respect to the provided ground-truth label set $\mathcal{Y}^{g} = \{y_i^{g}\}_{i=1}^{K}$. The class with the highest classification probability is then selected as the label of each sample. We obtain $p_m^{c}$, the CLIP classification probability vector for the m-th cluster, by computing the proportion of each class among the $N_{top}$ samples. This ensures that the priority of each class in the clustering result is directly proportional to its probability, so that higher probabilities correspond to higher priorities. Then, based on the previously obtained K clusters, the CLIP priority matrix for the K clusters, $\mathbf{P}^{c} = [p_1^{c}, p_2^{c}, \ldots, p_K^{c}]^{\top} \in \mathbb{R}^{K \times K}$, is constructed.
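To illustrate, the priority row $p_m^{c}$ for one cluster can be formed by zero-shot classifying its top samples and taking class proportions. In the sketch below, `clip_zero_shot` is a placeholder for any CLIP-based zero-shot classifier mapping images to indices of the ground-truth label set; it is not an actual CLIP API call.

```python
import numpy as np

def cluster_priority_row(P, cluster_indices, m, n_top, clip_zero_shot, images, K):
    """Build p_m^c: proportion of each ground-truth class among the top samples of cluster m.

    P               : (N, K) clustering probability matrix
    cluster_indices : array of indices of samples assigned to cluster m
    clip_zero_shot  : callable mapping a list of images to predicted class indices in [0, K)
    """
    scores = P[cluster_indices, m]
    top = cluster_indices[np.argsort(-scores)[:n_top]]             # argtopk over column m
    preds = clip_zero_shot([images[i] for i in top])                # zero-shot class per top sample
    return np.bincount(np.asarray(preds), minlength=K) / len(top)   # class proportions
```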

A cluster, for example the m-th cluster $\mathcal{X}_m$, is selected from the set of unmatched clusters $\mathcal{U} = \{\mathcal{X}_i\}_{i=1}^{K}$ that have not yet been matched with the ground-truth label set $\mathcal{Y}^{g}$, and its highest-priority class $y_i^{g}$ is selected as:

$$y_i^{g} = \operatorname*{argmax}_{j \in \mathcal{Y}_m^{u}} \mathbf{P}_{m,j}^{c}$$

where $\mathcal{Y}_m^{u}$ denotes the ground-truth labels that the m-th cluster has not yet requested for matching, and $\mathbf{P}_{m,j}^{c}$ denotes the element in the m-th row and j-th column of the matrix $\mathbf{P}^{c}$, with j being an index into $\mathcal{Y}_m^{u}$. The argmax therefore returns the index of the highest-priority class among all ground-truth labels not yet requested by the m-th cluster. If the class $y_i^{g}$ has not been assigned, it is assigned to the current m-th cluster. Otherwise, its priority is compared with that of the cluster it is already assigned to: the cluster with the lower priority is added back to the unmatched cluster set $\mathcal{U}$, while the one with the higher priority is matched with class $y_i^{g}$. Once all clusters are matched, we obtain each cluster together with its corresponding label, denoted $\mathcal{M} = \{(\mathcal{X}_i, y_i^{g})\}_{i=1}^{K}$, where each label corresponds to a true class.

The above process for diffusion pseudo-label matching is summarized in Algorithm 1.

Algorithm 1 Pseudo-label matching
Input: Unmatched clusters $\mathcal{U} = \{\mathcal{X}_i\}_{i=1}^{K}$
Output: Matched clusters $\mathcal{M} = \{(\mathcal{X}_i, y_i^{g})\}_{i=1}^{K}$
1: for $i = 1, 2, \ldots, K$ do
2:     Select the i-th cluster $\mathcal{X}_i$ from $\mathcal{U}$ with Equation (10);
3:     Select the $N_{top}$ most confident samples $\mathcal{X}_i^{top}$ from $\mathcal{X}_i$ with Equation (11);
4:     Compute the class proportions $p_i^{c}$ from the highest-probability CLIP classification of each $x \in \mathcal{X}_i^{top}$;
5: end for
6: Generate the priority matrix $\mathbf{P}^{c} = [p_1^{c}, p_2^{c}, \ldots, p_K^{c}]^{\top}$;
7: while $\mathcal{U} \neq \emptyset$ do
8:     Pick a cluster $\mathcal{X}_m$ from $\mathcal{U}$;
9:     Select the highest-priority true label $y_i^{g}$ among the unrequested matches with Equation (12);
10:    if $y_i^{g}$ has not been assigned then
11:        Assign $y_i^{g}$ to cluster $\mathcal{X}_m$;
12:    else
13:        Let $\mathcal{X}_k$ be the cluster to which $y_i^{g}$ is currently assigned;
14:        Assign $y_i^{g}$ to whichever of $\mathcal{X}_m$ and $\mathcal{X}_k$ has the higher priority;
15:        Add the lower-priority cluster back to the unmatched clusters $\mathcal{U}$;
16:    end if
17: end while
18: return Matched clusters $\mathcal{M} = \{(\mathcal{X}_i, y_i^{g})\}_{i=1}^{K}$;
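For concreteness, a minimal Python sketch of the matching loop in Algorithm 1 is given below. It operates directly on the priority matrix $\mathbf{P}^{c}$ and follows the stable-marriage-style proposal scheme described above; it is our own rendering, not the authors' code.

```python
import numpy as np

def pseudo_label_matching(P_c):
    """Match each cluster m to a true label using the priority matrix P^c (K x K).

    Returns assignment[m] = index of the true label matched to cluster m.
    """
    K = P_c.shape[0]
    unmatched = list(range(K))              # clusters still to be matched
    requested = [set() for _ in range(K)]   # labels each cluster has already proposed to
    owner = {}                              # label -> cluster currently holding it
    assignment = [-1] * K
    while unmatched:
        m = unmatched.pop()
        # highest-priority label this cluster has not yet requested
        candidates = [j for j in range(K) if j not in requested[m]]
        y = max(candidates, key=lambda j: P_c[m, j])
        requested[m].add(y)
        if y not in owner:                  # label is free: assign it
            owner[y] = m
            assignment[m] = y
        elif P_c[m, y] > P_c[owner[y], y]:  # current cluster has higher priority
            loser = owner[y]
            assignment[loser] = -1
            unmatched.append(loser)         # displaced cluster returns to U
            owner[y] = m
            assignment[m] = y
        else:                               # keep the existing owner, retry this cluster later
            unmatched.append(m)
    return assignment
```

Termination follows the standard stable-marriage argument: each cluster proposes to a given label at most once, and a proposal to a still-unassigned label always succeeds.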

3.4. Diffusion Label Matching Refinement

In the SPICE framework, an imperfect feature model can produce similar features for samples that belong to truly different classes, while an imperfect clustering head can assign the same cluster label to dissimilar samples. These issues may eventually lead to samples from different classes being placed in the same cluster and to mismatches between samples and their true labels. Such errors can also be introduced by the pseudo-label-matching algorithm, which may mismatch cluster labels with the true labels of the clusters they represent. To overcome these issues, we propose a diffusion-model label-matching refinement algorithm to adjust the matching of labels within clusters.

Here we again use the m-th cluster as an example. Analogous to the earlier selection of $\mathcal{X}_m^{top}$, the least confident samples $\mathcal{X}_m^{btm}$ of the m-th cluster are selected as:

$$\mathcal{X}_m^{btm} = \left\{ (x_i, y_i^{s}) \mid i \in \operatorname{arglowk}\left( \mathbf{P}_{:,m},\ N_{btm} \right),\ x_i \in \mathcal{X}_m \right\}$$

where $\operatorname{arglowk}(\mathbf{P}_{:,m}, N_{btm})$ returns the indices of the $N_{btm}$ least confident samples among those belonging to the m-th cluster in the m-th column $\mathbf{P}_{:,m}$ of the matrix $\mathbf{P}$. As with $\mathcal{X}_m^{top}$, zero-shot classification against the true labels is also performed on $\mathcal{X}_m^{btm}$ using CLIP.

Furthermore, the semantic matching ratios $\delta_m^{top}$ and $\delta_m^{btm}$ for the top and bottom samples of the m-th cluster can be written as:

$$\delta_m^{top} = \frac{1}{N_{top}} \sum_{y \in \mathcal{X}_m^{top}} \mathbb{1}\left( y = y_m^{g} \right)$$

$$\delta_m^{btm} = \frac{1}{N_{btm}} \sum_{y \in \mathcal{X}_m^{btm}} \mathbb{1}\left( y = y_m^{g} \right)$$

where $y$ denotes the CLIP-predicted label of a sample, and $\delta_m^{top}$ and $\delta_m^{btm}$ reflect the matching status of the top and bottom samples of the m-th cluster. To comprehensively reflect the matching status of the m-th cluster, the overall semantic matching ratio $\delta_m$ is defined as:

$$\delta_m = \delta_m^{top} \cdot w_{top} + \delta_m^{btm} \cdot w_{btm}$$

where $w_{top}$ and $w_{btm}$ are the weights of $\delta_m^{top}$ and $\delta_m^{btm}$, respectively, in the overall matching ratio $\delta_m$.
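For example, the matching ratios above can be computed directly from the CLIP predictions for the top and bottom samples of a cluster; in this sketch, `w_top` and `w_btm` correspond to $w_{top}$ and $w_{btm}$, and the default values are only placeholders.

```python
import numpy as np

def matching_ratios(top_preds, btm_preds, y_g, w_top=0.5, w_btm=0.5):
    """Compute delta_top, delta_btm, and the overall ratio delta_m for one cluster.

    top_preds, btm_preds : CLIP-predicted class indices for X_m^top and X_m^btm
    y_g                  : the true label currently matched to the cluster
    """
    delta_top = np.mean(np.asarray(top_preds) == y_g)   # fraction of top samples matching y_g
    delta_btm = np.mean(np.asarray(btm_preds) == y_g)   # fraction of bottom samples matching y_g
    delta_m = delta_top * w_top + delta_btm * w_btm     # weighted overall matching ratio
    return delta_top, delta_btm, delta_m
```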

If the overall semantic matching ratio $\delta_m > \lambda_\delta$, where $\lambda_\delta$ is the overall matching threshold, the high matching degree of the m-th cluster $\mathcal{X}_m$ implies that its matched label $y_m^{g}$ is trustworthy. Otherwise, further examination is required to determine the matching status of the top and bottom samples of the m-th cluster. When the top matching ratio $\delta_m^{top}$ of the m-th cluster is greater than the top matching threshold $\lambda_{top}$ but the bottom matching ratio $\delta_m^{btm}$ is less than the bottom matching threshold $\lambda_{btm}$, it is necessary to evaluate the semantic consistency ratio $r_m^{btm}$ of the least confident samples $\mathcal{X}_m^{btm}$, which is defined as:

$$r_m^{btm} = \frac{1}{N_{btm}} \sum_{x_i \in \mathcal{X}_m^{btm}} \mathbf{P}_{i,m}$$

where $\mathbf{P}_{i,m}$ denotes the clustering probability in the i-th row and m-th column of the matrix $\mathbf{P}$, i.e., the probability that $x_i \in \mathcal{X}_m^{btm}$ belongs to the m-th cluster $\mathcal{X}_m$. When $r_m^{btm}$ exceeds the confidence threshold $\sigma_{btm}$, this implies that, even though the matching degree at the bottom is lower than the threshold, the overall consistency of the m-th cluster $\mathcal{X}_m$ is sufficiently reliable, so the cluster label $y_m^{g}$ as a whole can be trusted. Otherwise, the samples $\mathcal{X}_m^{adj}$ in the m-th cluster with low clustering consistency, denoted as:

$$\mathcal{X}_m^{adj} = \left\{ (x_i, y_i^{g}) \mid \mathbf{P}_{i,m} < \sigma_{btm},\ x_i \in \mathcal{X}_m \right\}$$

need to be reassigned to other clusters using CLIP.

If $\delta_m^{top}$ is less than the top matching threshold $\lambda_{top}$, this suggests a mismatch in the overall clustering. If $r_m^{btm}$ is also lower than the confidence threshold $\sigma_{btm}$, we use CLIP to reassign the samples $\mathcal{X}_m^{adj}$ with low clustering consistency in the m-th cluster to other clusters, since their original cluster labels are no longer valid. When $r_m^{btm}$ instead exceeds the confidence threshold $\sigma_{btm}$, indicating overall clustering consistency despite the mismatch, we retain the original matched cluster label $y_m^{g}$. After each cluster has been fine-tuned in this way, we obtain the fine-tuned clusters together with their corresponding labels, $\mathcal{M}^{*} = \{(\mathcal{X}_i^{*}, y_i^{g})\}_{i=1}^{K}$, where $\mathcal{X}^{*}$ denotes a cluster that has undergone fine-tuning.
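The threshold cascade of this subsection can be condensed into the following decision sketch; it is our own summary of the rules above, with all thresholds passed as parameters and the returned strings merely naming the action to take.

```python
def refine_cluster(delta_m, delta_top, delta_btm, r_btm,
                   lambda_delta, lambda_top, lambda_btm, sigma_btm):
    """Decide how to treat one cluster's matched label y_m^g.

    Returns one of:
      "trust"                 - keep the matched label for the whole cluster
      "reassign_low"          - keep the label, but re-label low-consistency samples X_m^adj with CLIP
      "keep_despite_mismatch" - keep the label even though the top samples disagree
    """
    if delta_m > lambda_delta:                       # overall match is high: trust the label
        return "trust"
    if delta_top > lambda_top and delta_btm < lambda_btm:
        # top matches but bottom does not: check clustering consistency of the bottom samples
        return "trust" if r_btm > sigma_btm else "reassign_low"
    if delta_top < lambda_top:                       # mismatch in the overall clustering
        return "keep_despite_mismatch" if r_btm > sigma_btm else "reassign_low"
    return "trust"                                   # fallback for remaining cases (our assumption)
```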

3.5. Synthesis Guided with Matching Labels

For conditional image synthesis, we use a classifier $p_\phi(y \mid x)$ to enhance the generator of the diffusion model based on the clusters $\mathcal{M}^{*} = \{(\mathcal{X}_i^{*}, y_i^{g})\}_{i=1}^{K}$ with their matched real classes, where x is the input image to the classifier and y is the corresponding output label. The classification network is composed of the feature model F and the clustering head C from the clustering stage, together with an additional classification head $C_{clf}$. As demonstrated in previous works [41,42], a pre-trained diffusion model can be conditioned via the gradients of a classifier. The conditioned reverse denoising process, denoted $p_\theta(x_{t-1} \mid x_t)$ in Equation (4), can be expressed as $p_{\theta,\phi}(x_t \mid x_{t+1}, y)$. In [41,42], the following relation has been proven:

$$\log p_{\theta,\phi}(x_t \mid x_{t+1}, y) = \log\left( p_\theta(x_t \mid x_{t+1})\, p_\phi(y \mid x_t) \right) + B_1 \approx \log p(z) + B_2, \quad z \sim \mathcal{N}(\mu + \Sigma g,\ \Sigma)$$

where $g = \nabla_{x_t} \log p_\phi(y \mid x_t)\big|_{x_t = \mu}$ and, for brevity, $\Sigma = \Sigma_\theta(x_t, t)$. $B_1$ and $B_2$ are constants, and $p_\phi(y \mid x_t)$ is shorthand for the classifier $p_\phi(y \mid x_t, t)$ trained on noisy images $x_t$.
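Putting the pieces together, one classifier-guided reverse step shifts the denoising mean by $\Sigma g$, as in the relation above. The sketch below assumes a hypothetical `p_mean_variance(x_t, t)` interface returning the per-step mean and variance, reuses the `classifier_gradient` helper sketched earlier, and adds an optional guidance scale that is not part of the equation.

```python
import torch

@torch.no_grad()
def guided_step(p_mean_variance, classifier, x_t, t, y, guidance_scale=1.0):
    """One reverse step x_t -> x_{t-1}, with the mean shifted by Sigma * g."""
    mean, variance = p_mean_variance(x_t, t)               # mu_theta(x_t, t), Sigma_theta(x_t, t)
    with torch.enable_grad():
        g = classifier_gradient(classifier, x_t, t, y)     # grad_{x_t} log p_phi(y | x_t, t)
    shifted_mean = mean + guidance_scale * variance * g    # z ~ N(mu + Sigma g, Sigma)
    noise = torch.randn_like(x_t)
    return shifted_mean + variance.sqrt() * noise
```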

