[Audio] Hello, my name is Saurabh Kumar Jain. Today, in this presentation, I will present our work titled "Stochastic Binary Network for Universal Domain Adaptation".
[image] Legend: ○ source domain label set; ● target domain label set.
Challenge 1: Overfitting to the source data due to the lack of supervision from the target domain.
Three major components of our framework. Feature extractor (F): maps the input images x to features f, i.e., f = F(x).
[image] (legend: labeled data from the source domain)
SBN consists of |Cs| binary classifiers, one for each source domain class..
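As a minimal sketch of this idea (names and shapes are illustrative, not the paper's implementation), the SBN head can be viewed as |Cs| one-vs-all binary classifiers sharing the extracted features, each producing a positive-class probability for its own source class:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OneVsAllBinaryHead:
    """One binary (positive vs. negative) classifier per source class.

    W has shape (num_classes, feat_dim); each row scores one class
    against all others, so the head outputs |Cs| independent
    probabilities rather than one softmax distribution.
    """
    def __init__(self, feat_dim, num_classes, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.01, size=(num_classes, feat_dim))

    def forward(self, f):
        # f: (batch, feat_dim) features from the extractor F
        # returns: (batch, num_classes) positive-class probabilities
        return sigmoid(f @ self.W.T)

head = OneVsAllBinaryHead(feat_dim=8, num_classes=5)
probs = head.forward(np.zeros((2, 8)))  # zero features -> all 0.5
```

Because each class has its own binary output, a target sample can score low on every classifier at once, which is what allows "unknown" (private-class) detection.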
For the target domain samples, we use open-set entropy minimization..
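A hedged sketch of what open-set entropy minimization computes (a standard binary-entropy objective; the exact weighting in the paper may differ): average the binary entropy of each classifier's output over target samples, so that minimizing it pushes every output toward 0 (unknown) or 1 (known).

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Elementwise entropy of a Bernoulli probability p."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def open_set_entropy_loss(target_probs):
    """Average binary entropy over classes and target samples.

    target_probs: (batch, num_classes) positive-class probabilities.
    Minimizing sharpens each binary decision toward known/unknown.
    """
    return binary_entropy(target_probs).mean()

l_max = open_set_entropy_loss(np.array([[0.5, 0.5]]))    # maximal, ln 2
l_sharp = open_set_entropy_loss(np.array([[0.99, 0.01]]))  # near zero
```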
[image] Panels: (a) One-vs-All Loss — source outputs trained with the positive label and sampled hard negative labels; (b) Entropy Minimization Loss — binary entropies of the target outputs are averaged and minimized.
Current UniDA works use the output of a single classifier to compute certainty scores (e.g., confidence, entropy, and domain similarity).
[image] Panel (c): Confidence score estimation — the outputs of m sampled stochastic binary networks are averaged for source and target features, and the maximum averaged probability gives the final confidence score.
Adversarial learning is a powerful technique for reducing feature discrepancy and discovering invariant representations in domain adaptation..
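One common way to weight such adversarial alignment (a generic sketch, not necessarily the paper's exact formulation) is a per-sample-weighted binary cross-entropy on the domain discriminator, where confidence scores down-weight likely private-class samples so that only shared classes drive alignment:

```python
import numpy as np

def weighted_domain_bce(d_out, domain_label, weights, eps=1e-12):
    """Per-sample-weighted BCE for a domain discriminator.

    d_out: (batch,) discriminator probabilities of "source domain".
    domain_label: 1.0 for source samples, 0.0 for target samples.
    weights: confidence scores; low-confidence (likely private-class)
    samples contribute less to the adversarial alignment.
    """
    d = np.clip(d_out, eps, 1 - eps)
    per_sample = -(domain_label * np.log(d)
                   + (1 - domain_label) * np.log(1 - d))
    return (weights * per_sample).sum() / weights.sum()

# An undecided discriminator (p = 0.5) yields the ln 2 baseline:
loss = weighted_domain_bce(np.array([0.5, 0.5]),
                           np.array([1.0, 0.0]),
                           np.array([1.0, 1.0]))
```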
Consistency regularization is a powerful approach for learning compact representations in semi-supervised learning, and FixMatch is one of its representative methods.
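The FixMatch recipe can be sketched as follows (a simplified version with hypothetical inputs; the real method operates on augmented images through the full network): take a pseudo-label from the weakly augmented view, and apply cross-entropy on the strongly augmented view only when the weak prediction is confident.

```python
import numpy as np

def fixmatch_loss(weak_probs, strong_probs, tau=0.95, eps=1e-12):
    """FixMatch-style consistency loss.

    weak_probs / strong_probs: (batch, num_classes) predictions for
    the weakly / strongly augmented views of the same images.
    tau: confidence threshold gating the pseudo-label.
    """
    conf = weak_probs.max(axis=1)
    pseudo = weak_probs.argmax(axis=1)
    mask = conf >= tau
    if not mask.any():
        return 0.0  # no confident pseudo-labels in this batch
    picked = np.clip(strong_probs[mask, pseudo[mask]], eps, 1.0)
    return float(-np.log(picked).mean())

l_conf = fixmatch_loss(np.array([[0.96, 0.04]]),
                       np.array([[0.9, 0.1]]))   # -ln 0.9
l_unconf = fixmatch_loss(np.array([[0.6, 0.4]]),
                         np.array([[0.9, 0.1]]))  # masked out -> 0
```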
An advantage of deep discriminative clustering is that it introduces an auxiliary distribution (i.e., soft labels) that accounts for the overall feature structure of the target domain.
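A common construction for such an auxiliary distribution (this sketch follows the deep embedded clustering style; the paper's exact formula may differ) squares the soft assignments and normalizes by per-cluster frequency, so confident assignments are sharpened while cluster sizes, i.e., the overall target structure, are taken into account:

```python
import numpy as np

def auxiliary_distribution(q):
    """Sharpened auxiliary target distribution.

    q: (num_samples, num_clusters) soft cluster assignments.
    p_ij ∝ q_ij^2 / (sum_i q_ij), renormalized per sample, so that
    confident assignments are emphasized relative to cluster size.
    """
    weight = q ** 2 / q.sum(axis=0)          # square, divide by frequency
    return weight / weight.sum(axis=1, keepdims=True)

q = np.array([[0.9, 0.1],
              [0.6, 0.4]])
p = auxiliary_distribution(q)  # rows sum to 1; confident entries sharpened
```

Training the clustering head to match p then pulls target features toward compact, well-separated clusters.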
[image] Figure 1. Illustration of the proposed STUN framework: the overall model consists of a feature extractor, a stochastic binary network (SBN), and an adversarial domain discriminator. Hard negative classifier sampling (Sec. 3.2) is used for efficient training with source samples. The proposed confidence score estimation technique (Sec. 3.3) calculates robust confidence scores for source and target samples. Weighted adversarial learning is introduced for common-class alignment between the source and target domains (Sec. 3.4). Consistency regularization is used for learning a compact feature distribution in the target domain via deep discriminative clustering (Sec. 3.5).
[image] Table 1. H-score results of each method in the open-set domain adaptation (ODA) setting on Office-31 (10/0/21), OfficeHome (25/0/40), and VisDA (6/0/6); best in red, second best in blue. Compared methods: UAN, CMU, DCC, DANCE, ROS, OVANET, CPR, and STUN (ours).
[image] Table 2. H-score results of each method in the universal domain adaptation (UniDA) setting on Office-31 (10/10/11), OfficeHome (10/5/50), and VisDA (6/3/3); best in red, second best in blue. Compared methods: UAN, CMU, I-UAN, ROS, DANCE, Zhu et al., DCC, PCL, OVANET, SNAIL, CPR, and STUN (ours).
Thank You.