
Bio of Xiong Lin (熊 霖)

Since Aug 2024, I have been a GenAI algorithm expert at Geely Auto Research in NingBo, China. Before that, I was a senior algorithm expert at SenseTime in Xi'an, China, and an algorithm expert at JDTech in Beijing, China, from 2019 to 2022. During 2018-2019, I was a research scientist at the JD Digits (aka JD Finance) AI Lab in Silicon Valley, CA, US. Before joining the JD Digits AI Lab, I was a senior research engineer in Learning & Vision, Core Technology Group, Panasonic R&D Center Singapore (PRDCSG). I received my Ph.D. in pattern recognition & intelligent systems from the School of Electronic Engineering, Xidian University. My Master's supervisor was Prof. ZHANG Li1,2 and my Ph.D. supervisor was Prof. JIAO Licheng. I work on developing deep neural network models and algorithms for face detection, face recognition, image generation and instance segmentation. To date, I have published over 22 papers, including in top journals such as IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) and the International Journal of Computer Vision (IJCV), and at top international conferences such as Neural Information Processing Systems (NeurIPS), Computer Vision and Pattern Recognition (CVPR) and the International Joint Conference on Artificial Intelligence (IJCAI). I am also a reviewer for more than 10 top international journals and conferences. Intern positions are available for candidates enthusiastic about neural rendering and neural radiance fields related tasks. Contact me at the email address at the bottom if you are interested.

Research Interests:

Diffusion Transformer, Neural Rendering, Intelligent Video Synthesis, Virtual Digital Man, Federated Learning, Distributed Model Parallelism, Unconstrained/Large-Scale Face Recognition, Deep Learning Architecture Engineering, Person Re-Identification, Transfer Learning, Riemannian Manifold Optimization, Sparse and Low-Rank Matrix Factorization.

bruinxiong's GitHub stats

News and Activities:

Sep 2023: One paper, "RGMIL: Guide Your Multiple-Instance Learning Model with Regressor", was accepted by NeurIPS 2023.

Mar 2023: One paper, "Surrogate-assisted multi-objective optimization via knee-oriented Pareto front estimation", was accepted by Swarm and Evolutionary Computation.

Jan 2023: One paper, "Adaptive Self-Supervised SAR Image Registration With Modifications of Alignment Transformation", was accepted by TGRS.

Jun 2021: One paper, "Multi-scale Fused SAR Image Registration based on Deep Forest", was accepted by Remote Sensing.

Aug 2020: Invited to serve on the Program Committee (PC) of AAAI 2021, the 35th AAAI Conference on Artificial Intelligence, held virtually, Feb 2-9, 2021.

Oct 2019: One paper, "Recognizing Profile Faces by Imagining Frontal View", was accepted by IJCV.

Aug 2019: Invited to serve on the Program Committee (PC) of AAAI 2020, the 34th AAAI Conference on Artificial Intelligence, Feb 7-12, 2020, in New York, NY, USA.

Nov 2018: My former employer, Panasonic R&D Center Singapore, has just achieved No. 6 in the Wild setting of the fiercely competitive NIST Face Recognition Vendor Test (FRVT) 1:1 Verification. This shows that our earlier strategies for the wild setting were correct. The score equals what the No. 1 entry achieved three months ago, and many competitors have improved substantially; face recognition is a fiercely competitive battlefield. Congratulations to my friends and my former boss, Ms. Shen. More information can be found in the official report.

Nov 2018: One paper, "Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition", was accepted by AAAI 2019.

Aug 2018: Panasonic released news about its new facial recognition system "FacePRO", which integrates our face recognition algorithm.

Jul 2018: One paper, "3D-Aided Dual-Agent GANs for Unconstrained Face Recognition", was accepted by TPAMI.

Jun 2018: We made our first submission (API: psl) to NIST FRVT 1:1 Verification; the leaderboard can be found here. We will continue to improve our model, especially for the Wild setting.

May 2018: We released the evaluation code for IJB-A with open and closed protocols. Anyone can reproduce our results with a single ResNeXt-152 model.

Apr 2018: One paper was accepted by IJCAI 2018.

Apr 2018: One paper was accepted by the CVPR 2018 Workshop of the NVIDIA AI City Challenge.

Feb 2018: One paper was accepted by CVPR 2018.

Feb 2018: Panasonic released news (Japanese, English) of a pre-release of a new product integrated with our face recognition algorithm. News on YouTube: 1, 2, 3.

Feb 2018: Our method achieved TAR = 0.959 @ FAR = 0.0001 for 1:1 verification and TPIR = 0.946 @ FPIR = 0.01 for 1:N open-protocol identification on IJB-A (top performance among state-of-the-art methods at the time).
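
For readers unfamiliar with these metrics, the sketch below shows one standard way to compute TAR at a fixed FAR from raw similarity scores. The score distributions here are hypothetical stand-ins for illustration, not our evaluation data.

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far_target=1e-4):
    """TAR @ FAR: the fraction of genuine (matched) pairs accepted at the
    score threshold that accepts at most `far_target` of impostor pairs."""
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    k = max(int(np.floor(far_target * len(impostor))), 1)
    threshold = impostor[k - 1]
    return float(np.mean(np.asarray(genuine_scores) >= threshold))

# Hypothetical score distributions, for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 50_000)    # scores for matched pairs
impostor = rng.normal(0.3, 0.1, 500_000)  # scores for non-matched pairs
print(f"TAR @ FAR=1e-4: {tar_at_far(genuine, impostor):.3f}")
```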

Jan 2018: AI Era, a Chinese tech media outlet, published a full interview about our NIST IJB-A Face Challenge results. Our latest IJB-A performance will be updated in the next version of our arXiv paper. (New update)

Jul 2017: We entered MS-Celeb-1M Large-Scale Face Recognition with our proposed face recognition algorithm and achieved first place on the Challenge-1 Random Set and Hard Set. AI Era interviewed us about this challenge.

May 2017: One paper was accepted by NIPS 2017.

May 2017: Panasonic released news introducing our project and reporting our achievement on the IJB-A Face Verification and Identification Challenge. CNET Japan also picked up the story.

Apr 2017: We proposed Transferred Deep Feature Fusion (TDFF) for face recognition and obtained first place on all tracks of the National Institute of Standards and Technology (NIST) IARPA Janus Benchmark A (IJB-A) Unconstrained Face Verification and Identification Challenge. Official reports can be found here: Identification and Verification.

Selected Publications:

  • Zhaolong Du, Shasha Mao, Yimeng Zhang, Shuiping Gou, Licheng Jiao, Lin Xiong, "RGMIL: Guide Your Multiple-Instance Learning Model with Regressor", in Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS), New Orleans, USA, Dec 10-16, 2023. Acceptance rate is 26.1% (3321/12343)
  • Abstract: In video analysis, an important challenge is insufficient annotated data due to the rare occurrence of critical patterns, yet some applications require discriminative frame-level representations with limited annotation. Multiple Instance Learning (MIL) suits this scenario. However, many MIL models focus on analyzing the relationships between instance representations and aggregating them, while neglecting critical information from the MIL problem itself, which makes it difficult to achieve ideal instance-level performance compared with supervised models. To address this issue, we propose the Regressor-Guided MIL network (RGMIL), which effectively produces discriminative instance-level representations in a general multi-classification scenario. In the proposed method, we make full use of the regressor through our newly introduced aggregator, Regressor-Guided Pooling (RGP). RGP simulates the correct inference process of humans facing similar problems without introducing new parameters, and the MIL problem can be accurately described through the critical information from the regressor. In experiments, RGP shows dominance on more than 20 MIL benchmark datasets, with average bag-level classification accuracy close to 1. We also perform a series of comprehensive experiments on the MMNIST dataset. Experimental results illustrate that our aggregator outperforms existing methods under different challenging circumstances. Instance-level predictions are even possible under the guidance of RGP information in a long sequence. RGMIL also presents instance-level performance comparable to state-of-the-art supervised models in complicated applications. Statistical results support the assumption that a MIL model can compete with a supervised model at the instance level, as long as a structure that accurately describes the MIL problem is provided. The code is available here. (A hedged code illustration of the RGP idea appears after this publication list.)
  • Junfeng Tang, Handing Wang, Lin Xiong, "Surrogate-assisted multi-objective optimization via knee-oriented Pareto front estimation", Swarm and Evolutionary Computation, vol. 77, 2023. IF 10.267 (2023).
  • Abstract: In preference-based multi-objective optimization, knee solutions are regarded as the implicitly preferred promising solutions, particularly when users have trouble articulating any sensible preferences. However, finding knee solutions with existing a posteriori knee identification methods is hard when function evaluations are expensive, because the computational budget is wasted on non-knee solutions. Although a number of knee-oriented multi-objective evolutionary algorithms have been proposed to overcome this issue, they still demand massive function evaluations. Therefore, we propose a surrogate-assisted evolutionary multi-objective optimization algorithm via knee-oriented Pareto front estimation, which employs surrogate models to replace most of the expensive evaluations. The proposed algorithm uses a Pareto front estimation method and a cooperative knee point identification method to predict the potential knee vector. Then, based on the potential knee vector, an aggregated function with an error-tolerant assignment converts the original problem into a single-objective optimization problem for an efficient optimizer. We evaluate the proposed algorithm on 2-/3-objective problems, and experimental results demonstrate that it outperforms state-of-the-art knee identification evolutionary algorithms on most test problems within a limited computational budget.
  • Shasha Mao, Jinyuan Yang, Shuiping Gou, Kai Lu, Licheng Jiao, Tao Xiong, Lin Xiong, "Adaptive Self-Supervised SAR Image Registration With Modifications of Alignment Transformation", IEEE Transactions on Geoscience and Remote Sensing (TGRS), 2023. IF 8.125 (2023).
  • Abstract: Given its prominent performance, deep learning has been applied to SAR image registration to improve registration accuracy. Most methods construct a deep registration model to classify matched and unmatched points, treating SAR image registration as a supervised two-class classification problem. However, it is difficult to manually annotate massive numbers of matched points in practice, which limits the performance of deep networks. Besides, inevitable differences among SAR images can make some training and testing samples inconsistent, which may negatively affect training a robust registration model. To address these problems, we propose an adaptive self-supervised SAR image registration method, in which SAR image registration is regarded as a self-supervised task rather than a supervised two-class classification task. Inspired by self-supervised learning, we consider each point on a SAR image as a category-independent instance, which mitigates the requirement for manual annotations. Based on key points from the images, a self-supervised model is constructed to explore the latent feature of each key point; pairs of matched points are then sought by evaluating similarities among key points and used to calculate the alignment transformation matrix. Meanwhile, to enhance the consistency of samples, we design a new strategy that constructs multi-scale samples by transforming key points from one image into the other, which effectively avoids the inevitable diversity between two images. Specifically, the samples fed to the self-supervised model are adaptively updated as the transformation matrix is modified over iterations. Moreover, the MPAS indicator is proposed to assist in estimating the transformation. Finally, experimental results illustrate that the proposed method achieves more accurate registrations than the compared methods.
  • Shasha Mao, Jinyuan Yang, Shuiping Gou, Licheng Jiao, Tao Xiong, Lin Xiong, "Multi-scale Fused SAR Image Registration based on Deep Forest", Remote Sensing, vol. 13(11): 2227, 2021. IF 5.349 (2021). PDF.
  • Abstract: SAR image registration is a crucial problem in SAR image processing, since high-precision registration benefits downstream tasks such as change detection of SAR images. Recently, most DL-based SAR image registration methods have treated the problem as a binary classification problem with matching and non-matching categories, where a fixed scale is generally set to capture pairs of image blocks corresponding to key points to generate the training set. However, image blocks at different scales contain different information, which affects registration performance. Moreover, the number of key points is not enough to generate a large set of class-balanced training samples. Hence, we propose a new SAR image registration method that utilizes information from multiple scales to construct the matching models. Specifically, considering that the number of training samples is small, deep forest is employed to train multiple matching models. Moreover, a multi-scale fusion strategy is proposed to integrate the multiple predictions and obtain the best pairs of matching points between the reference image and the sensed image. Finally, experimental results on four datasets illustrate that the proposed method outperforms the compared state-of-the-art methods, and analyses at different scales also indicate that fusing multiple scales is more effective and more robust for SAR image registration than a single fixed scale.
  • Jian Zhao, Junliang Xing, Lin Xiong, Shuicheng Yan and Jiashi Feng, "Recognizing Profile Faces by Imagining Frontal View", International Journal of Computer Vision (IJCV), Springer, vol. 128, no. 2, pp. 460-478, 2020. IF 6.071 (2018). PDF.
  • Abstract: Extreme pose variation is one of the key obstacles to accurate face recognition in practice. Compared with current techniques for pose-invariant face recognition, which either expect pose invariance from hand-crafted features or data-driven deep learning solutions, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose-Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporating an unsupervised cross-domain adversarial training and a “learning to learn” strategy using a siamese discriminator with dynamic convolution for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representations with large intra-class affinity and inter-class separability. Qualitative and quantitative experiments on both controlled and in-the-wild benchmark datasets demonstrate the superiority of the proposed model over the state-of-the-art. The complete source code, trained models and online demo of this work will be released to facilitate future research on pose-invariant face recognition in the wild.
  • Jian Zhao+, Lin Xiong+, Jianshu Li, Shuicheng Yan and Jiashi Feng, "3D-Aided Dual-Agent GANs for Unconstrained Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 41, no. 10, pp. 2380-2394, 2019. IF 17.730 (2018) (+ Equal contribution). PDF.
  • Abstract: Synthesizing realistic profile faces is promising for more efficiently training deep pose-invariant models for large-scale unconstrained face recognition, by populating samples with extreme poses and avoiding tedious annotations. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between the distributions of synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabelled real faces, while preserving the identity information during the realism refinement. The dual agents are specifically designed for distinguishing real vs. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose and texture, preserve identity and stabilize the training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term. (A hedged sketch of these loss terms appears after this publication list.) Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms the state-of-the-art on the large-scale and challenging NIST IJB-A and CFP unconstrained face recognition benchmarks. In addition, the proposed DA-GAN is also promising as a new approach for solving generic transfer learning problems more effectively. DA-GAN is the foundation of our winning entry to the NIST IJB-A face recognition competitions, in which we secured first place on the verification and identification tracks.
  • Jian Zhao+, Lin Xiong+, Yu Cheng+, Jianshu Li, Li Zhou, Yan Xu, Yi Cheng, Karlekar Jayashree, Sugiri Pranata, Shengmei Shen, Junliang Xing, Shuicheng Yan and Jiashi Feng, "3D-Aided Deep Pose-Invariant Face Recognition", in Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI), Stockholm, Sweden, July 13-19, 2018. Oral; acceptance rate 20.46% (710/3470). (+ Equal contribution). PDF.
  • Abstract: Learning from synthetic faces, though perhaps appealing for high data efficiency, may not bring satisfactory performance due to the distribution discrepancy between synthetic and real face images. To mitigate this gap, we propose a 3D-Aided Deep Pose-Invariant Face Recognition Model (3D-PIM), which automatically recovers realistic frontal faces from arbitrary poses through a 3D face model in a novel way. Specifically, 3D-PIM incorporates a simulator with the aid of a 3D Morphable Model (3D MM) to obtain shape and appearance priors for accelerating face normalization learning, requiring less training data. It further leverages a global-local Generative Adversarial Network (GAN) with multiple critical improvements as a refiner to enhance the realism of both global structures and local details of the face simulator's output using unlabelled real data only, while preserving the identity information. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks clearly demonstrate the superiority of the proposed model over the state-of-the-art.
  • Jian Zhao, Yu Cheng, Yan Xu, Lin Xiong, Jianshu Li, Fang Zhao, Karlekar Jayashree, Sugiri Pranata, Shengmei Shen, Junliang Xing, Shuicheng Yan, and Jiashi Feng, "Towards Pose Invariant Face Recognition in the Wild", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, Jun 18-22, 2018. Acceptance rate is 29.64% (979/3303). PDF.
  • Abstract: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a “learning to learn” strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.
  • Jian Zhao, Lin Xiong, Karlekar Jayashree, Jianshu Li, Fang Zhao, Zhecan Wang, Sugiri Pranata, Shengmei Shen, Shuicheng Yan, Jiashi Feng, "Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis", in Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, Dec 4-9, 2017. Acceptance rate is 20.92% (678/3240). PDF, Full version.
  • Abstract: Synthesizing realistic profile faces is promising for more efficiently training deep pose-invariant models for large-scale unconstrained face recognition, by populating samples with extreme poses and avoiding tedious annotations. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between the distributions of synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces, while preserving the identity information during the realism refinement. The dual agents are specifically designed for distinguishing real vs. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose and texture, preserve identity and stabilize the training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term. Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms the state-of-the-art on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark. In addition, the proposed DA-GAN is also promising as a new approach for solving generic transfer learning problems more effectively. DA-GAN is the foundation of our submissions to the NIST IJB-A 2017 face recognition competitions, where we won first place on the verification and identification tracks.
  • Lin Xiong, Jayashree Karlekar, Jian Zhao, Yi Cheng, Yan Xu, Jiashi Feng, Sugiri Pranata, Shengmei Shen, "A Good Practice Towards Top Performance of Face Recognition: Transferred Deep Feature Fusion", arXiv preprint. It keeps the Top-1 performance on IJB-A; the new version has come out.
  • Abstract: Unconstrained face recognition performance evaluations have traditionally focused on the Labeled Faces in the Wild (LFW) dataset for imagery and the YouTube Faces (YTF) dataset for videos. Spectacular progress in this field has resulted in saturated verification and identification accuracies on those benchmark datasets. In this paper, we propose a unified learning framework named Transferred Deep Feature Fusion (TDFF) targeting the new IARPA Janus Benchmark A (IJB-A) face recognition dataset released by the NIST face challenge. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, which are much harder than the LFW and YTF datasets. Inspired by transfer learning, we train two advanced deep convolutional neural networks (DCNN) on two different large datasets in the source domain, respectively. By exploiting the complementarity of the two distinct DCNNs, deep feature fusion is applied after feature extraction in the target domain. Then, template-specific linear SVMs are adopted to enhance the discrimination of the framework. Finally, multiple matching scores corresponding to different templates are merged as the final result. This simple unified framework exhibits excellent performance on the IJB-A dataset. Based on the proposed approach, we have submitted our IJB-A results to the National Institute of Standards and Technology (NIST) for official evaluation. Moreover, by introducing new data and an advanced neural architecture, our method outperforms the state-of-the-art by a wide margin on the IJB-A dataset. (A hedged sketch of the fusion and SVM scoring stage appears after this publication list.)
  • Shasha Mao, Lin Xiong*, Licheng Jiao, Tian Feng, Sai-Kit Yeung, "A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion", IEEE Transactions on Cybernetics (TCYB), vol. 47, no. 5, pp. 1299–1312, 2017. IF 10.387 (2018) (* Corresponding author). PDF
  • Abstract: Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric effectively trades off the Riemannian geometry structure against the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance. (A sketch of the underlying optimization problem appears after this publication list.)
  • Shasha Mao+, Licheng Jiao, Lin Xiong+, Shuiping Gou, Bo Chen, Sai-Kit Yeung, “Weighted classifier ensemble based on quadratic form”, Pattern Recognition (PR), vol. 48(5), pp. 1688-1706, 2015. IF 5.898 (2018) (+ Equal contribution). PDF
  • Abstract: Diversity and accuracy are the two key factors that decide the ensemble generalization error. Constructing a good ensemble method by balancing these two factors is difficult, because increasing diversity normally comes at the cost of reducing accuracy. In order to improve the performance of an ensemble while avoiding the difficulty of balancing diversity and accuracy, we propose a novel method that weights each classifier in the ensemble by maximizing three different quadratic forms. In this paper, the optimal weight of individual classifiers is obtained by minimizing the ensemble error, rather than by analyzing diversity and accuracy. Since it is difficult to minimize the general form of the ensemble error directly, we approximate the error in an objective function subject to two constraints. In particular, we introduce an error term with a weight vector w0, and subtract this error from the quadratic form to obtain our approximated error. This subtraction makes minimizing the approximation form equivalent to maximizing the original quadratic form. Theoretical analysis finds that when the value of the quadratic form is maximized, the error of an ensemble system with the corresponding optimal weight w* will be smallest, especially compared with the ensemble with w0. Finally, we demonstrate improved classification performance through experimental results on an artificial dataset, UCI datasets and PolSAR image data. (A sketch of the optimization's general shape appears after this publication list.)
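
As forward-referenced in the RGMIL abstract above, the sketch below shows one plausible, parameter-free reading of Regressor-Guided Pooling: let the instance regressor score every instance, then aggregate the bag as a softmax-weighted sum of instance features. This is an illustration under stated assumptions, not the authors' exact RGP construction; the regressor here is a hypothetical stand-in.

```python
import torch
import torch.nn.functional as F

def regressor_guided_pooling(instances, regressor):
    """Hypothetical RGP-style aggregator: the regressor's per-instance
    scores guide the pooling weights; the aggregator itself adds no
    new parameters (the only parameters belong to the regressor)."""
    scores = regressor(instances).squeeze(-1)        # (n_instances,)
    weights = F.softmax(scores, dim=0)               # guidance weights
    return (weights.unsqueeze(-1) * instances).sum(dim=0)  # bag feature

# Hypothetical usage: a bag of 12 instances with 128-d features.
instances = torch.randn(12, 128)
regressor = torch.nn.Linear(128, 1)  # stand-in instance regressor
print(regressor_guided_pooling(instances, regressor).shape)  # torch.Size([128])
```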
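
To ground the three modifications named in the DA-GAN abstracts above, here is a hedged reconstruction of the objective in standard notation. The weighting coefficients are assumptions, and the equilibrium update follows the standard BEGAN-style scheme that "boundary equilibrium regularization" refers to; the papers' exact formulations may differ.

```latex
% Hedged sketch of a DA-GAN-style generator objective (\lambda's assumed):
\mathcal{L}_G = \mathcal{L}_{\mathrm{adv}}
              + \lambda_{\mathrm{p}}\,\mathcal{L}_{\mathrm{pose}}
              + \lambda_{\mathrm{id}}\,\mathcal{L}_{\mathrm{id}}

% BEGAN-style boundary equilibrium for an auto-encoder discriminator D,
% with reconstruction loss L(u) = |u - D(u)|, real x and synthesis G(z):
\mathcal{L}_D = L(x) - k_t\,L(G(z)), \qquad
k_{t+1} = k_t + \lambda_k\bigl(\gamma\,L(x) - L(G(z))\bigr)
```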
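
As forward-referenced in the TDFF abstract, here is a minimal sketch of the fusion-plus-SVM scoring stage as the abstract describes it, assuming features have already been extracted by two different source-domain DCNNs. The names, dimensions, negative pool, and SVM settings are hypothetical stand-ins, not the system's actual configuration.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fuse(feat_a, feat_b):
    """Deep feature fusion: concatenate features from two distinct DCNNs
    and L2-normalize (one common fusion choice, assumed here)."""
    fused = np.concatenate([feat_a, feat_b], axis=-1)
    return fused / np.linalg.norm(fused, axis=-1, keepdims=True)

def template_scores(probe_feats, gallery_feats, negative_pool):
    """Template-specific linear SVM: fit the probe template's fused
    features against a pool of negatives, then score each gallery template."""
    X = np.vstack([probe_feats, negative_pool])
    y = np.r_[np.ones(len(probe_feats)), np.zeros(len(negative_pool))]
    svm = LinearSVC(C=10.0).fit(X, y)
    return svm.decision_function(gallery_feats)  # higher = better match

# Hypothetical shapes: 3 probe images, 1000 negatives, 5 gallery templates.
rng = np.random.default_rng(0)
probe = fuse(rng.normal(size=(3, 256)), rng.normal(size=(3, 256)))
negatives = fuse(rng.normal(size=(1000, 256)), rng.normal(size=(1000, 256)))
gallery = fuse(rng.normal(size=(5, 256)), rng.normal(size=(5, 256)))
print(template_scores(probe, gallery, negatives))
```

Per the abstract, matching scores from multiple templates would then be merged (for example, averaged) to produce the final result.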
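
For context on the TCYB paper above, the fixed low-rank matrix completion problem it targets has the following standard form; the paper's contribution is the Riemannian metric used to turn gradients of this objective into search directions, and that metric's exact construction is in the paper.

```latex
% Fixed-rank matrix completion posed as Riemannian optimization:
\min_{X \in \mathcal{M}_k} \; \tfrac{1}{2}\,\bigl\|P_{\Omega}(X - M)\bigr\|_F^2,
\qquad
\mathcal{M}_k = \bigl\{X \in \mathbb{R}^{m \times n} : \operatorname{rank}(X) = k\bigr\}
```

Here P_Ω zeroes all entries outside the observed set Ω. A Riemannian metric g_X on M_k defines the Riemannian gradient through g_X(grad f(X), ξ) = Df(X)[ξ], which is where the proposed metric enters the nonlinear conjugate gradient solver.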
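
Finally, the weighted-ensemble optimization in the Pattern Recognition paper above has this general shape. This is a hedged sketch: the three concrete quadratic forms, the constraints, and the role of the reference weighting w0 are defined in the paper, and the simplex constraint shown is an assumed normalization.

```latex
% Hedged sketch: choose classifier weights by maximizing a quadratic form,
% with an assumed simplex normalization over the L base classifiers.
\max_{w \in \mathbb{R}^{L}} \; w^{\top} Q\, w
\quad \text{s.t.} \quad \sum_{i=1}^{L} w_i = 1, \;\; w_i \ge 0
```

Per the abstract, maximizing the quadratic form is constructed to be equivalent to minimizing an approximation of the ensemble error, so the maximizer w* yields a smaller ensemble error than the reference weighting w0.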

A full publication list can be found on my researchgate or googlescholar.

Reviewer and Member:

  • The Twelfth International Conference on Learning Representations (ICLR'2024)
  • IEEE International Conference on Computer Vision (ICCV'2023)
  • 36th Neural Information Processing Systems (NeurIPS'2022)
  • 39th International Conference on Machine Learning (ICML'2022)
  • 35th Neural Information Processing Systems (NeurIPS'2021)
  • 38th International Conference on Machine Learning (ICML'2021)
  • 35th AAAI Conference on Artificial Intelligence, Member of Program Committee (PC) for AAAI 2021
  • 9th International Conference on Learning Representations (ICLR'2021)
  • 34th Neural Information Processing Systems (NeurIPS'2020)
  • 16th European Conference on Computer Vision (ECCV'2020)
  • 34th AAAI Conference on Artificial Intelligence, Member of Program Committee (PC) for AAAI 2020
  • IEEE International Conference on Computer Vision (ICCV'2019)
  • IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'2019 ~ CVPR'2023)
  • IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IF 11.683 (2018), 2018 -
  • IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IF 4.046 (2018), 2018 -
  • 24th International Conference on Pattern Recognition (ICPR 2018)

Work Experiences:

  • 08/2024 - Now: GenAI Algorithm Expert of Geely Auto Research in NingBo, China.

  • 06/2022 - 09/2023: Senior Algorithm Expert of SenseTime in Xi'an, China.

  • 12/2019 - 06/2022: Algorithm Expert of JD Tech in Beijing, China.

  • 10/2018 - 11/2019: Research Scientist of JD Digits AI Lab in Silicon Valley, CA, US.

  • 03/2018 - 10/2018: Senior Research Engineer of Panasonic R&D Center Singapore, Singapore.

  • 09/2015 - 03/2018: Research Engineer of Panasonic R&D Center Singapore, Singapore.

  • 05/2015 - 09/2015: MV OSS Technology Development Department, 2012 Labs, Huawei Technologies Co., Ltd.

Awards:

  • Achieved the First Place Award on Track 2: Anomaly Detection of the NVIDIA AI City Challenge at the CVPR 2018 workshop (Team 15).

  • Achieved the Gold Prize award (1, 2) at the Panasonic Technology Symposium (PTS 2017) held by Panasonic headquarters, Osaka (No. 1 of 21 competitors, 4.8%).

  • Achieved the world's most accurate face recognition on the IJB-A dataset provided by NIST and received an award from Panasonic R&D Center Singapore, 2017.

  • Achieved the First Place Award on Track 1: Recognizing One Million Celebrities (with external data) at the ICCV 2017 workshop.

CV:

More detailed information can be found in my full_CV.

Hobbies:

Photography, travel, figure collecting (Saint_Seiya, Mazinger, MB_Gundam and so on), LEGO and cooking.

Contact:

Email: bruinxiong@me.com, bruinxiongmac@gmail.com

Phone: +86 18092636295, +1 669 454 6698

WeChat: bruinxiongmac

Address: NingBo, China.
