Invited Speakers

Title: Learning Single-Image 3D Representations

Speaker: Dr. Jia Deng

Bio: Jia Deng is an Assistant Professor of Computer Science at Princeton University. He received his Ph.D. from Princeton University and his B.Sc. from Tsinghua University, both in computer science. He is a recipient of the Sloan Research Fellowship, the PAMI Mark Everingham Prize, the Yahoo ACE Award, a Google Faculty Research Award, the ICCV Marr Prize, and the ECCV Best Paper Award. His research focuses on computer vision and machine learning. His papers currently have over 19,000 citations on Google Scholar.

Title: Deep high-resolution representation learning for visual recognition

Speaker: Dr. Jingdong Wang

Abstract: Classification networks have been dominant in visual recognition, from image-level classification to region-level classification (object detection) and pixel-level classification (semantic segmentation, human pose estimation, and facial landmark detection). We argue that the classification network, formed by connecting high-to-low convolutions in series, is not a good choice for region-level and pixel-level classification because it only leads to rich low-resolution representations or poor high-resolution representations obtained with upsampling processes.

We propose a high-resolution network (HRNet). The HRNet maintains high-resolution representations by connecting high-to-low resolution convolutions in parallel, and strengthens them by repeatedly performing multi-scale fusions across the parallel convolutions. We demonstrate its effectiveness on pixel-level classification, region-level classification, and image-level classification. The HRNet turns out to be a strong replacement for classification networks (e.g., ResNets, VGGNets) in visual recognition.
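The parallel-branch idea in the abstract can be illustrated in a few lines. The sketch below is a minimal toy illustration, not the actual HRNet: real branches are stacks of learned convolutions, downsampling uses strided convolutions, and upsampling uses bilinear interpolation followed by a 1x1 convolution. It only shows the structural point: branches at different resolutions run in parallel, repeatedly exchange information through fusion, and each branch keeps its resolution throughout.

```python
import numpy as np

def downsample(x):
    # Halve the spatial resolution by striding (stand-in for a strided conv).
    return x[::2, ::2]

def upsample(x):
    # Double the spatial resolution by nearest-neighbor repetition
    # (stand-in for bilinear upsampling plus a 1x1 conv).
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fuse(high, low):
    # Multi-scale fusion: each branch receives the resampled other branch.
    new_high = high + upsample(low)
    new_low = low + downsample(high)
    return new_high, new_low

# Two parallel branches: full resolution and half resolution.
high = np.ones((8, 8))
low = np.ones((4, 4))
for _ in range(3):  # repeated fusions, as described in the abstract
    high, low = fuse(high, low)

print(high.shape, low.shape)  # resolutions are preserved: (8, 8) (4, 4)
```

The key contrast with a series high-to-low network is that the high-resolution branch is never discarded and never has to be reconstructed by upsampling alone.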

Bio: Jingdong Wang is a Senior Researcher with the Visual Computing Group, Microsoft Research, Beijing, China. His current interests include CNN architecture design, human pose estimation, semantic segmentation, person re-identification, large-scale indexing, and salient object detection. He has authored one book and 100+ papers in top conferences and prestigious international journals in computer vision, multimedia, and machine learning, including a comprehensive survey on learning to hash in TPAMI. His paper was a Best Paper Finalist at ACM MM 2015. Dr. Wang is an Associate Editor of IEEE TPAMI, IEEE TCSVT, and IEEE TMM. He has served as an Area Chair or Senior Program Committee member for top conferences such as CVPR, ICCV, ECCV, AAAI, IJCAI, and ACM Multimedia. He is an ACM Distinguished Member and a Fellow of the IAPR.


Feature representation is at the core of many computer vision and pattern recognition applications such as image classification, object detection, image and video retrieval, image matching, and many others. For years, milestone engineered feature descriptors such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Histograms of Oriented Gradients (HOG), and Local Binary Patterns (LBP) dominated various domains of computer vision. The design of feature descriptors with low computational complexity has also attracted considerable attention, and a number of efficient descriptors, including BRIEF, FREAK, BRISK, and DAISY, have been presented. In the past few years we have witnessed significant progress in feature representation and learning. Traditional handcrafted features have largely been overtaken by Deep Convolutional Neural Networks (DeepCNNs), which learn powerful features automatically from data and have brought about breakthroughs in various computer vision problems. However, these advances rely on deep networks with millions or even billions of parameters, and the availability of GPUs with very high computation capability and of large-scale labeled datasets plays a key role in their success. In other words, powerful DeepCNNs are data hungry and energy hungry.

Nowadays, with an exponentially increasing number of images and videos, the emerging phenomenon of big dimensionality (millions of dimensions and above) exposes the inadequacies of existing approaches, whether traditional handcrafted features or recent deep-learning-based ones. There is thus a pressing need for new scalable and efficient approaches that can cope with this explosion of dimensionality. In addition, with the prevalence of social media networks and portable/mobile/wearable devices, which have limited resources (e.g., battery life, memory, storage space, CPUs, and bandwidth), the demand for sophisticated portable/mobile/wearable applications that handle large-scale visual data is rising. In such applications, real-time performance is of utmost importance, since users are unwilling to wait. Therefore, there is a growing need for feature descriptors that are fast to compute and memory efficient, yet exhibit good discriminability and robustness. A number of efforts in this direction, such as compact binary features, DeepCNN quantization, simple and efficient neural network architectures, and big-dimensionality-oriented feature selection, have appeared in top conferences (including CVPR, ICCV, ECCV, NIPS, and ICLR) and top journals (including TPAMI and IJCV). The aim of this workshop is to stimulate researchers from the field of computer vision to present high-quality work and to provide a cross-fertilization ground for discussions on the next steps in this important research area.

Important Dates (Tentative)

Event                       Date
Paper Submission Deadline   March 24, 2019
Notification of Acceptance  April 6, 2019
Camera-ready Due            April 18, 2019
Workshop (Half day)         June 16, 2019 (pm)


We encourage researchers to study and develop new compact and efficient feature representations that are fast to compute and memory efficient, yet exhibit good discriminability and robustness. We also encourage new theories and applications related to feature representation and learning that address these challenges. We are soliciting original contributions on a wide range of theoretical and practical issues including, but not limited to:

1. New features (handcrafted features, lightweight DeepCNN architectures, deep model compression/quantization, and feature learning in a supervised, weakly supervised, or unsupervised way) that are fast to compute, memory efficient, and suitable for large-scale problems;

2. New compact and efficient features that are suitable for wearable devices (e.g., smart glasses, smartphones, smart watches) with strict requirements for computational efficiency and low power consumption;

3. Hashing/binary code learning and related applications in different domains, e.g., content-based retrieval;

4. Evaluations of current traditional descriptors and features learned by deep learning;

5. Hybrid methods combining strengths of handcrafted and learning based approaches;

6. New applications of existing features in different domains, e.g., the medical domain.
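As a small illustration of why the compact binary codes of topic 3 are attractive for content-based retrieval: once features are binarized, similarity search reduces to Hamming distances, computed with an XOR and a popcount rather than floating-point arithmetic. The sketch below uses hypothetical 8-bit codes; real systems learn the hash function and use much longer codes.

```python
def hamming(a: int, b: int) -> int:
    # Hamming distance between two binary codes: XOR, then count set bits.
    return bin(a ^ b).count("1")

# Hypothetical 8-bit binary codes for a query and a tiny database.
query = 0b10110100
database = [0b10110110, 0b01001011, 0b10100100]

# Retrieval: rank database codes by Hamming distance to the query.
ranked = sorted(database, key=lambda code: hamming(query, code))
print([f"{code:08b}" for code in ranked])
```

The same ranking over real-valued descriptors would require a Euclidean or cosine distance per item; with binary codes it is a handful of bit operations, which is what makes such features suitable for the resource-limited devices discussed above.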

Program outline (afternoon, 16 June 2019, Hyatt Shoreline B)

Time         Event
13:50~14:00  Welcome Introduction
14:00~14:45  Invited Talk (Jia Deng)
14:45~15:25  Oral Session 1
15:25~16:25  Poster Session
16:25~17:10  Invited Talk (Jingdong Wang)
17:10~17:50  Oral Session 2
17:50~18:00  Closing Remarks

Oral Session 1 (14:45~15:25)

Oral Session 2 (17:10~17:50)

Poster Session (15:25~16:25)

Paper Submission Information

All submissions will be handled electronically via the workshop’s CMT Website. Click the following link to go to the submission site:

Papers should describe original and unpublished work on the related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take the following into account:

- All papers must be written and presented in English.

- All papers must be submitted in PDF format. The workshop paper format guidelines are the same as for the main conference papers.

- The maximum paper length is 8 pages (excluding references). Note that shorter submissions are also welcome.

- The accepted papers will be published in the CVF open access archive as well as in IEEE Xplore.


Dr. Li Liu
(University of Oulu & NUDT)
Dr. Wanli Ouyang
(University of Sydney)

Dr. Jiwen Lu
(Tsinghua University)
Prof. Matti Pietikäinen
(University of Oulu)

Previous CEFRL Workshop

· 2nd CEFRL Workshop in conjunction with ECCV 2018

· 1st CEFRL Workshop in conjunction with ICCV 2017

Please contact Li Liu if you have any questions. The webpage template is courtesy of Georgia.