Publications 2018

Journals

  1. Long Chen, Hanwang Zhang, Jun Xiao, Xiangnan He, Shiliang Pu, Shih-Fu Chang. Scene Dynamics: Counterfactual Critic Multi-Agent Training for Scene Graph Generation. arXiv preprint arXiv:1812.02347, 2018.
  2. Yu-Gang Jiang, Zuxuan Wu, Jinhui Tang, Zechao Li, Xiangyang Xue, Shih-Fu Chang. Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification. IEEE Transactions on Multimedia, 20(11):3137-3147, 2018.
  3. Yu-Gang Jiang, Zuxuan Wu, Jun Wang, Xiangyang Xue, Shih-Fu Chang. Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2):352-364, 2018.
  4. Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen. Model-Driven Feedforward Prediction for Manipulation of Deformable Objects. IEEE Transactions on Automation Science and Engineering, (99):1-18, 2018.
  5. Xu Zhang, Felix Xinnan Yu, Svebor Karaman, Wei Zhang, Shih-Fu Chang. Heated-Up Softmax Embedding. arXiv preprint arXiv:1809.04157, 2018.

Conferences

  1. Victor Campos, Brendan Jou, Xavier Giro-i-Nieto, Jordi Torres, Shih-Fu Chang. Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks. In International Conference on Learning Representations (ICLR), 2018.
  2. Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang. Zero-Shot Visual Recognition using Semantics-Preserving Adversarial Embedding Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  3. Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang. Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks. In Advances in Neural Information Processing Systems (NIPS), 2018.
  4. Hongzhi Li, Joseph G. Ellis, Lei Zhang, Shih-Fu Chang. PatternNet: Visual Pattern Mining with Deep Neural Network. In Proceedings of the 2018 ACM International Conference on Multimedia Retrieval (ICMR), pages 291-299, 2018.
  5. Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, Shih-Fu Chang. Entity-aware Image Caption Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4013-4023, 2018.
  6. Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giro-i-Nieto, Shih-Fu Chang. Online Detection of Action Start in Untrimmed, Streaming Videos. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  7. Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang. AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  8. Spencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, Clare Voss. Incorporating Background Knowledge into Video Description Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3992-4001, 2018.
  9. Hanwang Zhang, Yulei Niu, Shih-Fu Chang. Grounding Referring Expressions in Images by Variational Context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

 
