The November meeting is hosted by guest organizers
Jianchao Yang (Adobe) is a research scientist in the Imagination Lab at Adobe Research, San Jose, California. He received his M.S. and Ph.D. degrees from the Electrical and Computer Engineering (ECE) Department of the University of Illinois at Urbana-Champaign (UIUC) in 2011, under the supervision of Professor Thomas S. Huang at the Beckman Institute. Before that, he received his Bachelor's degree from the EEIS Department of the University of Science and Technology of China (USTC) in 2006. His research interests are in the broad areas of computer vision, machine learning, and image processing. Specifically, he has extensive experience in the following research areas: image categorization, object recognition and detection, and image retrieval; image and video super-resolution, denoising, and deblurring; face recognition and soft biometrics; sparse coding and sparse representation; and unsupervised, supervised, and deep learning.
Eugene Bart (PARC) is a member of the research staff at PARC, Palo Alto, California. He received his Ph.D. degree from the Weizmann Institute in 2004, under the supervision of Prof. Shimon Ullman. Prior to that, he received his B.Sc. degree in physics and computer science from Tel Aviv University. His research interests are in machine learning, computer vision, and biological vision.
Bay Area Multimedia Forum (BAMMF)
BAMMF is a Bay Area Multimedia Forum series. Experts from both academia and industry are invited to exchange ideas and information through talks, tutorials, posters, panel discussions, and networking sessions. Topics of the forum include emerging areas in vision, audio, touch, speech, text, various sensors, human-computer interaction, natural language processing, machine learning, media-related signal processing, communication, and cross-media analysis. Talks at the forum may cover advances in algorithms and development, demonstrations of new inventions, product innovation, business opportunities, and more. If you are interested in giving a presentation at the forum, please contact us.
The 5th BAMMF
The 5th BAMMF was held in the George E. Pake Auditorium in Palo Alto, CA, USA on November 20, 2014. The slides and videos of the speakers at the forum have been made available on the BAMMF web page, and we provide here an overview of their talks. For the speakers' bios, slides, and videos, please visit the web page.
Industrial Impact of Deep Learning – From Speech Recognition to Language and Multimodal Processing
Li Deng (Deep Learning Technology Center, Microsoft Research, Redmond, USA)
Since 2010, deep neural networks have started making a real impact in the speech recognition industry, building upon earlier work on (shallow) neural nets and (deep) graphical models developed by both the speech and machine learning communities. This keynote will first reflect on the historical path to this transformative success. The role of well-timed academic-industrial collaboration will be highlighted, as will the advances in big data and big compute and the seamless integration between application-domain knowledge of speech and the general principles of deep learning. Then, an overview will be given of the sweeping achievements of deep learning in speech recognition since its initial success in 2010 (as well as in image recognition since 2012). These achievements have resulted in across-the-board, industry-wide deployment of deep learning. The final part of the talk will focus on applications of deep learning to large-scale language/text and multimodal processing, a more challenging area where a potentially much greater industrial impact than in speech and image recognition is emerging.
Brewing a Deeper Understanding of Images
In this talk I will introduce recent developments in the image recognition field from two perspectives: as a researcher and as an engineer. In the first part I will describe our recent entry "GoogLeNet", which won the ImageNet 2014 challenge, including the motivation for the model and the knowledge learned from the inception of the model. In the second part, I will dive into the practical details of Caffe, an open-source deep learning library I created at UC Berkeley, and show how one can use the toolkit for a quick start in deep learning as well as for integration and deployment in real-world applications.
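To give a flavor of the "quick start" the abstract mentions, here is a minimal sketch of running a pretrained model through Caffe's Python interface. The file names (deploy.prototxt, model.caffemodel, cat.jpg) and the blob names "data" and "prob" are placeholders that depend on the particular model definition, not details from the talk; mean subtraction and other preprocessing are omitted for brevity.

    import caffe

    caffe.set_mode_cpu()
    # Load a network definition and its pretrained weights (paths are placeholders).
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

    # Convert an image into the layout the network expects.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
    transformer.set_raw_scale('data', 255)           # [0, 1] floats -> [0, 255]
    transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR

    image = caffe.io.load_image('cat.jpg')
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    # Forward pass and top prediction (output blob name depends on the model).
    output = net.forward()
    print('predicted class:', output['prob'][0].argmax())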
Applied Deep Learning
I am interested in machine learning algorithms that can be applied in real-life applications and that can be trained on "raw data". Specifically, I prefer to trade simple "shallow" algorithms with task-specific handcrafted features for more complex ("deeper") algorithms trained on raw features. In that respect, I will present several general deep learning architectures that excel on various natural language, speech, and image processing tasks. I will look into specific issues related to each application domain and attempt to propose general solutions for each use case.
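As a rough, self-contained illustration of the "raw features plus a deeper model" idea contrasted above with handcrafted features, the following sketch embeds raw word indices and passes them through a small window-based convolutional scorer. All sizes, names, and the random data are illustrative assumptions, not details from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, emb_dim, hidden, n_classes, window = 1000, 50, 100, 2, 3

    E  = rng.normal(scale=0.1, size=(vocab_size, emb_dim))       # learned word embeddings (raw features)
    W1 = rng.normal(scale=0.1, size=(hidden, window * emb_dim))  # convolution over word windows
    W2 = rng.normal(scale=0.1, size=(n_classes, hidden))         # output layer

    def forward(word_ids):
        x = E[word_ids]                                          # (seq_len, emb_dim), no handcrafted features
        windows = [x[i:i + window].reshape(-1)
                   for i in range(len(word_ids) - window + 1)]
        h = np.tanh(np.stack(windows) @ W1.T)                    # hidden layer over each window
        pooled = h.max(axis=0)                                   # max-over-time pooling
        return W2 @ pooled                                       # class scores

    print(forward(np.array([4, 27, 311, 9, 83])))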
Compositional Language and Visual Understanding
In this talk, I will describe deep learning algorithms that learn representations for language that are useful for solving a variety of complex language tasks. I will focus on three projects:
- Contextual sentiment analysis (e.g. an algorithm that actually learns what's positive in the sentence: "The Android phone is better than the iPhone")
- Question answering to win trivia competitions (like IBM Watson's Jeopardy system, but with a single neural network)
- Multimodal sentence-image embeddings to find images that visualize sentences and vice versa (with a fun demo!)

All three tasks are solved with a similar type of recursive neural network algorithm.
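The core of that recursive approach is a composition step that merges two child vectors into a parent vector of the same dimensionality, so the operation can be applied repeatedly up a parse tree. The sketch below shows only that step; the dimensionality, weights, and the toy right-branching tree are illustrative assumptions, not details from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                                          # dimensionality of node vectors
    W = rng.normal(scale=0.1, size=(d, 2 * d))     # composition matrix
    b = np.zeros(d)

    def compose(left, right):
        """Parent representation p = tanh(W [left; right] + b)."""
        return np.tanh(W @ np.concatenate([left, right]) + b)

    # Compose toy word vectors for a four-word phrase right-to-left.
    words = [rng.normal(size=d) for _ in range(4)]
    node = words[-1]
    for w in reversed(words[:-1]):
        node = compose(w, node)
    print(node.shape)  # (8,) -- same size as a word vector, so it can serve as a child again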