- Yoshua Bengio : September 30, 2013
- Alessandro Acquisti : October 1, 2013
- Mark Burge : October 2, 2013 (Dr. Burge is unable to deliver his keynote talk)
- Sargur Srihari : October 2, 2013
Prof. Yoshua Bengio
Department of Computer Science and Operations Research, Université de Montréal
Canada Research Chair in Statistical Learning Algorithms
Deep Learning towards AI
Abstract: Deep learning methods have been extremely successful recently, in particular in the areas of speech recognition, object recognition and language modeling. Deep representations are representations at multiple levels of abstraction, of increasing non-linearity. The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This talk first reviews recent advances in training deep supervised networks, and then introduces recent work in the area of unsupervised feature learning and deep learning of generative models, focusing on advances in understanding the probabilistic and geometric (manifold) aspects of regularized auto-encoders. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation and manifold learning. We show that denoising auto-encoders and their deep, recurrent, stochastic generalization (called Generative Stochastic Networks, or GSNs) can be associated with a Markov chain whose stationary distribution is a consistent estimator of the data-generating distribution. This circumvents one of the challenges of probabilistic models (especially deep ones), namely the need for approximate inference and MCMC in the middle of the training loop; GSNs can instead be trained by back-propagation. This is one of several challenges the talk will briefly cover on the way towards approaching AI with deep learning.
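As a rough illustration of the Markov chain mentioned in the abstract (a toy sketch, not code from the talk): in a GSN one alternates a corruption step with sampling from a denoising distribution. Here the data are one-dimensional Gaussian, so the exact denoising posterior is known in closed form rather than learned; all names and constants below are illustrative assumptions.

```python
import numpy as np

# Toy 1-D sketch of the corrupt/denoise Markov chain behind GSNs.
# Assumption: data ~ N(0, 1) with Gaussian corruption, so the denoising
# distribution P(x | x_tilde) is available in closed form (in a real GSN
# it is a learned neural network).
rng = np.random.default_rng(0)
DATA_STD, NOISE_STD = 1.0, 0.5

def corrupt(x):
    # Corruption step C(x_tilde | x): add isotropic Gaussian noise.
    return x + rng.normal(0.0, NOISE_STD, size=x.shape)

def denoise(x_tilde):
    # Sample from the posterior P(x | x_tilde). For Gaussian data and
    # noise this is N(w * x_tilde, w * NOISE_STD**2) with shrinkage w.
    w = DATA_STD**2 / (DATA_STD**2 + NOISE_STD**2)
    post_std = np.sqrt(w * NOISE_STD**2)
    return w * x_tilde + rng.normal(0.0, post_std, size=x_tilde.shape)

def gsn_chain(x0, steps):
    # Alternate corruption and denoising; at stationarity the x samples
    # follow the data-generating distribution (here N(0, 1)).
    x, samples = x0, []
    for _ in range(steps):
        x = denoise(corrupt(x))
        samples.append(x.copy())
    return np.concatenate(samples)

samples = gsn_chain(np.zeros(1), steps=20000)
print(samples.mean(), samples.std())  # both close to the data's 0.0 and 1.0
```

The point of the sketch is the abstract's claim: the chain's stationary distribution matches the data-generating distribution, and only an ordinary (back-prop-trainable) denoiser is needed, not MCMC inside the training loop.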
Biography: Yoshua Bengio received a PhD in Computer Science from McGill University, Canada in 1991. After two post-doctoral years, one at M.I.T. with Michael Jordan and one at AT&T Bell Laboratories with Yann LeCun and Vladimir Vapnik, he became professor at the Department of Computer Science and Operations Research at Université de Montréal. He is the author of two books and around 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing and manifold learning. He is among the most cited Canadian computer scientists and is or has been associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms, since 2006 an NSERC Industrial Chair, and since 2005 he has been a Fellow of the Canadian Institute for Advanced Research. He is on the board of the NIPS foundation and has been program chair and general chair for NIPS. He has co-organized the Learning Workshop for 14 years and co-created the new International Conference on Learning Representations. His current interests are centered around a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning. As of July 2013, Google Scholar finds more than 13,600 citations to his work, yielding an h-index of 50.
Prof. Alessandro Acquisti
Associate Professor at the Heinz College,
Carnegie Mellon University (CMU)
Co-director of CMU Center for Behavioral and Decision Research
Privacy in the Age of Augmented Reality
Abstract: Alessandro Acquisti will present the results of a series of experiments investigating the feasibility of combining publicly available Web 2.0 data with off-the-shelf face recognition software for the purpose of large-scale, automated individual re-identification. Two experiments demonstrated the ability to identify strangers online (on a dating site where individuals protect their identities by using pseudonyms) and offline (in a public space), based on photos made publicly available on a social network site. A third proof-of-concept experiment illustrated the ability to infer strangers’ personal or sensitive information (their interests and Social Security numbers) from their faces, by combining face recognition, data mining algorithms, and statistical re-identification techniques. The results highlight the implications of the inevitable convergence of face recognition technology and increasing online self-disclosures, and the emergence of “personally predictable” information. They raise questions about the future of privacy in an “augmented” reality world in which online and offline data will seamlessly blend.
Prof. Sargur Srihari
University at Buffalo
Evaluating Likelihood Ratios in Forensic Identification
Abstract: Forensic identification is the task of determining whether or not observed evidence arose from a known source. It is useful to associate a degree of confidence with the identification/exclusion/no-opinion decision, since uncertainty is always present and judges and juries have begun to expect such a characterization in courtroom presentations. Today, in most forensic domains outside of DNA, it is not possible to make a probability statement, since the necessary distributions cannot be computed with reasonable accuracy even when the number of evidence measurements is small. This talk will describe methods for the evaluation of a likelihood ratio (LR): the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) to that under the exclusion hypothesis (that the evidence did not arise from the source). Since the joint probability approach is computationally and statistically infeasible (because the number of parameters to be determined is exponential in the number of variables), we can replace the joint probability by another probability: that of the (dis)similarity between evidence and object under the two hypotheses. While this distance-based approach has complexity that is linear in the number of variables, it is an oversimplification. A third method, which decomposes the LR into a product of two factors, one based on distance and the other on rarity, has intuitive appeal: forensic examiners assign higher importance to rare attributes in the evidence. Theoretical discussions of the three approaches and empirical evaluations done with several data types (continuous, binary, multinomial and graph features) will be described. Experiments with handwriting, footwear marks and fingerprints show that the distance-and-rarity method is significantly better than the distance-only method.
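Schematically, the three approaches in the abstract can be contrasted as follows, with evidence $e$, known source $s$, identification hypothesis $h_0$ and exclusion hypothesis $h_1$ (the notation, and the particular rarity term shown, are illustrative assumptions, not taken from the talk):

```latex
% Joint-probability form of the likelihood ratio
\[
  \mathrm{LR}(e, s) \;=\; \frac{P(e, s \mid h_0)}{P(e, s \mid h_1)}
\]
% Distance-based simplification: compare only a (dis)similarity score d(e, s)
\[
  \mathrm{LR}(e, s) \;\approx\; \frac{P\!\big(d(e, s) \mid h_0\big)}{P\!\big(d(e, s) \mid h_1\big)}
\]
% Distance-and-rarity decomposition: a distance factor times a rarity factor,
% the latter growing as the matched characteristics become less common
% (shown here, as one possible form, as a reciprocal probability)
\[
  \mathrm{LR}(e, s) \;\approx\;
  \underbrace{\frac{P\!\big(d(e, s) \mid h_0\big)}{P\!\big(d(e, s) \mid h_1\big)}}_{\text{distance}}
  \times \underbrace{\frac{1}{P(s)}}_{\text{rarity}}
\]
```

The middle form is linear in the number of variables but discards which attributes matched; the third form restores the examiner's intuition that a match on a rare attribute should count for more than a match on a common one.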
Biography: Sargur (Hari) Srihari is a SUNY Distinguished Professor in the Computer Science and Engineering Department at the State University of New York at Buffalo. He is the founding director of CEDAR, the Center of Excellence for Document Analysis and Recognition, which was recognized as the first United States Postal Service Center of Excellence in 1991. Research at CEDAR spawned a new thread of work in pattern recognition which led to the first Handwritten Address Interpretation (HWAI) system, the first name and address block reader (NABR) used by the IRS, and the first comprehensive forensic handwriting examination system.
Srihari has been a member of several national committees, including the Board of Scientific Counselors of the National Library of Medicine for six years (2001-2007), the National Academy of Sciences Committee on Identifying the Needs of the Forensic Science Community (2006-2008), the NIST Expert Working Group on Human Factors in Latent Print Analysis (2008-2010), and the Houston Forensic Science LGC Technical Advisory Group (2013-2015).
Srihari’s honors include: the Outstanding Achievements Award of IAPR/ICDAR in Beijing, China, in 2011; Fellow of the Institute of Electronics and Telecommunications Engineers (IETE, India) in 1992; Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1995; Fellow of the International Association for Pattern Recognition in 1996; and distinguished alumnus of the Ohio State University College of Engineering in 1999.
Srihari has served as principal adviser on 37 doctoral dissertations. He currently teaches courses on Machine Learning and on Probabilistic Graphical Models.
Srihari received a B.Sc. in Physics and Mathematics from Bangalore University in 1967, a B.E. in Electrical Communication Engineering from the Indian Institute of Science, Bangalore in 1970, and a Ph.D. in Computer and Information Science from the Ohio State University, Columbus in 1976.