Brief Biography

Updated March 2015

Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and the MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He was a Professor of EE at UC Santa Cruz from 1999 to 2014, where he is now a visiting faculty member. He was Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google[x], where he helped develop the imaging pipeline for Google Glass. He currently leads the Computational Imaging team in Google Research. He holds 8 US patents, several of which are commercially licensed. He founded MotionDSP in 2005. He has been a keynote speaker at numerous technical conferences, including PCS, SPIE, and ICME, and, along with his students, has won several best paper awards from the IEEE Signal Processing Society. He is a Fellow of the IEEE "for contributions to inverse problems and super-resolution in imaging."

Recent News

Updated April 2015

  • April 2015: devCam: Open-source Android Camera Control for Algorithm Development and Testing --
    Rob Sumner has built an open-source app that makes it easy for researchers in computational photography, vision, and image processing to capture images and bursts, run experiments, and test their algorithms. devCam makes it simple to generate and capture a set of photographic exposures with designated values for standard photographic settings. It is designed to give the user as much control as the camera allows, using the Camera2 API (requires Android 5.0 Lollipop or later) to provide manual control over parameters such as exposure time, ISO, aperture, focal length, and focus distance. devCam essentially turns your Android device into an easily scripted "DSLR": it allows manual control over photographic parameters when that is desired, and use of the camera's auto-focus/exposure algorithms when it is not. devCam is primarily intended as a tool for those interested in computational photography research. Images and bursts are saved for later analysis and manipulation in more powerful engineering environments (e.g., MATLAB, Python, C++). Each output image is accompanied by metadata about the image and the camera device at the time of capture.
  • November 2014: My student Sujoy Biswas Kumar won the Best Student Paper Award at ICIP 2014.
  • October 2014: I gave a plenary talk at the SPIE Optics and Photonics Conference in San Diego. Here is the video of my talk.
  • May 2014: I gave a plenary talk at the Technion's TCE Conference. Here is the video of my talk along with the slides.
  • March 2014: New software package for our paper "Deconvolving PSFs for a Better Motion Deblurring Using Multiple Images", European Conference on Computer Vision (ECCV) 2012.
  • December 2013: I gave a plenary talk at the 2013 Picture Coding Symposium. The slides for my talk can be found here.
  • October 2013: New patent no. 8,559,671 issued: "Training-free generic object detection in 2-D and 3-D using locally adaptive regression kernels".

Paper Highlights

Updated March 2015

A. Kheradmand and P. Milanfar, "A General Framework for Regularized, Similarity-based Image Restoration", IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5136-5151, Dec. 2014
We've developed an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function consisting of a new data fidelity term and a regularization term derived from this normalized graph Laplacian. The specific form of the cost function allows us to carry out a spectral analysis of the algorithm. The approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening.
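To give a concrete flavor of this kind of graph-Laplacian-regularized restoration, here is a minimal Python/NumPy sketch. It assumes a simple quadratic cost ||y - x||^2 + eta * x' L x (the denoising case), with illustrative photometric affinity weights and parameter values; the paper's actual data term, affinity construction, and normalization differ, so treat this only as a rough starting point.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact formulation):
# restore an image by minimizing ||y - x||^2 + eta * x' L x, where
# L = I - D^{-1/2} W D^{-1/2} is a normalized graph Laplacian built from
# simple Gaussian photometric affinities in a small local window.
import numpy as np
from scipy.sparse import csr_matrix, diags, identity
from scipy.sparse.linalg import cg

def affinity_matrix(img, radius=2, h=0.1):
    """Sparse affinity matrix W with Gaussian photometric weights in a local window."""
    H, Wd = img.shape
    idx = np.arange(H * Wd).reshape(H, Wd)
    pix = img.ravel()
    rows, cols, vals = [], [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            src = idx[max(0, dy):H + min(0, dy), max(0, dx):Wd + min(0, dx)].ravel()
            dst = idx[max(0, -dy):H + min(0, -dy), max(0, -dx):Wd + min(0, -dx)].ravel()
            diff = pix[src] - pix[dst]
            rows.append(src)
            cols.append(dst)
            vals.append(np.exp(-(diff ** 2) / h ** 2))
    rows, cols, vals = map(np.concatenate, (rows, cols, vals))
    return csr_matrix((vals, (rows, cols)), shape=(H * Wd, H * Wd))

def restore(y, eta=2.0):
    """Denoise y by solving (I + eta * L) x = y with conjugate gradients."""
    W = affinity_matrix(y)
    d = np.asarray(W.sum(axis=1)).ravel() + 1e-12
    Dm12 = diags(1.0 / np.sqrt(d))
    n = W.shape[0]
    L = identity(n) - Dm12 @ W @ Dm12        # normalized graph Laplacian
    x, _ = cg(identity(n) + eta * L, y.ravel(), maxiter=200)
    return x.reshape(y.shape)
```

Since (I + eta * L) is symmetric positive definite, conjugate gradients solves the system without ever forming a dense matrix, which is what makes this kind of similarity-based regularization practical at image scale.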
H. Talebi and P. Milanfar, "Nonlocal Image Editing", IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4460-4473, Oct. 2014
This is a new image editing tool, based on the spectrum of a global filter computed from image affinities. The orthonormal eigenvectors of the filter matrix are highly expressive of the coarse and fine details in the underlying image. Each eigenvalue can boost or suppress the corresponding signal component at each scale. This endows the filter with a number of important capabilities, such as edge-aware sharpening, tone manipulation, and abstraction, to name a few. The edits can also be easily propagated across the image.
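To make the spectral-editing idea concrete, here is a minimal Python sketch (reusing the affinity_matrix helper from the restoration sketch above). It projects the image onto the leading eigenvectors of a symmetrically normalized filter matrix and re-weights each component with a user-supplied function of its eigenvalue; the filter normalization, the number of eigenvectors, and the boost function below are illustrative choices, not the paper's exact construction.

```python
# Minimal sketch (illustrative): edit an image by re-weighting the leading
# spectral components of a global affinity-based filter.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def spectral_edit(img, boost, k=50, radius=2, h=0.1):
    """Apply boost(lambda_i) to the k leading spectral components of the filter."""
    W = affinity_matrix(img, radius=radius, h=h)   # helper from the sketch above
    d = np.asarray(W.sum(axis=1)).ravel() + 1e-12
    Dm12 = diags(1.0 / np.sqrt(d))
    K = Dm12 @ W @ Dm12                            # symmetric filter approximation
    lam, V = eigsh(K, k=k, which='LM')             # k leading eigenpairs
    y = img.ravel()
    coeffs = V.T @ y                               # project image onto eigenvectors
    edited = V @ (boost(lam) * coeffs) + (y - V @ coeffs)  # residual passes through
    return edited.reshape(img.shape)

# Example: mild edge-aware sharpening -- amplify components with smaller
# eigenvalues (finer detail) while leaving the smoothest components alone.
# sharpened = spectral_edit(img, boost=lambda lam: 1.0 + 0.5 * (1.0 - lam))
```

Only the k leading eigenvectors are manipulated here, so the remaining fine-scale residual is left untouched; increasing k gives finer control over the edit at higher computational cost.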