Camera Depth of Field Manipulation for Pre- and Post-Image Capture

Digital photography is the engine behind many digital devices, such as DSLRs, smartphones, printers, and photocopiers. All kinds of image processing operations fall within the scope of this field, ranging from low-level operations such as intensity manipulation, through mid-level operations such as feature representation, to high-level ones such as semantic recognition. The common objective is to study, and build applications around, whatever improves the digital photography experience.

Among the many problems tackled in this field, one is the issue of Depth-of-Field (DoF). DoF arises naturally from the optics of the current generation of cameras. Like the human eye, a camera can bring only one plane of the scene into sharp focus; everything behind or in front of that plane falls progressively out of focus.
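To make the phenomenon concrete, the amount of blur a scene point receives can be approximated with the classic thin-lens model: the diameter of its blur circle (the "circle of confusion") grows with the aperture and with the point's distance from the focal plane. Below is a small Python sketch of that formula; the lens parameters and distances are illustrative assumptions, not numbers from the talk.

```python
def coc_diameter(f_mm, n_stop, focus_dist_mm, subject_dist_mm):
    """Circle-of-confusion diameter (mm) under the thin-lens model.

    f_mm            -- focal length of the lens
    n_stop          -- f-number (aperture diameter = f_mm / n_stop)
    focus_dist_mm   -- distance the lens is focused at
    subject_dist_mm -- distance of the point being imaged
    """
    aperture = f_mm / n_stop
    return (aperture
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * f_mm / (focus_dist_mm - f_mm))

# A hypothetical 50 mm lens focused at 2 m: a point 3 m away blurs
# noticeably at f/1.8, while stopping down to f/8 shrinks the blur circle.
print(coc_diameter(50, 1.8, 2000, 3000))  # ~0.24 mm
print(coc_diameter(50, 8.0, 2000, 3000))  # ~0.05 mm
```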

DoF can be a real nuisance in auto-focus photography. Since choosing the subject of interest is subjective, the decisions made by automatic processes are prone to error. To alleviate this shortcoming, this talk reviews a novel approach for recovering visual information from out-of-focus regions. In other words, the series of papers reviewed in the talk proposes a systematic way to deblur the out-of-focus regions.

Abdullah, the presenter and main author of the papers, is a talented PhD student at York University working with Prof. Michael Brown. Michael is one of the leading active vision researchers at the Centre for Vision Research at York University, focusing mainly on low-level processing in digital photography. In this excellent talk, Abdullah presents a deep learning approach that models the blurring of out-of-focus regions imposed by DoF, using the dual-pixel sensors found in smartphones and DSLRs. Dual-pixel cameras have pairs of receptors that measure light intensity with a small offset between them. That offset yields a measurable disparity across the sensor array, from which the DoF can be estimated, and this relationship can be modeled with neural networks to recover a deblurred image.
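As a rough illustration of this idea (not the authors' actual architecture, which is detailed in the papers), here is a minimal PyTorch sketch of an encoder-decoder that takes the two dual-pixel sub-aperture views as input and regresses a deblurred image. The layer widths and image sizes are arbitrary assumptions made for the example.

```python
import torch
import torch.nn as nn

class DefocusDeblurNet(nn.Module):
    """Toy encoder-decoder mapping a dual-pixel pair to a sharp image.

    The two sub-aperture views (left/right) are concatenated along the
    channel axis, so the network can exploit their defocus disparity.
    """
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, left, right):
        x = torch.cat([left, right], dim=1)  # (B, 6, H, W)
        return self.decoder(self.encoder(x))

# One training step against an all-in-focus ground-truth image
# (random tensors stand in for real data here).
net = DefocusDeblurNet()
left = torch.rand(1, 3, 128, 128)
right = torch.rand(1, 3, 128, 128)
sharp = torch.rand(1, 3, 128, 128)
loss = nn.functional.mse_loss(net(left, right), sharp)
loss.backward()
```

The key design point is the early concatenation of the two views: the disparity between them encodes how defocused each region is, so the network can learn depth-dependent deblurring directly from paired training data.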

If you are interested in finding out more and learning the details of the proposed approach, check out the papers here:

Information about the event is provided at this link, and you can watch the recorded video of the event in the YouTube stream below. I will be posting subsequent events in the future, so stay tuned for more cool stuff.
