
Muratcan Cicek

A PhD student at UCSC, advised by Prof. Roberto Manduchi

About

I am Muratcan Cicek, a self-motivated computer scientist.
I am a 24-year-old person with motor impairments who was born and raised in Turkey, though I have been in love with California since childhood.

I graduated with a bachelor's degree in Computer Science from Ozyegin University, Turkey, in 2017. I also spent two academic terms at Oregon State University in the U.S. as an exchange student. For my senior year, I was awarded the Google Europe Scholarship for Students with Disabilities 2016, a highly competitive and prestigious scholarship sponsored by Google. I was also hired by eBay and Turkcell concurrently, working part-time on their Big Data projects with my Machine Learning skills in 2016 and 2017.

In 2017, I was admitted to the PhD program in Computer Engineering at UCSC to study Computer Vision, and I am now part of Professor Roberto Manduchi's Computer Vision Lab at UCSC. Recently, I was hired by eBay's Computer Vision team to implement the first library that brings head-based pointing to iOS. We open-sourced HeadGazeLib in September 2018, and it received wide coverage in the international media, including TechCrunch and VentureBeat.

I am continuing to study Computer Vision-based Human-Computer Interaction, especially head-based pointing, for my PhD thesis.

 

Research Statement

My special needs shape my research and lead me to study the fields of Computer Vision (CV) and Human-Computer Interaction (HCI). As a person with motor impairments, I benefit from vision-based interaction (VBI) solutions every day. With years of user experience with VBI and a strong background in CV, I am uniquely positioned to understand the requirements of HCI, especially VBI techniques, and to meet them with novel solutions.

In HCI, pointing and selection are two primary tasks that any computing environment strictly requires, from basic consoles to futuristic Virtual Reality (VR) applications, and people with motor impairments (PwMI) like me need alternative HCI techniques to complete them. My research aims to evolve pointing and selection methods so that they provide greater functionality with minimal physical ability. I particularly focus on Head-based Pointing (HBP), the VBI technique suited to this aim. Head-based pointing was first used as a writing tool, by attaching a regular pencil to the end of a head stylus. Today, we still have physical head-mounted styluses for touch screens, as well as sophisticated products that attach to the head and act as a virtual stylus. Beyond physical devices, a substantial number of today's alternative pointing methods employ VBI techniques that detect and track voluntary head movement for two-dimensional pointing.

There are successful applications of HBP in assistive technology (e.g., Camera Mouse, Smyle Mouse, Enable Viacam). To the best of my knowledge, however, the existing HBP software performs well only in stationary use, even though computers, and laptops in particular, have become highly mobile and can be used almost anywhere. This mobility brings several challenges that the existing software cannot handle, such as noisy backgrounds and a wide variety of lighting conditions. The primary goal of my research is to reach state-of-the-art HBP on computing devices by solving these problems.
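To make the idea of two-dimensional pointing by head movement concrete, below is a minimal sketch, my own illustration rather than any published method, of the core transfer function in HBP: mapping head yaw and pitch angles from a pose estimator to screen coordinates with a simple linear gain. The function name, parameter names, and default values are all hypothetical.

def head_pose_to_screen(yaw_deg, pitch_deg,
                        screen_w=1920, screen_h=1080,
                        yaw_range=30.0, pitch_range=20.0):
    # Map head rotation (in degrees) to pixel coordinates.
    # yaw_range / pitch_range set how far the head must turn to span
    # the full screen; anything beyond that range is clamped.
    nx = min(max((yaw_deg + yaw_range) / (2 * yaw_range), 0.0), 1.0)
    ny = min(max((pitch_range - pitch_deg) / (2 * pitch_range), 0.0), 1.0)
    return nx * screen_w, ny * screen_h

A direct mapping like this is very sensitive to pose-estimation noise, which is exactly why the smoothing techniques discussed next matter.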

Proposing a proper HCI technique via HBP requires utilizing tracking algorithms and applying machine learning while computing the pointing. I need to document the state-of-the-art Head Pose Estimation algorithms with respect to their robustness to background noise and poor lighting, in addition to their power consumption and runtime applicability. As the next step, I plan to study different machine learning techniques that calculate the pointing coordinates from the raw head pose estimate. These techniques would be similar to Kalman Filters, estimating the screen coordinates pointed at by the head. Another approach would be to treat HBP as a single end-to-end task and study the direct correlation between the camera input, which contains the user's appearance, and the pointing coordinates on the screen. During my research, I will implement these HBP solutions, evaluate my approaches through several user studies, and propose an end product that provides robust HBP for computers.
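As a rough illustration of the Kalman-filter-style smoothing mentioned above, the sketch below assumes a constant-velocity motion model over the 2D cursor position and noisy (x, y) measurements coming from an upstream head pose estimator. The class name and the noise parameters are hypothetical choices for illustration, not values from my research.

import numpy as np

class CursorKalmanFilter:
    def __init__(self, dt=1/30, process_var=50.0, meas_var=20.0):
        # State: [x, y, vx, vy]; constant-velocity transition model.
        self.x = np.zeros(4)
        self.P = np.eye(4) * 1e3           # high initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt   # x += vx*dt, y += vy*dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0  # we only observe position
        self.Q = np.eye(4) * process_var   # process noise
        self.R = np.eye(2) * meas_var      # measurement noise

    def update(self, measured_xy):
        # Predict step: propagate state and uncertainty forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update step: blend in the noisy pointing measurement.
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                            # smoothed (x, y)

Feeding each raw (x, y) estimate through update() yields a smoothed cursor trajectory; tuning process_var against meas_var trades responsiveness for stability.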
 

Papers

As the first author, I have a full paper on head-based pointing submitted to CHI 2019, which is still under review. Here are some seminal papers in my current research areas:

Dense 3D Face Alignment from 2D Videos in Real-Time

Jeni, László A., Jeffrey F. Cohn, and Takeo Kanade.

2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Vol. 1. IEEE, 2015.

Head-tracking interfaces on mobile devices: Evaluation using Fitts’ law and a new multi-directional corner task for small displays

Roig-Maimó, M. F., MacKenzie, I. S., Manresa-Yee, C., and Varona, J.

International Journal of Human-Computer Studies, 112, pp. 1-15.


Evaluating Fitts' law performance with a non-ISO task

Roig-Maimó, M. F., MacKenzie, I. S., Manresa-Yee, C., & Varona, J.

Proceedings of the XVIII International Conference on Human Computer Interaction, p. 5. ACM.