Paper in M2CAI (MICCAI workshop) on “Fine-tuning Deep Architectures for Surgical Tool Detection” and results of the Tool Detection Challenge

Paper

  • A. Zia, D. Castro, and I. Essa (2016), “Fine-tuning Deep Architectures for Surgical Tool Detection,” in Workshop and Challenges on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), Held in Conjunction with International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Athens, Greece, 2016. [PDF] [WEBSITE] [BIBTEX]
    @InProceedings{    2016-Zia-FDASTD,
      address  = {Athens, Greece},
      author  = {Aneeq Zia and Daniel Castro and Irfan Essa},
      booktitle  = {Workshop and Challenges on Modeling and Monitoring
          of Computer Assisted Interventions (M2CAI), Held in
          Conjunction with International Conference on Medical
          Image Computing and Computer Assisted Intervention
          (MICCAI)},
      month    = {October},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Zia-FDASTD.pdf},
      title    = {Fine-tuning Deep Architectures for Surgical Tool
          Detection},
      url    = {http://www.cc.gatech.edu/cpl/projects/deepm2cai/},
      year    = {2016}
    }

Abstract

Visualization of some of the training videos.

Understanding surgical workflow has been a key concern of the medical research community. One of the main advantages of surgical workflow detection is real-time operating room (OR) scheduling. For hospitals, each minute of OR time matters for reducing cost and increasing patient throughput. Traditional approaches in this field generally tackle video analysis using hand-crafted features to facilitate tool detection. Recently, Twinanda et al. presented a CNN architecture, “EndoNet”, which outperformed previous methods for both surgical tool detection and surgical phase detection. Given the recent success of these networks, we present a study of various architectures coupled with a submission to the M2CAI Surgical Tool Detection challenge. We achieved a top-3 result in the M2CAI competition with a mAP of 37.6.
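As a side note on the challenge metric: mean average precision (mAP) over tool classes can be sketched as follows. This is a generic illustration of the metric, not the official challenge evaluation code, and the scores and labels are made up.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one tool class: mean of the precision values measured
    at each true positive, with detections ranked by score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                      # true positives so far
    precision = hits / (np.arange(len(labels)) + 1)
    return float((precision * labels).sum() / labels.sum())

def mean_ap(per_class):
    """mAP: average of the per-class APs; `per_class` holds (scores, labels) pairs."""
    return float(np.mean([average_precision(s, l) for s, l in per_class]))
```

For example, `average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])` averages the precisions 1/1 and 2/3 measured at the two positives.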

 

Categories: Aneeq Zia, Awards, Computer Vision, Daniel Castro, Medical, MICCAI | Date: October 21st, 2016 | By: Irfan Essa



Paper (ACM MM 2016) “Leveraging Contextual Cues for Generating Basketball Highlights”

Paper

  • V. Bettadapura, C. Pantofaru, and I. Essa (2016), “Leveraging Contextual Cues for Generating Basketball Highlights,” in Proceedings of ACM International Conference on Multimedia (ACM-MM), 2016. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{    2016-Bettadapura-LCCGBH,
      arxiv    = {http://arxiv.org/abs/1606.08955},
      author  = {Vinay Bettadapura and Caroline Pantofaru and Irfan
          Essa},
      booktitle  = {Proceedings of ACM International Conference on
          Multimedia (ACM-MM)},
      month    = {October},
      organization  = {ACM},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Bettadapura-LCCGBH.pdf},
      title    = {Leveraging Contextual Cues for Generating
          Basketball Highlights},
      url    = {http://www.vbettadapura.com/highlights/basketball/index.htm},
      year    = {2016}
    }

Abstract


The massive growth of sports videos has resulted in a need for automatic generation of sports highlights that are comparable in quality to the hand-edited highlights produced by broadcasters such as ESPN. Unlike previous works that mostly use audio-visual cues derived from the video, we propose an approach that additionally leverages contextual cues derived from the environment that the game is being played in. The contextual cues provide information about the excitement levels in the game, which can be ranked and selected to automatically produce high-quality basketball highlights. We introduce a new dataset of 25 NCAA games along with their play-by-play stats and the ground-truth excitement data for each basket. We explore the informativeness of five different cues derived from the video and from the environment through user studies. Our experiments show that for our study participants, the highlights produced by our system are comparable to the ones produced by ESPN for the same games.
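The ranking idea can be given a rough sketch: score each play by a weighted combination of its cue values, then sort. The cue names, weights, and scoring function below are illustrative only, not the paper's actual feature set.

```python
def rank_plays(plays, weights):
    """Rank plays by a weighted sum of per-cue excitement scores.
    `plays` is a list of (play_id, {cue_name: score}) pairs."""
    def excitement(cues):
        return sum(weights.get(name, 0.0) * value for name, value in cues.items())
    return sorted(plays, key=lambda p: excitement(p[1]), reverse=True)

# Hypothetical cues and weights, for illustration only.
weights = {"crowd_audio": 0.4, "commentator_pitch": 0.3, "score_margin": 0.3}
plays = [
    ("buzzer_beater", {"crowd_audio": 0.9, "commentator_pitch": 0.8, "score_margin": 0.9}),
    ("free_throw", {"crowd_audio": 0.2, "commentator_pitch": 0.1, "score_margin": 0.3}),
]
highlights = rank_plays(plays, weights)   # most exciting plays first
```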

Categories: ACM MM, Caroline Pantofaru, Computational Photography and Video, Computer Vision, Papers, Sports Visualization, Vinay Bettadapura | Date: October 18th, 2016 | By: Irfan Essa



Announcing the new Interdisciplinary Research Center for Machine Learning at Georgia Tech (ML@GT)

Announcement from Georgia Tech’s College of Computing about a new Interdisciplinary Research Center for Machine Learning (ML@GT) that I will be serving as the Inaugural Director for.

Machine Learning @ Georgia Tech

Based in the College of Computing, ML@GT represents all of Georgia Tech. It is tasked with pushing forward the ability of computers to learn from observations and data. As one of the fastest-growing research areas in computing, machine learning spans many disciplines that use data to discover scientific principles, infer patterns, and extract meaningful knowledge.

According to School of Interactive Computing Professor Irfan Essa, inaugural director of ML@GT, machine learning (ML) has reached a new level of maturity and is now impacting all aspects of computing, engineering, science, and business. “We are in the era of aggregation, of collecting data,” said Essa. “However, machine learning is now propelling data analysis, and the whole concept of interpreting that data, toward a new era of making sense of the data, using it to make meaningful connections between information, and acting upon it in innovative ways that bring the most benefit to the most people.”

The new center begins with more than 100 affiliated faculty members from five Georgia Tech colleges and the Georgia Tech Research Institute, as well as some jointly affiliated with Emory University.

Source: Two New Interdisciplinary Research Centers Shaping Future of Computing | Georgia Tech – College of Computing

Categories: In The News, Interesting, Machine Learning | Date: October 6th, 2016 | By: Irfan Essa



20 years at GA Tech

September 22nd, 2016 marked 20 years of my being at GA Tech. My team threw me a surprise party to celebrate. Here is a spherical image of the event. So nice of them!

Lab Party for 20 – Spherical Image – RICOH THETA

Categories: Events, Interesting, Personal | Date: September 22nd, 2016 | By: Irfan Essa



Paper in IJCARS (2016) on “Automated video-based assessment of surgical skills for training and evaluation in medical schools”

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. L. Sarin, T. Ploetz, M. A. Clements, and I. Essa (2016), “Automated video-based assessment of surgical skills for training and evaluation in medical schools,” International Journal of Computer Assisted Radiology and Surgery, vol. 11, iss. 9, pp. 1623-1636, 2016. [WEBSITE] [DOI] [BIBTEX]
    @Article{    2016-Zia-AVASSTEMS,
      author  = {Zia, Aneeq and Sharma, Yachna and Bettadapura,
          Vinay and Sarin, Eric L and Ploetz, Thomas and
          Clements, Mark A and Essa, Irfan},
      doi    = {10.1007/s11548-016-1468-2},
      journal  = {International Journal of Computer Assisted
          Radiology and Surgery},
      month    = {September},
      number  = {9},
      pages    = {1623--1636},
      publisher  = {Springer Berlin Heidelberg},
      title    = {Automated video-based assessment of surgical skills
          for training and evaluation in medical schools},
      url    = {http://link.springer.com/article/10.1007/s11548-016-1468-2},
      volume  = {11},
      year    = {2016}
    }

Abstract


Sample frames from our video dataset

Purpose: Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. A supervisor has to observe each surgical trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All these approaches, however, are still highly time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities.

Method: We compare different techniques for video-based surgical skill evaluation. We use techniques that capture motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis.

Results: We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective at capturing the skill-relevant information in surgical videos.

Conclusion: Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol/word-based features. Put succinctly, skill classification accuracy improves with the granularity of the motion analysis, as demonstrated by our results on two challenging video datasets.
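The frequency-analysis idea can be illustrated with a minimal sketch, assuming a 1-D motion signal such as one coordinate of a tracked tool tip over time; the paper's actual pipeline is more involved. The intuition is that jerky, novice-like motion carries more high-frequency energy than smooth, expert-like motion.

```python
import numpy as np

def frequency_features(motion_signal, k=8):
    """First k DFT magnitude coefficients of a 1-D motion signal."""
    spectrum = np.abs(np.fft.rfft(np.asarray(motion_signal, dtype=float)))
    return spectrum[:k]

# Synthetic example: smooth (expert-like) vs jerky (novice-like) motion.
t = np.linspace(0, 1, 256, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)
jerky = smooth + 0.5 * np.sin(2 * np.pi * 40 * t)

# High-frequency energy (bins 20 and above) separates the two signals.
hf_smooth = np.abs(np.fft.rfft(smooth))[20:].sum()
hf_jerky = np.abs(np.fft.rfft(jerky))[20:].sum()
```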

Categories: Activity Recognition, Aneeq Zia, Computer Vision, Eric Sarin, Mark Clements, Medical, MICCAI, Thomas Ploetz, Vinay Bettadapura, Yachna Sharma | Date: September 2nd, 2016 | By: Irfan Essa



Fall 2016 Teaching

My teaching activities for Fall 2016 are:

Categories: Computational Photography, Computer Vision | Date: August 10th, 2016 | By: Irfan Essa



Research Blog: Motion Stills – Create beautiful GIFs from Live Photos

Kudos to the team from Machine Perception at Google Research, which just launched the Motion Stills app to generate looping GIFs from Live Photos on an iOS device. This work, in part, combines efforts like Video Textures and Video Stabilization, and a lot more.

Today we are releasing Motion Stills, an iOS app from Google Research that acts as a virtual camera operator for your Apple Live Photos. We use our video stabilization technology to freeze the background into a still photo or create sweeping cinematic pans. The resulting looping GIFs and movies come alive, and can easily be shared via messaging or on social media.

Source: Research Blog: Motion Stills – Create beautiful GIFs from Live Photos

Categories: Computational Photography and Video, Computer Vision, In The News, Interesting, Matthias Grundmann, Projects | Date: June 7th, 2016 | By: Irfan Essa



Paper (WACV 2016) “Discovering Picturesque Highlights from Egocentric Vacation Videos”

Paper

  • D. Castro, V. Bettadapura, and I. Essa (2016), “Discovering Picturesque Highlights from Egocentric Vacation Video,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2016. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{    2016-Castro-DPHFEVV,
      arxiv    = {http://arxiv.org/abs/1601.04406},
      author  = {Daniel Castro and Vinay Bettadapura and Irfan
          Essa},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      month    = {March},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2016-Castro-DPHFEVV.pdf},
      title    = {Discovering Picturesque Highlights from Egocentric
          Vacation Video},
      url    = {http://www.cc.gatech.edu/cpl/projects/egocentrichighlights/},
      year    = {2016}
    }

Abstract

We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry, and color vibrancy in egocentric videos, and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset, which includes 26.5 hours of videos taken over a 14-day vacation spanning many famous tourist destinations, and also provide results from a user study to assess our results.
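Two of the named aesthetic cues can be sketched in a simplified form. These are toy proxies, not the paper's actual features: left-right symmetry as similarity between a frame and its mirror image, and color vibrancy as the per-pixel spread across RGB channels.

```python
import numpy as np

def symmetry_score(frame):
    """Left-right symmetry of a grayscale frame with values in [0, 1]:
    1 minus the mean absolute difference from the mirrored frame."""
    frame = np.asarray(frame, dtype=float)
    return 1.0 - float(np.mean(np.abs(frame - frame[:, ::-1])))

def vibrancy_score(rgb_frame):
    """Rough color vibrancy of an (H, W, 3) RGB frame in [0, 1]:
    mean per-pixel spread between the strongest and weakest channel."""
    rgb_frame = np.asarray(rgb_frame, dtype=float)
    return float(np.mean(rgb_frame.max(axis=-1) - rgb_frame.min(axis=-1)))
```

A perfectly mirror-symmetric frame scores 1.0 on symmetry, and a gray frame (equal RGB channels everywhere) scores 0.0 on vibrancy.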

 

Categories: Computational Photography and Video, Computer Vision, Daniel Castro, PAMI/ICCV/CVPR/ECCV, Vinay Bettadapura | Date: March 7th, 2016 | By: Irfan Essa



Spring 2016 Teaching

My teaching activities for Spring 2016 are:

Categories: Computational Photography, Computational Photography and Video, Computer Vision | Date: January 10th, 2016 | By: Irfan Essa



Paper in MICCAI (2015): “Automated Assessment of Surgical Skills Using Frequency Analysis”

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. Sarin, M. Clements, and I. Essa (2015), “Automated Assessment of Surgical Skills Using Frequency Analysis,” in International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI), 2015. [PDF] [BIBTEX]
    @InProceedings{    2015-Zia-AASSUFA,
      author  = {A. Zia and Y. Sharma and V. Bettadapura and E.
          Sarin and M. Clements and I. Essa},
      booktitle  = {International Conference on Medical Image Computing
          and Computer Assisted Interventions (MICCAI)},
      month    = {October},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Zia-AASSUFA.pdf},
      title    = {Automated Assessment of Surgical Skills Using
          Frequency Analysis},
      year    = {2015}
    }

Abstract

We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. A video analysis technique for extracting motion quality via frequency coefficients is introduced. The framework is tested in a case study involving analysis of videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the information that differentiates between the surgeons' skill levels. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
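The DCT coefficients mentioned above can be computed with a short, unnormalized DCT-II sketch; this is illustrative only, and the paper's feature extraction details differ.

```python
import numpy as np

def dct2_coeffs(x, k=8):
    """First k unnormalized DCT-II coefficients of a 1-D signal:
    X_j = sum_n x_n * cos(pi/N * (n + 0.5) * j)."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))
    return np.array([np.sum(x * np.cos(np.pi / len(x) * (n + 0.5) * j))
                     for j in range(k)])
```

A constant signal puts all of its energy in the DC term: `dct2_coeffs([1.0] * 16, k=4)` is approximately `[16, 0, 0, 0]`.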

Categories: Activity Recognition, Aneeq Zia, Eric Sarin, Mark Clements, Medical, MICCAI, Papers, Vinay Bettadapura, Yachna Sharma | Date: October 6th, 2015 | By: Irfan Essa