Interview | Winners of the 2015 Algerian Paper of the Year in Computer Science & Engineering

Inspire Magazine speaks to Dr Sid Ahmed Fezza, a lecturer at the University of Oran 2, Algeria. Dr Fezza is the first author of the paper entitled “Feature-Based Colour Correction of Multiview Video for Coding and Rendering Enhancement”, published in the Journal of IEEE Transactions On Circuits and Systems for Video Technology, and which won the 2015 Algerian Paper of the Year Award in Computer Science & Engineering. His research interests include image and video coding, visual quality assessment, and immersive multimedia technologies.

Inspire Magazine: Thank you for speaking to Inspire Magazine, and many congratulations for winning the 2015 Algerian Paper of the Year Award in Computer Science & Engineering. How do you feel about winning this award?

Sid Ahmed Fezza: First, many thanks to anasr.org’s team who made this event a tremendous success and a unique opportunity to highlight the output of Algerian researchers. I am very delighted and deeply honoured to receive this award, and I would like to take this opportunity to thank the co-authors of the paper who helped make this work possible.

IM: Can you tell us what your award winning paper is about in simple terms?

Sid Ahmed Fezza

SF: In this paper, we proposed a pre-processing method to correct colours in videos acquired with a set of cameras capturing the same scene from different viewpoints. This is called multiview video, and it is used to enable 3D video technologies. The method we proposed improves on the histogram matching algorithm by using only the regions that are common across views, which allows for a more precise colour correction. In addition, to maintain the temporal correlation that exists between successive frames within a single view, the correction is performed on a temporal sliding window, which makes the technique suitable for time-varying acquisition systems, videos captured with moving cameras, and real-time broadcasting.
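For readers curious about the core idea, the sketch below illustrates basic histogram matching: building a lookup table that maps one view's pixel values so its cumulative distribution matches a reference view's. This is a minimal, generic illustration only, not the paper's method, which restricts the statistics to feature-matched overlapping regions and computes the mapping over a temporal sliding window; the function and variable names here are illustrative assumptions.

```python
import numpy as np

def match_histograms(target, reference):
    """Build a lookup table mapping target-channel values so the target's
    cumulative histogram matches the reference channel's. Inputs are uint8
    arrays of pixel values; in the paper's setting these would be drawn only
    from the commonly visible (overlapping) regions of the two views."""
    # Normalised cumulative distribution of each channel.
    t_hist = np.bincount(target.ravel(), minlength=256).astype(float)
    r_hist = np.bincount(reference.ravel(), minlength=256).astype(float)
    t_cdf = np.cumsum(t_hist) / t_hist.sum()
    r_cdf = np.cumsum(r_hist) / r_hist.sum()
    # For each target level, find the reference level with the closest CDF value.
    mapping = np.searchsorted(r_cdf, t_cdf).clip(0, 255).astype(np.uint8)
    return mapping

# Toy example: a darker "view" corrected towards a brighter reference view.
rng = np.random.default_rng(0)
reference = rng.integers(100, 200, size=(64, 64), dtype=np.uint8)
target = rng.integers(50, 150, size=(64, 64), dtype=np.uint8)
lut = match_histograms(target, reference)
corrected = lut[target]  # apply the mapping as a per-pixel lookup table
```

Applied per colour channel, the lookup table pulls the target view's colour distribution towards the reference view's, which is the effect the paper refines with overlap-only statistics.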

IM: Why is this an important problem to address?
SF: Multiview video is captured from different viewpoints with multiple cameras, which often leads to significant illumination and colour inconsistencies between views. This means that the same captured object can appear differently from one view to another. These colour mismatches reduce the correlations that exist between views, and can therefore significantly decrease compression efficiency. Furthermore, colour inconsistencies degrade the performance of view rendering algorithms by introducing visible artefacts, reducing the quality of synthesized views and making them appear unnatural. This can be annoying when viewers switch between the different views, and ultimately affects their 3D viewing experience.

IM: What are the exact contributions of the paper to your field of research, and how do they compare to other approaches that exist in the literature?

SF: To cope with the issue I described, we proposed an effective method for correcting colour inconsistencies in multiview video. Firstly, to avoid occlusion problems and make accurate corrections, we only consider overlapping regions when calculating the colour mapping function. We use a reliable feature matching technique to determine these regions. Also, to maintain temporal coherence, we apply corrections on a temporal sliding window, where each colour mapping function is defined over a group of pictures. Another issue tackled in this paper is the selection of the colour reference view. Previous research has often ignored this step even though it can strongly influence the correction results. We proposed to perform colour reference selection in a fully automatic manner, by defining a robust criterion relying on view statistics and structural content.

We ran three experiments to assess the colour correction method we proposed. The first experiment investigates its impact on visual colour consistency, the second evaluates its impact on coding performance, and the third explores the effect on the rendered views. Both the objective and subjective performance evaluations demonstrated that our approach outperforms existing methods, and that the colours of the views are harmonized and consistent. This results in significant coding gains and an improvement in the visual quality of the synthesized views, thus improving the 3D viewing experience.

An example of the colour inconsistency between views (top: before colour correction; bottom: after correction).

IM: How did you get into this particular research and where does it fit in relation to other work conducted in your research lab or institution?

SF: There is great interest in 3D video applications both in the research community and in industry. Since multiview video (MVV) content with a very large number of views is needed in such applications, efficient compression is essential to store or transmit MVV streams. Based on these considerations, Multi-view Video Coding (MVC), an extension of the H.264/MPEG-4 AVC standard, was developed to address the encoding of MVV content. My PhD thesis focused on MVC and the related quality of experience. MVC takes advantage of the redundancies that exist between views to achieve high coding performance. However, one major problem with MVV systems is linked to illumination and colour variations between views. These colour mismatches can impair both coding performance and rendering quality. To cope with such important problems, various approaches have been proposed to compensate for discrepancies between views. In this context, we set out to propose an effective colour correction method for MVV that overcomes some drawbacks identified in previous approaches.

IM: What kind of support, if any, have you received to help you accomplish this work?

SF: This work was made possible through the joint collaboration with Dr Mohamed-Chaker Larabi from the University of Poitiers, France. With the exception of a small scholarship, unfortunately, we have received little support from our own university.

IM: What is your take on the state of this type of research in Algeria? And how do you see it progressing in the future?
SF: Frankly, with the exception of a few marginal PhD projects, the field of 3D video remains largely unexplored in Algeria, and I think it will still take quite some time before we get there. This can perhaps be explained by the high cost of 3D displays, or by the fact that Algerian students have not yet had the opportunity to experience this new immersive multimedia technology. However, I hope that through collaborations with research laboratories in developed countries, we can fill this technological gap and enhance our research policy in this area.

IM: Thank you again for speaking to Inspire Magazine, and all the best for your future endeavours.

SF: It has been a pleasure to speak with you and thank you very much for giving me this opportunity to address your readership. All the best wishes to you too.

About the Author

Oussama Metatla

Dr. Oussama Metatla is an EPSRC Research Fellow at the Department of Computer Science, University of Bristol, specialising in Human-Computer Interaction. He is co-founder of Anas.org and founder and editor-in-chief of Inspire Magazine.
