Paper Submission

This conference uses EasyChair to manage paper submissions: please access the submission system via EasyChair (https://easychair.org/my/conference?conf=ivcnz2020).

Please format your paper according to the IEEE Conference Format. Templates in LaTeX and Word (A4) format are available at https://www.ieee.org/conferences/publishing/templates.html.

We ask that you submit only full manuscripts (maximum of six pages, including references). Shortly after submission, you will receive an acknowledgement email with a paper submission ID number; please use this ID number in any future correspondence. Submitted manuscripts are subject to a single-blind peer review process. After review, authors will be notified of the outcome via email (see the important dates below).

Authors of accepted papers will be invited to present their paper as either a long oral presentation (12-minute talk, 2-minute Q&A) or a short oral presentation (7-minute talk, 2-minute Q&A). For a paper to be presented and published in the conference proceedings, at least one author must register and present at the conference (physically or virtually). You will also need to pre-record and submit your presentation (see below for details).

Important Dates

Paper Submission: 7 OCT 2020 (final; extended from 16 SEP 2020 and 30 SEP 2020)
Notification: 28 OCT 2020
Camera-Ready: 15 NOV 2020
Presentation Video: 19 NOV 2020
Early Registration: 19 NOV 2020
Conference: 25-27 NOV 2020

Presentation Video Submission

Due to the conference's mixed physical/virtual attendance and the unpredictable situation surrounding COVID-19, we require all presenters to submit a pre-recorded video of their presentation in case they are unable to present live.

Please submit your presentation video before the 19th of November.

Name your video file using the following convention: "PaperID_LastNameOfFirstAuthor.mp4". For example, for paper ID 50, the filename would be "50_Chalmers.mp4". Please ensure that your video is in MP4 format, is no larger than 100 MB, has a minimum height of 720 pixels, and has a 16:9 aspect ratio. The length of the video should match the length of your presentation slot (long or short oral presentation), as if you were presenting live; you may optionally also use the 2 minutes that would normally be reserved for Q&A. You can record yourself speaking over your slideshow.

Once you have recorded your video, please upload it to our system using the upload button below.

Camera-ready Paper Formatting

Five steps for preparing your camera-ready files:

1. Revise the paper

You have already received the reviewers' comments in a previous email. Please take them carefully into account when preparing your camera-ready paper, and improve the writing where necessary and possible.

2. Use the right template

To help ensure correct formatting, please use the IEEE style files for conference proceedings as a template for your submission. These include LaTeX and Word style files.

The MS Word and LaTeX instructions and templates can be found at: https://www.ieee.org/conferences/publishing/templates.html
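
For authors taking the LaTeX route, a minimal sketch of a camera-ready source file using the IEEEtran conference class is shown below. The title, author, affiliation, and bibliography file are placeholders only; the official template files linked above remain the authoritative reference.

    % Minimal IEEEtran conference sketch (placeholders only).
    % The a4paper option matches the A4 paper-size requirement listed below.
    \documentclass[conference,a4paper]{IEEEtran}
    \usepackage{graphicx} % figures
    \usepackage{cite}     % IEEE-style citations

    \begin{document}

    \title{Placeholder Paper Title}
    \author{\IEEEauthorblockN{First Author}
    \IEEEauthorblockA{Placeholder University\\
    first.author@example.org}}
    \maketitle

    \begin{abstract}
    Abstract text goes here.
    \end{abstract}

    \section{Introduction}
    Body text goes here.

    \bibliographystyle{IEEEtran}
    \bibliography{references} % references.bib is a placeholder

    \end{document}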

  • Only papers in PDF format will be accepted.
  • Paper Size: A4.
  • Papers are limited to a maximum of 6 pages, including figures, tables, and references. Up to two extra pages per paper are allowed (i.e., up to 8 pages) at an additional charge of NZD $100 (New Zealand Dollars) per extra page.
  • No page numbers please. We will insert the page numbers for you.

Please Note: Violations of any of the above specifications may result in rejection of your paper.

3. Add the copyright notice

  • Please add the copyright notice to the bottom of the first page of your source document (choose the relevant option from the instructions given below). If necessary, contact Andrew Lensen (andrew.lensen@vuw.ac.nz) for the appropriate copyright notice.
  • The appropriate copyright clearance code notice is to appear on the bottom left of the first page of each paper according to the guidelines set forth in the Cataloguing/Copyright Instructions for an IEEE Conference Proceeding. Detailed instructions can be found at: https://www.ieee.org/publications/rights/index.html.
  • Please use the option applicable to you from the following:
    • For papers in which all authors are employed by the US government, the copyright notice is: U.S. Government work not protected by U.S. copyright
    • For papers in which all authors are employed by a Crown government (UK, Canada, and Australia), the copyright notice is: 978-1-7281-8579-8/20/$31.00 ©2020 Crown
    • For papers in which all authors are employed by the European Union, the copyright notice is: 978-1-7281-8579-8/20/$31.00 ©2020 European Union
    • For all other papers the copyright notice is: 978-1-7281-8579-8/20/$31.00 ©2020 IEEE
  • For authors using LaTeX, the copyright notice can be inserted as follows (replace the notice shown with the option applicable to you from above):
    % Add the following code before the \maketitle command:
    \IEEEoverridecommandlockouts
    \IEEEpubid{\makebox[\columnwidth]{978-1-7281-8579-8/20/\$31.00~\copyright2020 IEEE \hfill}
    \hspace{\columnsep}\makebox[\columnwidth]{ }}

    % Add the following code after the \maketitle command; it adjusts the
    % second column of the first page so the text does not run into the notice:
    \IEEEpubidadjcol
  • Proofread your source document thoroughly to confirm that it will require no revision.

4. Validate your PDF using the IEEE PDF eXpress site

The site will embed validation tags inside your PDF. Without these tags, the paper will not be included in IEEE Xplore.

Log in to the IEEE PDF eXpress website: https://www.pdf-express.org/

  • First-time users should do the following:
    • Select the “New Users - Click Here” link.
    • Enter the following:
      • 51579X for the Conference ID
      • your email address and
      • a password
    • Continue to enter information as prompted
    • An online confirmation will be displayed, and an email confirmation will be sent verifying your account setup.
  • Previous users of PDF eXpress or IEEE PDF eXpress Plus need to follow the above steps, but should enter the same password that was used for previous conferences. Verify that your contact information is valid.

Note

When you log into PDF eXpress for the first time, the site can take a long time to load. It may also take a couple of minutes for the status to update and confirm that your PDF passed validation.

Once you are able to log in, you can upload your source files to create the PDF. Once generated, the PDF will be emailed to you. Use this version as-is, unchanged in any way, for the final upload.

If you do not validate your PDF using the above process, your paper may not be included in IEEE Xplore.

5. Submit your validated PDF to EasyChair

Upload the validated PDF file (previous step) using your EasyChair proceedings author role for IVCNZ 2020 via https://easychair.org/my/conference?conf=ivcnz2020. To upload the validated camera-ready paper, please click the magnifying-glass icon (under ‘view’) next to the title of your paper then click ‘Update file’ on the upper-right side of the page.

List of Accepted Papers and Presentation Format (long/short)

ID Authors Title Presentation
1 Tapabrata Chakraborti, Brendan McCane, Steven Mills and Umapada Pal PProCRC: Probabilistic Collaboration of Image Patches for Fine-grained Classification Short
6 Ignazio Gallo, Gianmarco Ria, Nicola Landro and Riccardo La Grassa Image and Text fusion for UPMC Food-101 using BERT and CNNs Long
7 Noor Fatima AI in Photography: Scrutinizing Implementation of Super-Resolution Techniques in Photo-Editors Short
10 Jia Lu, Minh Nguyen and Weiqi Yan Deep Learning Methods for Human Behavior Recognition Short
13 Luxin Zhang and Wei Qi Yan Deep Learning Methods for Virus Identification From Digital Images Short
14 Richard Clare, Byron Engler and Stephen Weddell Wavefront reconstruction with the cone sensor Long
15 Yash Dixit, Mahmoud Al-Sarayreh, Cameron Craigie and Marlon M. Reis A rapid method of hypercube stitching for snapshot multi-camera system Short
17 Mohammad Awrangjeb, Jardel Rodrigues, Bela Stantic and Vladimir Estivill-Castro A fair comparison of the EEG signal classification methods for alcoholic subject identification Short
18 Parham Taghinia Jelodari, Vishnu Anand Muruganandan, Richard Clare and Stephen Weddell A Wavefront Sensorless Tip/Tilt Removal method for Correcting Astronomical Images Long
22 Amir Ebrahimi, Suhuai Luo and Raymond Chiong Introducing Transfer Learning to 3D ResNet-18 for Alzheimer's Disease Detection on MRI Images Long
23 Sierra Hickman, Vishnu Anand Muruganandan, Stephen Weddell and Richard Clare Image Metrics for Deconvolution of Satellites in Low Earth Orbit Long
25 Ignazio Gallo, Gabriele Magistrali, Nicola Landro and Riccardo La Grassa Improving the Efficient Neural Architecture Search via Rewarding Modifications Short
26 Naeha Sharif, Mohammad Jalwana, Mohammed Bennamoun, Wei Liu and Syed Afaq Ali Shah Leveraging Linguistically-aware Object Relationships and NASNet for Image Captioning Long
33 Jiayu Liu, Vishnu Anand Muruganandan, Richard Clare, María Cruz Ramírez Trujillo and Stephen Weddell A Tip-Tilt Mirror Control System for Partial Image Correction at UC Mount John Observatory Long
34 Seng Cheong Loke, Bruce A. MacDonald, Matthew Parsons and Burkhard C. Wünsche Fast Portrait Segmentation of the Head and Upper Body Long
35 Yishi Guo and Burkhard Wuensche Comparison of Face Detection Algorithms on Mobile Devices Short
37 Juncheng Liu, Steven Mills and Brendan McCane Variational Autoencoder for 3D Voxel Compression Long
39 Zeqi Yu and Wei Qi Yan Human Action Recognition Using Deep Learning Methods Short
40 Huy Le, Minh Nguyen and Weiqi Yan Machine Learning with Synthetic Data -- a New Way to Learn and Classify the Pictorial Augmented Reality Markers in Real-Time Short
42 Debapriya Roy, Sanchayan Santra and Bhabatosh Chanda Incorporating Human Body Shape Guidance for Cloth Warping in Model to Person Virtual Try-on Problems Short
45 Xiaoxu Liu, Wei Qi Yan and Nikola Kasabov Vehicle-Related Scene Segmentation Using CapsNets Short
48 Yerren van Sint Annaland, Lech Szymanski and Steven Mills Predicting Cherry Quality Using Siamese Networks Long
49 Lech Szymanski and Michael Lee Deep Sheep: kinship assignment in livestock from facial images Long
50 Andrew Chalmers and Taehyun Rhee Shadow-based Light Detection for HDR Environment Maps Long
51 Munir Shah, Vanessa Cave and Marlon dos Reis Automatically localising ROIs in hyperspectral images using background subtraction techniques Short
52 Achref Ouni A machine learning approach for image retrieval tasks Short
53 Pranjal Chitale, Hrishikesh Shenai, Jay Gala, Kaustubh Kekre and Ruhina Karani Pothole Detection and Dimension Estimation System using Deep Learning (YOLO) and Image Processing Short
58 Junhong Zhao, Christopher James Parry, Rafael dos Anjos, Craig Anslow and Taehyun Rhee Voice Interaction for Augmented Reality Navigation Interfaces with Natural Language Understanding Short
59 Romain Arnal, David Wojtas and Rick Millane Progress towards imaging biological filaments using X-ray free-electron lasers Long
60 Sixiang Xu, Damien Muselet, Alain Trémeau and Robert Laganière Confidence-based Local Feature Selection for Material Classification Long
62 Nazabat Hussain, Mojde Hasanzade, Dag Werner Breiby and Muhammad Nadeem Akram Wavelet Based Thresholding for Fourier Ptychography Microscopy Short
65 Simon Finnie, Fang-Lue Zhang and Taehyun Rhee Visual Object Tracking in Spherical 360° Videos: A Bridging Approach Long
66 Arpita Dawda and Minh Nguyen Defects Detection in Highly Specular Surface using a Combination of Stereo and Laser Reconstruction Long
67 Finn Petrie and Steven Mills Real Time Ray Tracing of Analytic and Implicit Surfaces Long
68 Mrunalini Nalamati, Nabin Sharma, Saqib Muhammad and Michael Blumenstein Automated Monitoring in Maritime Video Surveillance System Long
69 Arun Gopal Govindaswamy, Enid Montague, Daniela Raicu and Jacob Furst Predicting physician gaze in clinical settings using optical flow and positioning Long
74 Rishil Shah Evolutionary Algorithm Based Residual Block Search for Compression Artifact Removal Long
75 Rongsen Chen, Fang-Lue Zhang and Taehyun Rhee Edge-Aware Convolution for RGB-D Image Segmentation Long
77 Amal Punchihewa and Donald Bailey A Review of Emerging Video Codecs: Challenges and Opportunities Short
79 Adam Tupper and Kourosh Neshatian Evaluating Learned State Representations for Atari Short
84 Tapabrata Chakraborty, Brendan McCane, Steven Mills and Umapada Pal CoCoNet: A Collaborative Convolutional Network applied to fine-grained bird species classification Long
89 Kapila Pahalawatta, Jaco Fourie, Amber Parker, Peter Carey and Armin Werner Detection and classification of opened and closed flowers in grape inflorescences using Mask R-CNN Short
92 Abhipray Paturkar, Gourab Sen Gupta and Donald Bailey Plant Trait Segmentation for Plant Growth Monitoring Short
93 Donald Bailey History and Evolution of Single Pass Connected Component Analysis Long
94 Ashutosh Soni and Partha Bhowmick Isotropic Remeshing by Dynamic Voronoi Tessellation on Voxelized Surface Long
96 Pubudu Sanjeewani and Brijesh Verma An Optimisation Technique for the Detection of Safety Attributes Using Roadside Video Data Short
97 Basim Azam, Ranju Mandal, Ligang Zhang and Brijesh Verma Class Probability-based Visual and Contextual Feature Integration for Image Parsing Long
98 Sowmya Kasturi, Steven Le Moan and Donald Bailey Heating Patterns Recognition in Industrial Microwave-Processed Foods Short
99 Casey Peat, Oliver Batchelor and Richard Green Multi-View Stereo Evaluation for Fine Object Reconstruction Long
106 Matthew Edwards, Michael Hayes and Richard Green Experimental Validation of Bias in Checkerboard Corner Detection Long
109 Robert Grove and Richard Green Melanoma and Nevi Classification using Convolution Neural Networks Long
110 Dana Lambert and Richard Green Automatic Identification of Diatom Morphology using Deep Learning Long
111 Gonglin Yuan, Bing Xue and Mengjie Zhang A Graph-Based Approach to Automatic Convolutional Neural Network Construction for Image Classification Long
112 Sarah Kennelly and Richard Green Classifying Bird Feeder Photos Long
113 Aisha Ajmal, Christopher Hollitt and Harith Al-Sahaf Salient Motion Features for Visual Attention Models Short
114 Rodrigo Terpan Arenas, Patrice Delmas and Alfonso Gastelum Strozzi Development of a Virtual Environment Based Image Generation Tool for Neural Network Training Long

Copyright © 2020 Victoria University of Wellington. All Rights Reserved.
