COMP102 2020 Tri 1

Assignment 8+9: 2D graphics and Image Processing

  • Due 18 Jun 10am

Overview

In this assignment, you will implement a program that provides a variety of tools for manipulating 2D images.

Goals

After completing this assignment you should understand how to implement geometric transformations on images, the recursive spread-fill algorithm, and convolution filters. You will also have a greater understanding of the power of convolution filters.

Preparation

Download the zip file and extract it to your home folder. It should contain the template for the Java program you are to complete. Read through the whole assignment to see what you need to do. You can view a Demo Video on the AssignmentVideos page.

To Submit

  • Your ImageProcessor.java program.
  • A FilterExplorations text file reporting on your explorations with different filters (if you do the challenge component).

  • There is no Reflection for this assignment.

The Assignment

Implement an ImageProcessor program that can load a jpeg or png image into a 2D array, can display the image on the screen, and provides a set of tools for manipulating the image. The program has two arrays: one, called image, holding the "current" version of the image, and a temporary array, called result, holding the result of the most recent manipulation. Each of the tools below will apply its transformation to the current image, putting the result in the temporary image. The program displays both the current image on the left and the resulting image on the right.

The program has a "commit" button which copies the new image in the temporary array into the current array, so that later transformations will be applied to the new image. The current and temporary images will then be identical until another transformation is applied.

The program also has a "save" button which will save the current image to a file.

Transformations:

  • Brightness adjustment:
    A slider allows the user to change the brightness of the image (darkening it to complete black or lightening it to complete white). Pixels cannot be lightened beyond white, nor darkened below black. Note that each time the slider is changed, the program should compute a new brightness transformation on the current image, to produce the temporary image.

  • Horizontal and vertical flips and 90-degree rotations (clockwise and anticlockwise):
    Flipping will keep the size of the image the same. Rotating will turn a rows x cols image into a cols x rows image.

  • Merge images:
    Allows the user to select a new image file and merges the new image with the current image (putting the result in the result array). The merging should work by replacing each pixel of the current image that overlaps a pixel of the new image with the weighted average of the two pixels: w x new-image-pixel + (1 - w) x current-image-pixel. The program provides a slider for the user to change the weight from 0% (only the current image) to 100% (only the new image). A sketch of this weighted average appears after this list.

  • Crop&Zoom image:
    Allows the user to select a rectangular region of the image (with the mouse), and expands that region to fill the whole image. If the rectangular region is not the same ratio as the image, then it will need to stretch the region horizontally or vertically to fill the image.

  • Blur:
    Applies a blur filter to the image. The blur filter should be a 3x3 convolution filter, as described in the lectures. The blur may ignore pixels on the edge of the image.

  • Rotate:
    Allows the user to rotate the image by an arbitrary angle. It should rotate the image within the current size. Pixels that are rotated to a position outside the image will be lost. There will also be pixels in the new image that are not given a value by the rotation; they should be set to white.
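
The weighted-average merge above could be sketched as follows (a sketch only, not the template's code: it assumes Color[][] fields image, toMerge, and result, with result already the same size as image).

    // Hypothetical sketch of the merge: for each overlapping pixel,
    // result = factor * new-image-pixel + (1 - factor) * current-image-pixel,
    // applied to the red, green, and blue components separately.
    public void merge(float factor) {
        if (toMerge == null) { return; }                       // nothing to merge
        for (int row = 0; row < image.length; row++) {         // start from the current image
            for (int col = 0; col < image[0].length; col++) {
                result[row][col] = image[row][col];
            }
        }
        int rows = Math.min(image.length, toMerge.length);     // overlapping region only
        int cols = Math.min(image[0].length, toMerge[0].length);
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                Color cur = image[row][col];
                Color other = toMerge[row][col];
                int r = Math.round(factor * other.getRed()   + (1 - factor) * cur.getRed());
                int g = Math.round(factor * other.getGreen() + (1 - factor) * cur.getGreen());
                int b = Math.round(factor * other.getBlue()  + (1 - factor) * cur.getBlue());
                result[row][col] = new Color(r, g, b);
            }
        }
    }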

Core

Complete the following methods:

  • brightness(float value) that makes the image brighter or darker. The value is between -1.0 (black image) and 1.0 (white image). A sketch of this method (and of horizontalFlip) follows this list.
  • horizontalFlip() that flips the image horizontally.
  • verticalFlip() that flips the image vertically.
  • rotate90clockwise() that rotates the image 90 degrees clockwise.
  • rotate90anticlockwise() that rotates the image 90 degrees anticlockwise.
  • merge(float factor) that merges the current image and the toMerge image, if there is one.
  • saveImage() that writes the current image to a file.
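
As a starting point, two of these methods might look roughly like the sketch below (assuming Color[][] fields image and result of the same size, and that brightness is adjusted on each RGB component directly; your template may differ).

    // Hypothetical sketch: shift each colour component by value*255 and clamp to 0..255,
    // so -1.0 gives an all-black image and 1.0 an all-white image.
    public void brightness(float value) {
        for (int row = 0; row < image.length; row++) {
            for (int col = 0; col < image[0].length; col++) {
                Color c = image[row][col];
                int r = clamp(Math.round(c.getRed()   + value * 255));
                int g = clamp(Math.round(c.getGreen() + value * 255));
                int b = clamp(Math.round(c.getBlue()  + value * 255));
                result[row][col] = new Color(r, g, b);
            }
        }
    }

    private int clamp(int v) { return Math.max(0, Math.min(255, v)); }

    // Hypothetical sketch: mirror each row, so column col maps to column (cols - 1 - col).
    public void horizontalFlip() {
        int rows = image.length, cols = image[0].length;
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                result[row][col] = image[row][cols - 1 - col];
            }
        }
    }

The 90-degree rotations use a similar index mapping, except that the result array is cols x rows; for the clockwise case, image[row][col] ends up at result[col][rows - 1 - row].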

Completion

Complete the following methods:

  • cropAndZoom() that scales the currently selected region of the image (if there is one) to fill the working image. Return true if a region was selected, false otherwise.
  • convolve(float[][] weights) that modifies each pixel to make it a weighted average of itself and the pixels around it. Remember to uncomment the call to the convolve method in the buttonBlur() method, and declare and initialise the blurWeights parameter as a 3x3 convolution filter. A sketch of one possible convolve appears after this list.
  • rotate(double angle) that rotates the image by the specified angle, around the center of the image, or around the center of the selected region if there is one. Note that it still rotates the whole image, not just the selected region.
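
One possible shape for convolve is sketched below (hypothetical: it assumes weights is a square array with odd size, that image and result are Color[][] fields of the same size, and that edge pixels, where the filter would go over the edge of the image, are simply copied across unchanged).

    // Hypothetical sketch of a convolution filter: each inner pixel of result is a
    // weighted sum of the pixels around the corresponding pixel of image.
    public void convolve(float[][] weights) {
        int half = weights.length / 2;                          // eg, 1 for a 3x3 filter
        for (int row = 0; row < image.length; row++) {
            for (int col = 0; col < image[0].length; col++) {
                if (row < half || col < half
                        || row >= image.length - half || col >= image[0].length - half) {
                    result[row][col] = image[row][col];         // ignore edge pixels
                    continue;
                }
                float r = 0, g = 0, b = 0;
                for (int dr = -half; dr <= half; dr++) {
                    for (int dc = -half; dc <= half; dc++) {
                        Color c = image[row + dr][col + dc];
                        float w = weights[dr + half][dc + half];
                        r += w * c.getRed();
                        g += w * c.getGreen();
                        b += w * c.getBlue();
                    }
                }
                result[row][col] = new Color(                   // clamp back to 0..255
                        Math.max(0, Math.min(255, Math.round(r))),
                        Math.max(0, Math.min(255, Math.round(g))),
                        Math.max(0, Math.min(255, Math.round(b))));
            }
        }
    }

With this shape, a simple blurWeights for the Blur button is just a 3x3 array whose nine entries are all 1/9f.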

Challenge

Extend your program so that it implements the following functionalities:

  • General Convolution Filter.
    Asks the user for a file containing the values of a convolution filter, and applies the filter to the image. The format of the file will be an integer specifying the size of the filter array, followed by all the values of the filter array in row order (ie, the values in the first row, followed by the values in the next row, ...). You should provide a set of at least four files with different kinds of filter: a large blur filter, a sharpen filter, a simple bokeh filter (uniform circular filter which gives an effect like an out-of-focus lens), and an edge-detection filter. A sketch of reading such a filter file appears after this list.
    Experiment with a range of different filters for blurring, sharpening, and edge detection. Find filter arrays from the web, and/or make some up yourself, and explore the effects. Report on a set of filters you tried, the effects you got, and the limitations of the filters. Submit a FilterExplorations text file reporting on your explorations with different filters (if you do the challenge component).

  • Pour.
    Allows the user to select a colour and then click on a pixel of the image. The program will then "pour" the new colour onto that pixel, and spread the colour to all the connected pixels of approximately the same colour as the chosen pixel, ie, where the difference between the two colours is less than some threshold. This will require the spread-fill algorithm. The program should allow the user to specify the threshold. You will have to work out a reasonable way of measuring the difference between two colours.

  • Red-eye detection and removal.
    Search the image for small red blobs surrounded by darker regions, and change the red pixels to black (use the Pour method).
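
Reading a filter file in the format described above could look roughly like this (a sketch only: loadFilter is a hypothetical helper name, and it assumes the values in the file are separated by whitespace).

    // Hypothetical sketch: read an integer size, then size*size float values in row order.
    // Needs: import java.io.*; import java.util.Scanner;
    public float[][] loadFilter(File file) throws IOException {
        try (Scanner sc = new Scanner(file)) {
            int size = sc.nextInt();
            float[][] weights = new float[size][size];
            for (int row = 0; row < size; row++) {
                for (int col = 0; col < size; col++) {
                    weights[row][col] = sc.nextFloat();
                }
            }
            return weights;
        }
    }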

Hints for implementation:

  • When changing brightness, the program should always apply the change to the current image, putting the result in the working array. Do not apply the change to the contents of the working array. You can either increase the brightness of each colour component separately, or you can convert the RGB components to HSV components, change the V component, and convert back to RGB.

  • For Crop&Zoom and Rotate, step through each pixel in the working array, working out which pixel of the selected region (in the current image) should be transformed to the pixel in the working array, and copying its colour to the working array. This is easier than doing it in the reverse direction (see the lecture notes for lecture 17/18).

  • For Pour (also known as flood fill), if you use recursion, do not allow the threshold to get too large: pouring into a large region is likely to cause Java to crash by running out of "stack space", the memory for keeping track of all the unfinished method calls. You could try a non-recursive algorithm such as the "todo" list approach discussed in lectures (a sketch appears at the end of these hints).

  • For the convolution filter, simply ignore edge pixels which would make some of the filter array go over the edge of the image. This is not ideal, but good enough for large images. Ensure that the pixel values at the end of a transformation are between 0 and 255 (or 0.0 and 1.0) before you render the image on the screen. (It is OK, and may be necessary, for the pixel values to go outside these bounds during the computation of the transformation.)
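
The "todo" list approach to Pour could be sketched roughly as follows (hypothetical names throughout: it assumes Color[][] fields image and result of the same size, with result starting as a copy of image, and it uses the sum of absolute RGB differences as one possible colour-difference measure).

    // Hypothetical sketch of a non-recursive flood fill using a "todo" queue of points.
    // Needs: import java.awt.Color; import java.util.ArrayDeque; import java.util.Queue;
    public void pour(int startRow, int startCol, Color newColour, double threshold) {
        Color target = image[startRow][startCol];
        boolean[][] visited = new boolean[image.length][image[0].length];
        Queue<int[]> todo = new ArrayDeque<>();
        todo.offer(new int[]{startRow, startCol});
        visited[startRow][startCol] = true;
        while (!todo.isEmpty()) {
            int[] p = todo.poll();
            int row = p[0], col = p[1];
            if (colourDistance(image[row][col], target) > threshold) { continue; }
            result[row][col] = newColour;                      // recolour this pixel
            int[][] neighbours = {{row-1,col}, {row+1,col}, {row,col-1}, {row,col+1}};
            for (int[] n : neighbours) {                       // add unvisited neighbours to the todo list
                int r = n[0], c = n[1];
                if (r >= 0 && r < image.length && c >= 0 && c < image[0].length
                        && !visited[r][c]) {
                    visited[r][c] = true;
                    todo.offer(new int[]{r, c});
                }
            }
        }
    }

    // One reasonable colour-difference measure: the sum of the absolute RGB differences.
    private double colourDistance(Color a, Color b) {
        return Math.abs(a.getRed() - b.getRed())
             + Math.abs(a.getGreen() - b.getGreen())
             + Math.abs(a.getBlue() - b.getBlue());
    }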