48 results for Cree, Michael J., Conference item

  • Colour image processing and texture analysis on images of porterhouse steak meat

    Streeter, Lee V.; Burling-Claridge, G. Robert; Cree, Michael J. (2005)

    Conference item
    University of Waikato

    This paper outlines two colour image processing and texture analysis techniques applied to meat images and assessment of error due to the use of JPEG compression at image capture. JPEG error analysis was performed by capturing TIFF and JPEG images, then calculating the RMS difference and applying a calibration between block boundary features and subjective visual JPEG scores. Both scores indicated high JPEG quality. Correction of JPEG blocking error was trialled and found to produce minimal improvement in the RMS difference. The texture analysis methods used were singular value decomposition over pixel blocks and complex cell analysis. The block singular values were classified as meat or non-meat by Fisher linear discriminant analysis with the colour image processing result used as ‘truth.’ Using receiver operator characteristic (ROC) analysis, an area under the ROC curve of 0.996 was obtained, demonstrating good correspondence between the colour image processing and the singular values. The complex cell analysis indicated a ‘texture angle’ expected from human inspection.
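The classification pipeline in this abstract (singular values of pixel blocks, a Fisher linear discriminant, then ROC analysis) can be sketched in a few lines. This is a generic illustration on synthetic data, not the paper's implementation; the block size and all names are assumptions:

```python
import numpy as np

def block_singular_values(image, block=8):
    """Singular values of each non-overlapping block, used as texture features."""
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            feats.append(np.linalg.svd(image[r:r+block, c:c+block],
                                       compute_uv=False))
    return np.array(feats)

def fisher_direction(X0, X1):
    """Fisher linear discriminant direction: w = Sw^-1 (mu1 - mu0)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)

def roc_auc(scores0, scores1):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    s = np.concatenate([scores0, scores1])
    ranks = s.argsort().argsort() + 1.0
    n0, n1 = len(scores0), len(scores1)
    return (ranks[n0:].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

Projecting each block's singular-value vector onto the Fisher direction gives a scalar score, and the AUC of those scores against the colour-processing 'truth' measures the correspondence the abstract reports.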

  • Combination of Mean Shift of Colour Signature and Optical Flow for Tracking During Foreground and Background Occlusion

    Hedayati, M.; Cree, Michael J.; Scott, Jonathan B. (2015)

    Conference item
    University of Waikato

    This paper proposes a multiple hypothesis tracker for multiple object tracking with a moving camera. The proposed model exploits the stability of sparse optical flow along with the invariance of colour to size and pose variation, by merging the colour properties of objects into optical flow tracking. To evaluate the algorithm, five videos were selected from broadcast horse races, each presenting different challenges found in the object tracking literature. A comparison of the proposed method with colour-based mean shift tracking shows a significant improvement in the accuracy and stability of object tracking.

  • Vectorised SIMD Implementations of Morphology Algorithms

    Cree, Michael J. (2015)

    Conference item
    University of Waikato

    We explore vectorised implementations, exploiting single instruction multiple data (SIMD) CPU instructions on commonly used architectures, of three efficient algorithms for morphological dilation and erosion. We discuss issues specific to SIMD implementation and describe how they guide algorithm choice. We compare our implementations to a commonly used open-source SIMD-accelerated machine vision library and find that orders-of-magnitude speed-ups can be achieved for erosions using two-dimensional structuring elements.
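The kind of efficient erosion algorithm the abstract refers to can be illustrated by the classic van Herk/Gil-Werman running-minimum scheme, shown here as a scalar sketch. Which three algorithms the paper actually implements is not stated in the abstract; in SIMD versions the same recurrences are applied to many image rows at once using packed-min instructions:

```python
def erode_1d(row, k):
    """van Herk/Gil-Werman erosion with a centred window of odd width k:
    two passes of running minima (g forward, h backward, over k-aligned
    segments), then one min per output pixel, independent of k."""
    INF = float("inf")
    r = k // 2
    n = len(row)
    m = n + 2 * r
    m += (-m) % k                     # pad length up to a multiple of k
    x = [INF] * m                     # INF padding behaves like border clamping
    x[r:r + n] = row
    g = [INF] * m
    h = [INF] * m
    for i in range(m):
        g[i] = x[i] if i % k == 0 else min(g[i - 1], x[i])
    for i in range(m - 1, -1, -1):
        h[i] = x[i] if i % k == k - 1 else min(h[i + 1], x[i])
    # the window [i-r, i+r] around original index i is [i, i+2r] in padded x
    return [min(h[i], g[i + 2 * r]) for i in range(n)]
```

For example, `erode_1d([5, 2, 7, 1, 8, 3], 3)` returns `[2, 2, 1, 1, 1, 3]`, the running minimum over each centred width-3 window.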

  • Estimating heading direction from monocular video sequences using biologically-based sensor

    Cree, Michael J.; Perrone, John A.; Anthonys, Gehan; Garnett, Aden C.; Gouk, Henry (2016)

    Conference item
    University of Waikato

    The determination of one’s movement through the environment (visual odometry or self-motion estimation) from monocular sources such as video is an important research problem because of its relevance to robotics and autonomous vehicles. The traditional computer vision approach to this problem tracks visual features across frames in order to obtain 2-D image motion estimates from which the camera motion can be derived. We present an alternative scheme which uses the properties of motion sensitive cells in the primate brain to derive the image motion and the camera heading vector. We tested heading estimation using a camera mounted on a linear translation table with the line of sight of the camera set at a range of angles relative to straight ahead (0° to 50° in 10° steps). The camera velocity was also varied (0.2, 0.4, 0.8, 1.2, 1.6 and 2.0 m/s). Our biologically-based method produced accurate heading estimates over a wide range of test angles and camera speeds. Our approach has the advantage of being a one-shot estimator and not requiring iterative search techniques for finding the heading.
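The paper's estimator is biologically based; purely as a point of comparison, here is the textbook least-squares focus-of-expansion estimate for a purely translating camera, which also recovers the heading direction from sparse flow vectors in one shot (an illustrative sketch, not the paper's method):

```python
def focus_of_expansion(points, flows):
    """Least-squares focus of expansion for a purely translating camera.
    Each flow vector (u, v) at image point (x, y) should point along
    (x - ex, y - ey), giving one linear constraint per vector:
        v*ex - u*ey = v*x - u*y.
    The heading direction follows from the FoE and the focal length."""
    # accumulate the 2x2 normal equations for rows (v, -u) and targets r
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), (u, v) in zip(points, flows):
        r = v * x - u * y
        a11 += v * v; a12 += -v * u; a22 += u * u
        b1 += v * r;  b2 += -u * r
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Because depth only scales the length of each translational flow vector, not its direction, the constraint is depth-independent and the estimate needs no iterative search.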

  • Design of a Pseudo-Holographic Distributed Time-of-Flight Sonar Range-Imaging System

    Streeter, Lee; Scott, Jonathan B.; Lickfold, Carl A.; Cree, Michael J. (2016)

    Conference item
    University of Waikato

    The design of an audible sonar distributed-sensor time-of-flight range imaging system is investigated, sonar being chosen as a substitute for optical range imaging due to its cost and simplicity of implementation. The proposed distributed range imaging system is based on the holographic principle, whereby the sensors detect the self-interference of the sound reflected from the scene, and Fourier analysis computes the reflected object profile. An approximate linearised model used in related holographic imaging techniques is found to be inappropriate for the design, and qualitative assessment of simulations shows that removing the linearisation dramatically improves image reconstruction. Quantitatively, the nonlinear reconstruction improves the RMSE by a factor of 1.3–2.1. The full nonlinear reconstruction is slow, but mathematical development led to a 15-fold reduction in computation time.

  • Extracting the MESA SR4000 Calibrations

    Charleston, Sean A.; Dorrington, Adrian A.; Streeter, Lee; Cree, Michael J. (2015)

    Conference item
    University of Waikato

    Time-of-flight range imaging cameras are capable of acquiring depth images of a scene. Some algorithms require these cameras to be run in ‘raw mode’, where any calibrations from the off-the-shelf manufacturer are lost. The calibration of the MESA SR4000 is herein investigated, with an attempt to reconstruct the full calibration. Possession of the factory calibration enables calibrated data to be acquired and manipulated even in ‘raw mode’. This work is motivated by the problem of motion correction, in which the calibration must be separated into component parts to be applied at different stages in the algorithm. There are also other applications that require multiple frequencies, such as multipath interference correction; the other frequencies can be calibrated in a similar way, using the factory calibration as a base. A novel technique for capturing the calibration data is described: a retro-reflector is used on a moving platform, which acts as a point source at a distance, resulting in planar waves on the sensor. A number of calibrations are retrieved from the camera, and are then modelled and compared to the factory calibration. When comparing the factory calibration to both the ‘raw mode’ data and the calibration described herein, a root mean squared error improvement of 51.3 mm was seen, with a standard deviation improvement of 34.9 mm.

  • Separating true range measurements from multi-path and scattering interference in commercial range cameras

    Dorrington, Adrian A.; Godbaz, John Peter; Cree, Michael J.; Payne, Andrew D.; Streeter, Lee V. (2011)

    Conference item
    University of Waikato

    Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.

  • Illumination waveform optimization for time-of-flight range imaging cameras

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J. (2011)

    Conference item
    University of Waikato

    Time-of-flight range imaging sensors acquire an image of a scene, where in addition to standard intensity information, the range (or distance) is also measured concurrently by each pixel. Range is measured using a correlation technique, where an amplitude modulated light source illuminates the scene and the reflected light is sampled by a gain modulated image sensor. Typically the illumination source and image sensor are amplitude modulated with square waves, leading to a range measurement linearity error caused by aliased harmonic components within the correlation waveform. A simple method to improve measurement linearity by reducing the duty cycle of the illumination waveform to suppress problematic aliased harmonic components is demonstrated. If the total optical power is kept constant, the measured correlation waveform amplitude also increases at these reduced illumination duty cycles. Measurement performance is evaluated over a range of illumination duty cycles, both for a standard range imaging camera configuration, and also using a more complicated phase encoding method that is designed to cancel aliased harmonics during the sampling process. The standard configuration benefits from improved measurement linearity for illumination duty cycles around 30%, while the measured amplitude, hence range precision, is increased for both methods as the duty cycle is reduced below 50% (while maintaining constant optical power).
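The duty-cycle argument can be made concrete with the Fourier series of a rectangular waveform: the n-th harmonic magnitude is proportional to |sin(nπd)|/n for duty cycle d, so particular harmonics vanish at particular duty cycles (e.g. the third at d = 1/3), while at constant optical power the fundamental grows as the duty cycle shrinks. A small sketch of the arithmetic, illustrative rather than taken from the paper:

```python
import math

def harmonic_amplitude(n, duty):
    """|n-th Fourier coefficient| of a rectangular waveform with the given
    duty cycle and unit pulse height: 2|sin(n*pi*d)| / (n*pi)."""
    return 2.0 * abs(math.sin(n * math.pi * duty)) / (n * math.pi)

def fundamental_constant_power(duty):
    """Fundamental amplitude when total optical power is held constant,
    i.e. pulse height scales as 1/duty -- it grows as the duty cycle shrinks."""
    return harmonic_amplitude(1, duty) / duty
```

So reducing the duty cycle to around 30% weakens the harmonics that alias into the range measurement while simultaneously raising the correlation-waveform amplitude, consistent with the linearity and precision gains described above.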

  • A power-saving modulation technique for time-of-flight range imaging sensors

    Conroy, Richard M.; Dorrington, Adrian A.; Payne, Andrew D.; Künnemeyer, Rainer; Cree, Michael J. (2011)

    Conference item
    University of Waikato

    Time-of-flight range imaging cameras measure distance and intensity simultaneously for every pixel in an image. With the continued advancement of the technology, a wide variety of new depth sensing applications are emerging; however, a number of these potential applications have stringent electrical power constraints that are difficult to meet with the current state-of-the-art systems. Sensor gain modulation contributes a significant proportion of the total image sensor power consumption, and as higher spatial resolution range image sensors operating at higher modulation frequencies (to achieve better measurement precision) are developed, this proportion is likely to increase. The authors have developed a new sensor modulation technique using resonant circuit concepts that is more power efficient than the standard mode of operation. With a proof-of-principle system, a 93–96% reduction in modulation drive power was demonstrated across a range of modulation frequencies from 1–11 MHz. Finally, an evaluation of the range imaging performance revealed an improvement in measurement linearity in the resonant configuration due primarily to the more sinusoidal shape of the resonant electrical waveforms, while the average precision values were comparable between the standard and resonant operating modes.

  • A fast Maximum Likelihood method for improving AMCW lidar precision using waveform shape

    Godbaz, John Peter; Cree, Michael J.; Dorrington, Adrian A.; Payne, Andrew D. (2009)

    Conference item
    University of Waikato

    Amplitude Modulated Continuous Wave imaging lidar systems use the time-of-flight principle to determine the range to objects in a scene. Typical systems use modulated illumination of a scene and a modulated sensor or image intensifier. By changing the relative phase of the two modulation signals it is possible to measure the phase shift induced in the illumination signal, and thus the range to the scene. In practical systems, the resultant correlation waveform contains harmonics that typically result in a non-linear range response. Nevertheless, these harmonics can be used to improve range precision. We model a waveform continuously variable in phase and intensity as a linear interpolation. By framing this as a Maximum Likelihood problem, an analytic solution is derived that enables an entire range image to be processed in a few seconds. A substantial improvement in overall RMS error and precision over the standard Fourier phase analysis approach results.
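The standard Fourier phase analysis that the paper takes as its baseline can be sketched as follows: the phase of the first DFT bin of the N correlation samples gives the modulation phase delay, and hence the range. This is a generic sketch under an assumed cosine-plus-offset sample model, not the paper's code:

```python
import cmath
import math

C = 299792458.0  # speed of light, m/s

def range_from_samples(samples, mod_freq):
    """Baseline N-sample Fourier phase decode for AMCW lidar.
    For samples s_k = B + A*cos(2*pi*k/N - phi), the first DFT bin equals
    (N*A/2) * exp(-i*phi), so its phase recovers the round-trip modulation
    phase phi, and range = phi * c / (4*pi*f)."""
    n = len(samples)
    bin1 = sum(s * cmath.exp(-2j * math.pi * k / n)
               for k, s in enumerate(samples))
    phase = (-cmath.phase(bin1)) % (2 * math.pi)
    return phase * C / (4 * math.pi * mod_freq)
```

The harmonics mentioned in the abstract fold into this single-bin estimate and bias it, which is what motivates using the full waveform shape instead.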

  • Undue influence: Mitigating range-intensity coupling in AMCW ‘flash’ lidar using scene texture

    Godbaz, John Peter; Cree, Michael J.; Dorrington, Adrian A. (2009)

    Conference item
    University of Waikato

    We present a new algorithm for mitigating range-intensity coupling caused by scattered light in full-field amplitude modulated continuous wave lidar systems, using scene texture. Full-field lidar uses the time-of-flight principle to measure the range to thousands of points in a scene simultaneously. Mixed pixels are erroneous range measurements caused by pixels integrating light from more than one object at a time. Conventional optics suffer from internal reflections and light scattering, which can result in every pixel being mixed with scattered light. This causes erroneous range measurements and range-intensity coupling. By measuring how range changes with intensity over local regions it is possible to determine the phase and intensity of the scattered light without the complex calibration inherent in deconvolution-based restoration. The new method is shown to produce a substantial improvement in range image quality. An additional range-from-texture method, resistant to scattered light, is also demonstrated. Variations of the algorithms are tested with and without segmentation: the variant without segmentation is faster, but causes erroneous ranges around the edges of objects that are not present in the segmented algorithm.

  • Scene structure analysis for sprint sports

    Hedayati, M.; Cree, Michael J.; Scott, Jonathan B. (2016)

    Conference item
    University of Waikato

    This work proposes a robust model for analysing the structure of horse races based on 2D velocity vector information. The model is capable of detecting scene breaks, classifying the view of the contenders and extracting the trajectory of the contenders throughout the race. The performance of the system is tested on six video clips from two different broadcast sources. The analysis shows the model achieves high view-classification accuracy, with a lowest value of 83%, all in real time.

  • A clustering based denoising technique for range images of time of flight cameras

    Schoner, H.; Moser, B.; Dorrington, Adrian A.; Payne, Andrew D.; Cree, Michael J.; Heise, B.; Bauer, F. (2008)

    Conference item
    University of Waikato

    A relatively new technique for measuring the 3D structure of visual scenes is provided by time of flight (TOF) cameras. Reflections of modulated light waves are recorded by a parallel pixel array structure. The time series at each pixel of the resulting image stream is used to estimate travelling time and thus range information. This measuring technique results in pixel-dependent noise levels, with variances changing over several orders of magnitude depending on the illumination and material parameters. This makes application of traditional (global) denoising techniques suboptimal. Using additional information freely available from the camera and a clustering procedure, we can determine which pixels belong to the same object and what their noise level is, which allows for locally adapted smoothing. To illustrate the success of this method, we compare it with raw camera output and a traditional method for edge-preserving smoothing, anisotropic diffusion [10, 12]. We show that this mathematical technique works without individual adaptation on two camera systems with highly different noise characteristics.

  • Resolving depth measurement ambiguity with commercially available range imaging cameras

    McClure, Shane H.; Cree, Michael J.; Dorrington, Adrian A.; Payne, Andrew D. (2010)

    Conference item
    University of Waikato

    Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum useable range of the camera. This paper proposes a procedure to resolve depth ambiguity using software post-processing. First, the range data is processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. The proposed method reduces the sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches. This approach is very flexible as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping artifacts caused by moving objects to a minimum. This makes it suitable for applications such as robot vision where the camera may be moving during captures. The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by a distance of exactly one non-ambiguous range. Overall the reliability of this method is higher than the basic threshold approach, but not as high as the multiple frequency method of resolving ambiguity.
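The ambiguity interval itself is just half the modulation wavelength, and the intensity-based disambiguation can be caricatured with an inverse-square intensity model. This is a toy per-measurement version; the paper works on segmented objects' average intensity, and `k_source` is a made-up source constant, not a parameter from the paper:

```python
C = 299792458.0  # speed of light, m/s

def ambiguity_interval(mod_freq):
    """Non-ambiguous range of an AMCW camera: half the modulation wavelength."""
    return C / (2.0 * mod_freq)

def unwrap_range(measured_range, intensity, mod_freq, k_source):
    """Toy disambiguation: choose the wrap count whose predicted 1/d^2
    intensity (intensity ~ k_source / d^2) best matches the measurement."""
    d_amb = ambiguity_interval(mod_freq)
    best = min(range(4),   # consider the first few ambiguity intervals
               key=lambda m: abs(intensity
                                 - k_source / (measured_range + m * d_amb) ** 2))
    return measured_range + best * d_amb
```

At 30 MHz the interval is roughly 5 m, so an object at 7 m wraps to about 2 m; its unexpectedly low return intensity is what reveals the extra wrap.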

  • Multiple range imaging camera operation with minimal performance impact

    Whyte, Refael Z.; Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J. (2010)

    Conference item
    University of Waikato

    Time-of-flight range imaging cameras operate by illuminating a scene with amplitude modulated light and measuring the phase shift of the modulation envelope between the emitted and reflected light. Object distance can then be calculated from this phase measurement. This approach does not work in multiple camera environments as the measured phase is corrupted by the illumination from other cameras. To minimize inaccuracies in multiple camera environments, replacing the traditional cyclic modulation with pseudo-noise amplitude modulation has been previously demonstrated. However, this technique effectively reduced the modulation frequency, therefore decreasing the distance measurement precision (which has a proportional relationship with the modulation frequency). A new modulation scheme, using maximum length pseudo-random sequences binary phase encoded onto the existing cyclic amplitude modulation, is presented. The effective modulation frequency therefore remains unchanged, providing range measurements with high precision. The effectiveness of the new modulation scheme was verified using a custom time-of-flight camera based on the PMD19-K2 range imaging sensor. The new pseudo-noise modulation has no significant performance decrease in a single camera environment. In a two camera environment, the precision is only reduced by the increased photon shot noise from the second illumination source.
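A maximum length sequence and its binary phase encoding onto a cyclic carrier can be sketched as follows. This is a generic illustration: the LFSR taps and the chip/carrier parameters are assumptions, not the paper's values. The key point is visible in the code: a chip only flips the carrier phase by pi, so the carrier frequency (and hence range precision) is unchanged:

```python
import math

def mls(taps, n):
    """Output bits of a Fibonacci LFSR with the given feedback taps,
    e.g. taps=(7, 6) for the primitive polynomial x^7 + x^6 + 1
    (period 2^7 - 1 = 127)."""
    state = [1] * max(taps)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def bpsk_modulated_carrier(chips, samples_per_chip, carrier_cycles_per_chip):
    """Binary phase encode the chip sequence onto a sinusoidal carrier:
    a 0-chip leaves the carrier alone, a 1-chip flips its phase by pi."""
    sig = []
    for i, c in enumerate(chips):
        for s in range(samples_per_chip):
            t = (i * samples_per_chip + s) / samples_per_chip  # time in chips
            phase = (2 * math.pi * carrier_cycles_per_chip * t
                     + (math.pi if c else 0.0))
            sig.append(math.cos(phase))
    return sig
```

Cameras using different (near-orthogonal) sequences then decorrelate each other's illumination while each keeps its full-frequency carrier.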

  • Surface projection for mixed pixel restoration

    Larkins, Robert L.; Cree, Michael J.; Dorrington, Adrian A.; Godbaz, John Peter (2009)

    Conference item
    University of Waikato

    Amplitude modulated full-field range-imagers are measurement devices that determine the range to an object simultaneously for each pixel in the scene, but due to the nature of this operation they commonly suffer from the significant problem of mixed pixels. Once mixed pixels are identified, a common procedure is to remove them from the scene; this solution is not ideal as the captured point cloud may become damaged. This paper introduces an alternative approach in which mixed pixels are projected onto the surface to which they should belong. This is achieved by dividing the area around an identified mixed pixel into two classes. A parametric surface is then fitted to the class closest to the mixed pixel, and the mixed pixel is projected onto this surface. The restoration procedure was tested using twelve simulated scenes designed to determine its accuracy and robustness. For these simulated scenes, 93% of the mixed pixels were restored to the surface to which they belong. This mixed pixel restoration process is shown to be accurate and robust for both simulated and real world scenes, thus providing a reliable alternative to removing mixed pixels that can be easily adapted to any mixed pixel detection algorithm.
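The projection step can be illustrated with the simplest parametric surface, a least-squares plane fitted to the neighbouring class. The abstract does not say which parametric surface the paper fits, so this planar version is an assumption made for illustration:

```python
import numpy as np

def project_onto_fitted_plane(neighbours, point):
    """Fit a plane to the neighbouring 3-D points (total least squares via
    SVD of the centred coordinates) and orthogonally project the mixed
    pixel's point onto it."""
    P = np.asarray(neighbours, float)
    centroid = P.mean(axis=0)
    # the plane normal is the right singular vector of the smallest
    # singular value of the centred point matrix
    _, _, vt = np.linalg.svd(P - centroid)
    normal = vt[-1]
    p = np.asarray(point, float)
    # remove the out-of-plane component of (p - centroid)
    return p - np.dot(p - centroid, normal) * normal
```

A mixed pixel floating between two surfaces is thereby pulled back onto the nearer surface rather than deleted, preserving the point cloud.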

  • Analysis of binning of normals for spherical harmonic cross-correlation

    Larkins, Robert L.; Cree, Michael J.; Dorrington, Adrian A. (2012)

    Conference item
    University of Waikato

    Spherical harmonic cross-correlation is a robust registration technique that uses the normals of two overlapping point clouds to bring them into coarse rotational alignment. This registration technique however has a high computational cost, as spherical harmonics need to be calculated for every normal. By binning the normals, the computational efficiency is improved as the spherical harmonics can be pre-computed and cached at each bin location. In this paper we evaluate the efficiency and accuracy of the equiangle grid, icosahedron subdivision and the Fibonacci spiral, an approach we propose. It is found that the equiangle grid has the best efficiency as it can perform direct binning, followed by the Fibonacci spiral and then the icosahedron, all of which decrease the computational cost compared to no binning. The Fibonacci spiral produces the highest achieved accuracy of the three approaches while maintaining a low number of bins. The number of bins allowed by the equiangle grid and icosahedron is much more restrictive than for the Fibonacci spiral. The performed analysis shows that the Fibonacci spiral can perform as well as the original cross-correlation algorithm without binning, while also providing a significant improvement in computational efficiency.
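The Fibonacci spiral referred to above places n near-uniform bin centres on the unit sphere. A minimal sketch of spiral generation and nearest-bin assignment; the linear scan is for clarity only, where direct binning or a spatial index would be used for speed:

```python
import math

def fibonacci_sphere(n):
    """n near-uniform directions on the unit sphere along a Fibonacci
    spiral: latitudes at equal-area offsets, longitudes stepped by the
    golden angle."""
    ga = math.pi * (3.0 - math.sqrt(5.0))  # golden angle, ~2.3999 rad
    pts = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        pts.append((r * math.cos(ga * i), r * math.sin(ga * i), z))
    return pts

def bin_normal(normal, bins):
    """Assign a unit normal to the bin centre with the largest dot product
    (i.e. the nearest direction on the sphere)."""
    return max(range(len(bins)),
               key=lambda i: sum(a * b for a, b in zip(normal, bins[i])))
```

Spherical harmonics are then evaluated once per bin centre and cached, rather than once per normal, which is the source of the efficiency gain.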

  • Video-rate or high-precision: A flexible range imaging camera

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John Peter; Jongenelen, Adrian P.P. (2008)

    Conference item
    University of Waikato

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512 pixels) and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system’s frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.

  • Calibration and control of a robot arm using a range imaging camera

    Kelly, Cameron B.D.; Dorrington, Adrian A.; Cree, Michael J.; Payne, Andrew D. (2010)

    Conference item
    University of Waikato

    Time of flight range imaging is an emerging technology that has numerous applications in machine vision. In this paper we cover the use of a commercial time of flight range imaging camera for calibrating a robotic arm. We do this by identifying retro-reflective targets attached to the arm, and centroiding on calibrated spatial data, which allows precise measurement of three dimensional target locations. The robotic arm is an inexpensive model that does not have positional feedback, so a series of movements is performed to calibrate the servo signals to the physical position of the arm. The calibration showed a good linear response between the control signal and servo angles. The calibration procedure also provided a transformation between the camera and arm coordinate systems. Inverse kinematic control was then used to position the arm. The range camera could also be used to identify objects in the scene. With the object location known in the arm's coordinate system (transformed from the camera's coordinate system), the arm could then be moved to grasp the object.
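For a planar two-link arm, the inverse kinematic control mentioned above has a textbook closed-form solution, sketched below. This is purely illustrative; the paper's arm geometry and servo calibration are not described in this level of detail in the abstract:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm
    (elbow-down branch): returns the shoulder and elbow joint angles
    that place the end effector at (x, y)."""
    d2 = x * x + y * y
    # law of cosines gives the elbow angle
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    # shoulder angle: bearing to target minus the in-triangle correction
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

With target coordinates transformed from the camera frame into the arm frame by the calibration, angles like these are then mapped to servo signals through the fitted linear response.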

  • Blind deconvolution of depth-of-field limited full-field lidar data by determination of focal parameters

    Godbaz, John Peter; Cree, Michael J.; Dorrington, Adrian A. (2010)

    Conference item
    University of Waikato

    We present a new two-stage method for parametric spatially variant blind deconvolution of full-field Amplitude Modulated Continuous Wave lidar image pairs taken at different aperture settings subject to limited depth of field. A Maximum Likelihood based focal parameter determination algorithm uses range information to reblur the image taken with a smaller aperture size to match the large aperture image. This allows estimation of focal parameters without prior calibration of the optical setup and produces blur estimates which have better spatial resolution and less noise than previous depth from defocus (DFD) blur measurement algorithms. We compare blur estimates from the focal parameter determination method to those from Pentland's DFD method, Subbarao's S-Transform method and estimates from range data/the sampled point spread function. In a second stage the estimated focal parameters are applied to deconvolution of total integrated intensity lidar images, improving depth of field. We give an example of application to complex domain lidar images and discuss the trade-off between recovered amplitude texture and sharp range estimates.
