49 results for Cree, Michael J., Conference item

  • Colour image processing and texture analysis on images of porterhouse steak meat

    Streeter, Lee V.; Burling-Claridge, G. Robert; Cree, Michael J. (2005)

    Conference item
    University of Waikato

    This paper outlines two colour image processing and texture analysis techniques applied to meat images and assessment of error due to the use of JPEG compression at image capture. JPEG error analysis was performed by capturing TIFF and JPEG images, then calculating the RMS difference and applying a calibration between block boundary features and subjective visual JPEG scores. Both scores indicated high JPEG quality. Correction of JPEG blocking error was trialled and found to produce minimal improvement in the RMS difference. The texture analysis methods used were singular value decomposition over pixel blocks and complex cell analysis. The block singular values were classified as meat or non-meat by Fisher linear discriminant analysis with the colour image processing result used as ‘truth.’ Using receiver operating characteristic (ROC) analysis, an area under the ROC curve of 0.996 was obtained, demonstrating good correspondence between the colour image processing and the singular values. The complex cell analysis indicated a ‘texture angle’ expected from human inspection.
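The block-singular-value features, Fisher discriminant and ROC area described above can be sketched as follows. The block size, synthetic textures and random seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_singular_values(img, b=8):
    """Singular values of each b-by-b pixel block (one feature vector per block)."""
    h, w = img.shape
    feats = []
    for i in range(0, h - h % b, b):
        for j in range(0, w - w % b, b):
            feats.append(np.linalg.svd(img[i:i+b, j:j+b], compute_uv=False))
    return np.array(feats)

def fisher_direction(X0, X1):
    """Fisher linear discriminant direction for two classes of feature vectors."""
    Sw = np.cov(X0.T) + np.cov(X1.T)          # within-class scatter
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

def roc_auc(scores0, scores1):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    ranks = np.concatenate([scores0, scores1]).argsort().argsort() + 1
    n0, n1 = len(scores0), len(scores1)
    r1 = ranks[n0:].sum()
    return (r1 - n1 * (n1 + 1) / 2) / (n0 * n1)

# synthetic stand-ins for "meat" and "non-meat" textures (hypothetical)
meat = rng.normal(0.6, 0.2, (64, 64))
background = rng.normal(0.2, 0.05, (64, 64))
X1, X0 = block_singular_values(meat), block_singular_values(background)
w = fisher_direction(X0, X1)
print(roc_auc(X0 @ w, X1 @ w))
```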

    View record details
  • Scene structure analysis for sprint sports

    Hedayati, M.; Cree, Michael J.; Scott, Jonathan B. (2016)

    Conference item
    University of Waikato

    This work proposes a robust model to analyse the structure of horse races based on 2D velocity vector information. This model is capable of detecting scene breaks, classifying the view of the contenders and extracting the trajectory of the contenders throughout the race. The performance of the system is tested over six video clips from two different broadcast sources. The performance analysis shows the model achieves high view-classification accuracy, with a lowest value of 83%, all in real time.

    View record details
  • A high resolution full-field range imaging system for robotic devices

    Carnegie, Dale A.; Cree, Michael J.; Dorrington, Adrian A.; Payne, Andrew D. (2005)

    Conference item
    University of Waikato

    There has been considerable effort by many researchers to develop a high resolution full-field range imaging system. Traditionally these systems rely on a homodyne technique that modulates the illumination source and shutter speed at some high frequency. These systems tend to suffer from the need to be calibrated to account for changing ambient light conditions, and generally cannot provide better than single-centimeter range resolution, and even then over a range of only a few meters. We present a system, tested to proof-of-concept stage, that is being developed for use on a range of mobile robots. The system has the potential for real-time, sub-millimeter range resolution, with minimal power and space requirements.

    View record details
  • The Waikato range imager

    Cree, Michael J.; Dorrington, Adrian A.; Conroy, Richard M.; Payne, Andrew D.; Carnegie, Dale A. (2006)

    Conference item
    University of Waikato

    We are developing a high-precision simultaneous full-field acquisition range imager. This device measures range with sub-millimetre precision simultaneously over a full-field view of the scene. Laser diodes are used to illuminate the scene with amplitude modulation at frequencies from 10 MHz up to 100 MHz. The received light is interrupted by a high-speed shutter operating in a heterodyne configuration, thus producing a low-frequency signal which is sampled with a digital camera. By detecting the phase of the signal at each pixel the range to the scene is determined. We show 3D reconstructions of some viewed objects to demonstrate the capabilities of the ranger.
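As a rough illustration of the heterodyne principle described above, the phase of the low-frequency beat signal at a pixel converts to range via c·φ/(4π·f_mod). All numbers below (beat frequency, sample count, target range) are hypothetical, not taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase, mod_freq):
    """Phase shift of the modulation envelope -> distance (round trip halved)."""
    return C * phase / (4 * np.pi * mod_freq)

# hypothetical setup: 10 MHz modulation beaten down to a 1 Hz signal,
# sampled 29 times over one beat cycle
f_mod, f_beat, n = 10e6, 1.0, 29
t = np.arange(n) / (n * f_beat)
true_range = 1.5  # metres
phi = 4 * np.pi * f_mod * true_range / C
beat = 1.0 + 0.5 * np.cos(2 * np.pi * f_beat * t + phi)

# the envelope phase is the phase of the first DFT bin of the beat signal
est_phi = np.angle(np.fft.rfft(beat)[1])
print(phase_to_range(est_phi, f_mod))
```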

    View record details
  • Analysis of the SoftKinetic DepthSense for Range Imaging

    Cree, Michael J.; Streeter, Lee V.; Conroy, Richard M.; Dorrington, Adrian A. (2013)

    Conference item
    University of Waikato

    We analyse the SoftKinetic DepthSense 325 range imaging camera for precision and accuracy in ranging out to 3 m. Flat planar targets (one a grey board, the other made from retroreflective material) are imaged at a variety of distances. Straightforward image processing is used to identify the target and calculate the range and the root mean square variation in ranging to the target. It is found that inaccuracies in ranging of up to 2 cm occur to the grey board when imaging over 0 m to 1.5 m and the precision in ranging degrades from just below 1 cm at 0 m to almost 10 cm at 1.5 m. Similar inaccuracies occur with the retroreflective target but the precision is always under 1 cm even out to 3 m due to the strong signal return received from the target.

    View record details
  • Combination of Mean Shift of Colour Signature and Optical Flow for Tracking During Foreground and Background Occlusion

    Hedayati, M.; Cree, Michael J.; Scott, Jonathan B. (2015)

    Conference item
    University of Waikato

    This paper proposes multiple-hypothesis tracking for multiple-object tracking with a moving camera. The proposed model makes use of the stability of sparse optical flow along with the invariance of colour under size and pose variation, by merging the colour properties of objects into optical flow tracking. To evaluate the algorithm, five videos are selected from broadcast horse races, where each video presents a different challenge from the object-tracking literature. A comparison of the proposed method with colour-based mean shift tracking demonstrates a significant improvement in the accuracy and stability of object tracking.

    View record details
  • Vectorised SIMD Implementations of Morphology Algorithms

    Cree, Michael J. (2015)

    Conference item
    University of Waikato

    We explore vectorised implementations, exploiting single instruction multiple data (SIMD) CPU instructions on commonly used architectures, of three efficient algorithms for morphological dilation and erosion. We discuss issues specific to SIMD implementation and describe how they guide algorithm choice. We compare our implementations to a commonly used open-source SIMD-accelerated machine vision library and find orders of magnitude speed-ups can be achieved for erosions using two-dimensional structuring elements.
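The family of efficient erosion algorithms referred to above includes the van Herk/Gil-Werman running-minimum scheme. A scalar (non-SIMD) NumPy sketch for a rectangular structuring element, exploiting separability, might look like this; the border handling (+inf padding, left-anchored window) is an assumption for illustration, not the paper's choice.

```python
import numpy as np

def erode_1d(f, k):
    """van Herk/Gil-Werman erosion: sliding minimum over windows f[i..i+k-1],
    padded with +inf at the right border. O(1) comparisons per output pixel."""
    n = len(f)
    fp = np.concatenate([f, np.full((-n) % k, np.inf)])
    blocks = fp.reshape(-1, k)
    g = np.minimum.accumulate(blocks, axis=1).ravel()                    # forward block mins
    h = np.minimum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # backward block mins
    g = np.concatenate([g, np.full(k, np.inf)])
    # each window spans at most two blocks: combine suffix and prefix mins
    return np.minimum(h[:n], g[k - 1:k - 1 + n])

def erode_rect(img, kh, kw):
    """Separable erosion with a kh-by-kw rectangular structuring element."""
    rows = np.apply_along_axis(erode_1d, 1, img, kw)
    return np.apply_along_axis(erode_1d, 0, rows, kh)

print(erode_1d(np.array([3.0, 1, 4, 1, 5, 9, 2, 6]), 3))
# → [1. 1. 1. 1. 2. 2. 2. 6.]
```

The block-wise prefix/suffix minima are exactly the kind of regular, branch-free loops that map well onto SIMD minimum instructions, which is why this algorithm family suits vectorisation.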

    View record details
  • Analysis of ICP variants for the registration of partially overlapping time-of-flight range images

    Larkins, Robert L.; Cree, Michael J.; Dorrington, Adrian A. (2010)

    Conference item
    University of Waikato

    The iterative closest point (ICP) algorithm is one of the most commonly used methods for registering partially overlapping range images. Nevertheless, this algorithm was not originally designed for this task, and many variants have been proposed in an effort to improve its proficiency. The relatively new full-field amplitude-modulated time-of-flight range imaging cameras present further complications to registration in the form of measurement errors due to mixed and scattered light. This paper investigates the effectiveness of the most common ICP variants applied to range image data acquired from full-field range imaging cameras. The original ICP algorithm combined with boundary rejection performed the same as or better than the majority of variants tested. In fact, many of these variants proved to degrade the registration alignment.

    View record details
  • A clustering based denoising technique for range images of time of flight cameras

    Schoner, H.; Moser, B.; Dorrington, Adrian A.; Payne, Andrew D.; Cree, Michael J.; Heise, B.; Bauer, F. (2008)

    Conference item
    University of Waikato

    A relatively new technique for measuring the 3D structure of visual scenes is provided by time of flight (TOF) cameras. Reflections of modulated light waves are recorded by a parallel pixel array structure. The time series at each pixel of the resulting image stream is used to estimate travelling time and thus range information. This measuring technique results in pixel-dependent noise levels, with variances changing over several orders of magnitude dependent on the illumination and material parameters. This makes application of traditional (global) denoising techniques suboptimal. Using freely available additional information from the camera and a clustering procedure, we can determine which pixels belong to the same object and what their noise levels are, which allows for locally adapted smoothing. To illustrate the success of this method, we compare it with raw camera output and a traditional method for edge-preserving smoothing, anisotropic diffusion [10, 12]. We show that this mathematical technique works without individual adaptations on two camera systems with highly different noise characteristics.

    View record details
  • Improved linearity using harmonic error rejection in a full-field range imaging system

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A. (2008)

    Conference item
    University of Waikato

    Full-field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
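The standard four-sample arctangent phase estimate, and the nonlinearity an odd (third) harmonic introduces into it, can be illustrated numerically. The harmonic amplitude of 0.15 below is an arbitrary assumption; the paper's harmonic-cancelling sampling itself is not reproduced here.

```python
import numpy as np

def phase_4sample(s0, s1, s2, s3):
    """Four-sample phase estimate: samples taken at 0, 90, 180 and 270 degrees
    of the modulation envelope."""
    return np.arctan2(s3 - s1, s0 - s2)

def envelope_samples(phi, third=0.0):
    """Four samples of the correlation waveform, optionally with a 3rd harmonic."""
    i = np.arange(4)
    step = phi + i * np.pi / 2
    return 1.0 + np.cos(step) + third * np.cos(3 * step)

phi = 0.3
clean = phase_4sample(*envelope_samples(phi))                # exact
biased = phase_4sample(*envelope_samples(phi, third=0.15))   # nonlinear error
print(clean - phi, biased - phi)
```

With a pure sinusoid the estimate is exact; with a third harmonic present, the four samples alias the harmonic into the fundamental, producing the phase-dependent range error the abstract describes.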

    View record details
  • Estimating heading direction from monocular video sequences using biologically-based sensor

    Cree, Michael J.; Perrone, John A.; Anthonys, Gehan; Garnett, Aden C.; Gouk, Henry (2016)

    Conference item
    University of Waikato

    The determination of one’s movement through the environment (visual odometry or self-motion estimation) from monocular sources such as video is an important research problem because of its relevance to robotics and autonomous vehicles. The traditional computer vision approach to this problem tracks visual features across frames in order to obtain 2-D image motion estimates from which the camera motion can be derived. We present an alternative scheme which uses the properties of motion sensitive cells in the primate brain to derive the image motion and the camera heading vector. We tested heading estimation using a camera mounted on a linear translation table with the line of sight of the camera set at a range of angles relative to straight ahead (0° to 50° in 10° steps). The camera velocity was also varied (0.2, 0.4, 0.8, 1.2, 1.6 and 2.0 m/s). Our biologically-based method produced accurate heading estimates over a wide range of test angles and camera speeds. Our approach has the advantage of being a one-shot estimator and not requiring iterative search techniques for finding the heading.

    View record details
  • Video-rate or high-precision: A flexible range imaging camera

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John Peter; Jongenelen, Adrian P.P. (2008)

    Conference item
    University of Waikato

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system’s frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with the use of more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.

    View record details
  • Verification of multi-view point-cloud registration for spherical harmonic cross-correlation

    Larkins, Robert L.; Cree, Michael J.; Dorrington, Adrian A. (2012)

    Conference item
    University of Waikato

    Spherical harmonic cross-correlation is a robust registration algorithm that brings two point-clouds of the same scene into coarse rotational alignment. The found rotation however may not give the desired alignment, as misalignments can occur if there is not enough overlap between point-clouds, or if they contain a form of symmetry. We propose a verification method whose purpose is to determine if registration has failed for a priori unknown registration. The rotational transformation between multiple clouds must satisfy internal consistency, namely multiple rotational transformations are transitive. The rotation verification is performed using triplets of images, which are cross-referenced with each other to classify rotations individually. Testing is performed on a dataset of a priori known registrations. It is found that when the number of images or the percentage of correct rotations is increased, the number of correct rotation classifications improves. Even when tested with only four images and a correct rotation percentage of 17%, the rotation verification is still considered a viable method for classifying rotations. Spherical harmonic cross-correlation is benefited by rotation verification as it provides an additional approach for checking whether found rotations are correct.

    View record details
  • Analysis of binning of normals for spherical harmonic cross-correlation

    Larkins, Robert L.; Cree, Michael J.; Dorrington, Adrian A. (2012)

    Conference item
    University of Waikato

    Spherical harmonic cross-correlation is a robust registration technique that uses the normals of two overlapping point clouds to bring them into coarse rotational alignment. This registration technique however has a high computational cost as spherical harmonics need to be calculated for every normal. By binning the normals, the computational efficiency is improved as the spherical harmonics can be pre-computed and cached at each bin location. In this paper we evaluate the efficiency and accuracy of the equiangle grid, icosahedron subdivision and the Fibonacci spiral, an approach we propose. It is found that the equiangle grid has the best efficiency as it can perform direct binning, followed by the Fibonacci spiral and then the icosahedron, all of which decrease the computational cost compared to no binning. The Fibonacci spiral produces the highest achieved accuracy of the three approaches while maintaining a low number of bins. The number of bins allowed by the equiangle grid and icosahedron are much more restrictive than the Fibonacci spiral. The performed analysis shows that the Fibonacci spiral can perform as well as the original cross-correlation algorithm without binning, while also providing a significant improvement in computational efficiency.
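The Fibonacci-spiral binning of normals proposed above can be sketched as follows. The bin count of 256 and the brute-force nearest-centre search are illustrative choices only; the point of binning is that the spherical harmonics can then be pre-computed once per bin centre.

```python
import numpy as np

def fibonacci_sphere(n):
    """n near-uniform bin centres on the unit sphere via the Fibonacci spiral."""
    k = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - (2.0 * k + 1.0) / n          # evenly spaced heights
    r = np.sqrt(1.0 - z * z)
    th = golden_angle * k                  # azimuth advances by the golden angle
    return np.column_stack([r * np.cos(th), r * np.sin(th), z])

def bin_normals(normals, centres):
    """Assign each unit normal to the nearest bin centre (largest dot product)."""
    return np.argmax(normals @ centres.T, axis=1)

centres = fibonacci_sphere(256)
normals = np.random.default_rng(1).normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
idx = bin_normals(normals, centres)
```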

    View record details
  • Toward 1-mm depth precision with a solid state full-field range imaging system

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J. (2006)

    Conference item
    University of Waikato

    Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.

    View record details
  • Heterodyne range imaging as an alternative to photogrammetry

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M. (2007)

    Conference item
    University of Waikato

    Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.

    View record details
  • Range imager performance comparison in homodyne and heterodyne operating modes

    Conroy, Richard M.; Dorrington, Adrian A.; Künnemeyer, Rainer; Cree, Michael J. (2009)

    Conference item
    University of Waikato

    Range imaging cameras measure depth simultaneously for every pixel in a given field of view. In most implementations the basic operating principles are the same. A scene is illuminated with an intensity modulated light source and the reflected signal is sampled using a gain-modulated imager. Previously we presented a unique heterodyne range imaging system that employed a bulky and power-hungry image intensifier as the high-speed gain-modulation mechanism. In this paper we present a new range imager using an internally modulated image sensor that is designed to operate in heterodyne mode, but can also operate in homodyne mode. We discuss homodyne and heterodyne range imaging, and the merits of the various types of hardware used to implement these systems. Following this we describe in detail the hardware and firmware components of our new ranger. We experimentally compare the two operating modes and demonstrate that heterodyne operation is less sensitive to some of the limitations suffered in homodyne mode, resulting in better linearity and ranging precision characteristics. We conclude by showing various qualitative examples that demonstrate the system’s three-dimensional measurement performance.

    View record details
  • Characterization of modulated time-of-flight range image sensors

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A. (2009)

    Conference item
    University of Waikato

    A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10–100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.

    View record details
  • Resolving depth measurement ambiguity with commercially available range imaging cameras

    McClure, Shane H.; Cree, Michael J.; Dorrington, Adrian A.; Payne, Andrew D. (2010)

    Conference item
    University of Waikato

    Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum useable range of the camera. This paper proposes a procedure to resolve depth ambiguity using software post processing. First, the range data is processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. This proposed method reduces the sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches. This approach is very flexible as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping the artifacts caused by moving objects at a minimum. This makes it suitable for applications such as robot vision where the camera may be moving during captures. The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by a distance of exactly one non-ambiguous range. Overall the reliability of this method is higher than the basic threshold approach, but not as high as the multiple frequency method of resolving ambiguity.
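The half-wavelength ambiguity interval mentioned above follows directly from the modulation frequency, and disambiguation amounts to recovering the wrap count per object. A minimal sketch (the 30 MHz figure is an arbitrary example, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def ambiguity_distance(f_mod):
    """Measured range repeats every half modulation wavelength: c / (2 f)."""
    return C / (2.0 * f_mod)

def disambiguate(measured, n_wraps, f_mod):
    """Recover true range once the wrap count for an object is known
    (the paper infers it from per-object average intensity)."""
    return measured + n_wraps * ambiguity_distance(f_mod)

print(ambiguity_distance(30e6))      # ~5 m at 30 MHz
print(disambiguate(2.0, 1, 30e6))    # an object one wrap beyond 2 m
```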

    View record details
  • Multiple range imaging camera operation with minimal performance impact

    Whyte, Refael Z.; Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J. (2010)

    Conference item
    University of Waikato

    Time-of-flight range imaging cameras operate by illuminating a scene with amplitude modulated light and measuring the phase shift of the modulation envelope between the emitted and reflected light. Object distance can then be calculated from this phase measurement. This approach does not work in multiple camera environments as the measured phase is corrupted by the illumination from other cameras. To minimize inaccuracies in multiple camera environments, replacing the traditional cyclic modulation with pseudo-noise amplitude modulation has been previously demonstrated. However, this technique effectively reduced the modulation frequency, therefore decreasing the distance measurement precision (which has a proportional relationship with the modulation frequency). A new modulation scheme, using maximum-length pseudo-random sequences binary-phase encoded onto the existing cyclic amplitude modulation, is presented. The effective modulation frequency therefore remains unchanged, providing range measurements with high precision. The effectiveness of the new modulation scheme was verified using a custom time-of-flight camera based on the PMD19-K2 range imaging sensor. The new pseudo-noise modulation has no significant performance decrease in a single camera environment. In a two camera environment, the precision is only reduced by the increased photon shot noise from the second illumination source.
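A minimal sketch of the ingredients named above: a maximum-length sequence generated by a linear feedback shift register, binary-phase encoded onto a cyclic carrier so each chip flips the carrier phase by π while the modulation frequency is unchanged. The polynomial, chip length and sampling are illustrative assumptions, not the parameters used with the PMD19-K2 sensor.

```python
import numpy as np

def mls(taps, nbits):
    """Maximum-length sequence from a Fibonacci LFSR; taps are feedback bit
    positions, e.g. [4, 3] for the primitive polynomial x^4 + x^3 + 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):        # one full period
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

code = mls([4, 3], 4)                      # 15-chip sequence
chips_per = 16                             # samples per chip (hypothetical)
t = np.arange(len(code) * chips_per)
carrier_phase = 2 * np.pi * t / chips_per  # one carrier cycle per chip
# binary phase encoding: a '1' chip shifts the carrier by pi
signal = np.cos(carrier_phase + np.pi * np.repeat(code, chips_per))
```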

    View record details