David Powers

Recent Submissions

Now showing 1 - 13 of 13
  • Item
    Bioplausible multiscale filtering in retinal to cortical processing as a model of computer vision
    (SCITEPRESS, 2015-01) Nematzadeh, Nasim; Lewis, Trent Wilson; Powers, David Martin
    Visual illusions have emerged as an attractive field of research with the discovery, over the last century, of a variety of deep and mysterious mechanisms of visual information processing in the human visual system. Among the many classes of visual illusion relating to shape, brightness, colour and motion, “geometrical illusions” are essentially based on the misperception of orientation, size, and position. The main focus of this paper is on illusions of orientation, sometimes referred to as “tilt illusions”, in which parallel lines appear not to be parallel, a straight line is perceived as curved, or angles where lines intersect appear larger or smaller than they are. Although some low-level and high-level explanations have been proposed for geometrical tilt illusions, a systematic explanation based on model predictions of both illusion magnitude and local tilt direction remains an open issue. Here a neurophysiological model is expounded, based on a Difference of Gaussians implementation of a classical receptive field model of retinal processing, that predicts tilt illusion effects.
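    The model's core operation, a Difference of Gaussians (DoG) filter approximating the classical centre-surround receptive field, is easy to sketch. The following is a minimal illustration rather than the paper's implementation; the kernel size and the two standard deviations are placeholder values.

    ```python
    import numpy as np

    def dog_kernel(size: int, sigma_c: float, sigma_s: float) -> np.ndarray:
        """Difference-of-Gaussians kernel: a narrow excitatory centre minus
        a broad inhibitory surround, the classical retinal receptive-field
        model. Requires sigma_c < sigma_s; all values here are illustrative."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        centre = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
        surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
        return centre - surround

    # Convolving an illusion image with DoG kernels at several scales
    # ("multiscale filtering") yields the edge map from which predicted
    # tilt magnitude and direction would be read off.
    kernel = dog_kernel(size=15, sigma_c=1.5, sigma_s=3.0)
    ```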
  • Item
    Multiplying the Mileage of Your Dataset with Subwindowing
    (Springer Berlin Heidelberg, 2011) Atyabi, Adham; Fitzgibbon, Sean Patrick; Powers, David Martin
    This study is focused on improving the classification performance of EEG data through the use of data restructuring methods. In particular, the impact of having more training instances/samples versus using shorter window sizes is investigated, with the BCI2003 IVa dataset used to examine the results. The results indicate, not surprisingly, that up to a certain point having more training instances significantly improves classification performance, while shorter window sizes tend to worsen performance in a way that usually cannot be fully compensated for by the additional instances; nevertheless, small divisions into two or three subepochs tend to provide a useful gain in overall performance. We have moreover determined that use of an incomplete set of overlapping windows can have little effect, and is inapplicable for the smallest divisors, but that use of overlapping subepochs from three specific non-overlapping areas (start, middle and end) of a superepoch tends to contribute significant additional information. Examination of a division into five equal non-overlapping areas indicates that for some subjects the first or last fifth contributes significantly less information than the middle three fifths.
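    A minimal sketch of the subwindowing idea, assuming each labelled epoch is stored as a channels × samples array; the function name and parameters are mine, not the paper's.

    ```python
    import numpy as np

    def subwindow(epoch: np.ndarray, n_sub: int, overlap: float = 0.0) -> list:
        """Split one EEG epoch (channels x samples) into n_sub subepochs,
        each inheriting the parent epoch's label, so the training set grows
        at the cost of shorter windows. overlap is the fraction of the
        subepoch length shared by consecutive windows."""
        n_samples = epoch.shape[1]
        win = n_samples // n_sub
        step = max(1, int(win * (1 - overlap)))
        return [epoch[:, s:s + win] for s in range(0, n_samples - win + 1, step)]

    epoch = np.random.randn(64, 3000)    # e.g. 64 channels, 3000 samples
    thirds = subwindow(epoch, n_sub=3)   # a small divisor of the kind the
                                         # abstract reports as beneficial
    halves_overlapped = subwindow(epoch, n_sub=2, overlap=0.5)
    ```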
  • Item
    Towards a brain-controlled Wheelchair Prototype
    (BCS, 2010) Yazdani, Naisan; Khazab, Fatemah; Fitzgibbon, Sean Patrick; Luerssen, Martin Holger; Powers, David Martin; Clark, Christopher Richard
    In this project, a design for a non-invasive, EEG-based brain-controlled wheelchair has been developed for use by completely paralyzed patients. The proposed design includes a novel approach for selecting optimal electrode positions, a series of signal processing algorithms and an interface to a powered wheelchair. In addition, a 3D virtual environment has been implemented for training, evaluating and testing the system prior to establishing the wheelchair interface. Simulation of a virtual scenario replicating the real world gives subjects an opportunity to become familiar with operating the device prior to engaging the wheelchair.
  • Item
    Multiplication of EEG Samples through Replicating, Biasing, and Overlapping
    (Springer Berlin Heidelberg, 2012) Atyabi, Adham; Fitzgibbon, Sean Patrick; Powers, David Martin
    EEG recording is a time-consuming operation during which the subject is expected to stay still for a long time performing tasks. It is reasonable to expect some fluctuation in the level of focus toward the performed task during the task period. This study is focused on investigating various approaches for emphasizing regions of interest during the task period. Dividing the task period into three segments (beginning, middle and end) can be expected to improve overall classification performance by shifting the concentration of the training samples toward regions in which the subject had better concentration on the performed tasks. This issue is investigated through the use of techniques such as i) replication, ii) biasing, and iii) overlapping. A dataset with 4 motor imagery tasks (BCI Competition III dataset IIIa) is used. The results illustrate the existing variation within the potential of different segments of the task period and the feasibility of techniques that focus the training samples toward such regions.
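    A rough sketch of the replication and biasing ideas follows (overlapping works as in the subwindowing sketch above); the names and factors are illustrative rather than the paper's exact schemes.

    ```python
    def emphasise(subepochs: list, labels: list, favoured: set,
                  mode: str = "replicate", factor: int = 2):
        """Concentrate training data on favoured subepochs, e.g. the middle
        segment of each trial. 'replicate' duplicates the favoured
        instances; 'bias' keeps the data fixed but raises their weights."""
        if mode == "replicate":
            extra = [(x, y) for i, (x, y) in enumerate(zip(subepochs, labels))
                     if i in favoured for _ in range(factor - 1)]
            subepochs = subepochs + [x for x, _ in extra]
            labels = labels + [y for _, y in extra]
            weights = [1.0] * len(subepochs)
        else:
            weights = [float(factor) if i in favoured else 1.0
                       for i in range(len(subepochs))]
        return subepochs, labels, weights
    ```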
  • Item
    A computationally and cognitively plausible model of supervised and unsupervised learning
    (Springer-Verlag, 2013-01-01) Powers, David Martin
    The issue of chance correction has been discussed for many decades in the context of statistics, psychology and machine learning, with multiple measures being shown to have desirable properties, including various definitions of Kappa or Correlation, and the psychologically validated ΔP measures. In this paper, we discuss the relationships between these measures, showing that they form part of a single family of measures, and that using an appropriate measure can positively impact learning.
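    In the binary case the family relationship can be written compactly. The following is a sketch in my own notation (P = prediction, R = real class; the ΔP measures are the psychologists' cue-validity differences):

    ```latex
    \Delta P' \;=\; \Pr(P^{+}\mid R^{+}) - \Pr(P^{+}\mid R^{-})
              \;=\; \mathrm{Recall} + \mathrm{InverseRecall} - 1
              \quad\text{(Informedness)} \\
    \Delta P  \;=\; \Pr(R^{+}\mid P^{+}) - \Pr(R^{+}\mid P^{-})
              \;=\; \mathrm{Precision} + \mathrm{InversePrecision} - 1
              \quad\text{(Markedness)} \\
    r_{\mathrm{Matthews}} \;=\; \operatorname{sgn}(\Delta P')\,\sqrt{\Delta P'\,\Delta P}
    ```

    The last identity, the Matthews correlation as the geometric mean of the two ΔP measures, is one sense in which these measures form a single family.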
  • Item
    PSO-based dimension reduction of EEG recordings: Implications for subject transfer in BCI
    (Elsevier, 2013-11) Atyabi, Adham; Luerssen, Martin Holger; Powers, David Martin
  • Item
    Surface Laplacian of Central Scalp Electrical Signals is Insensitive to Muscle Contamination
    (IEEE - Institute of Electrical and Electronics Engineers, 2013-01) Fitzgibbon, Sean Patrick; Lewis, Trent Wilson; Powers, David Martin; Whitham, Emma Mary; Willoughby, John Osborne; Pope, Kenneth
    Objective: To investigate the effects of surface Laplacian processing on gross and persistent electromyographic (EMG) contamination of electroencephalographic (EEG) signals in electrical scalp recordings. Methods: We made scalp recordings during passive and active tasks, on awake subjects in the absence and in the presence of complete neuromuscular blockade. Three scalp surface Laplacian estimators were compared to left-ear and common average reference (CAR). Contamination was quantified by comparing power after paralysis (brain signal, B) with power before paralysis (brain plus muscle signal, B+M). Brain:Muscle (B:M) ratios for the methods were calculated using B, with the drop in power after paralysis representing muscle (M). Results: There were very small power differences after paralysis up to 600 Hz using surface Laplacian transforms (B:M > 6 above 30 Hz in central scalp leads). Conclusions: Scalp surface Laplacian transforms reduce muscle power in central and peri-central leads to less than one sixth of the brain signal, giving 2-3 times better signal detection than CAR. Significance: Scalp surface Laplacian transformations provide robust estimates for detecting high-frequency (gamma) activity, for assessing electrophysiological correlates of disease, and for providing a measure of brain electrical activity for use as a ‘standard’ in the development of brain/muscle signal separation methods.
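    The contamination metric is simple arithmetic on spectral power; a sketch under the abstract's definitions (variable names mine):

    ```python
    import numpy as np

    def brain_muscle_ratio(power_before: np.ndarray,
                           power_after: np.ndarray) -> np.ndarray:
        """Power after paralysis is brain-only (B); the drop from before
        paralysis (B+M) to after estimates muscle power (M). Inputs are
        spectral power per frequency bin or band; B:M > 6 means muscle
        contributes less than one sixth of the brain signal."""
        B = power_after
        M = np.maximum(power_before - power_after, 1e-12)  # guard against zero
        return B / M
    ```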
  • Item
    Evaluation: from Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation
    (Bioinfo Publications, 2011-12-15) Powers, David Martin
    Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without clear understanding of the biases, and corresponding identification of chance or base-case levels of the statistic. Using these measures, a system that performs worse in the objective sense of Informedness can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that prediction is informed versus chance, under the name Informedness, and introduce Markedness as a dual measure for the probability that prediction is marked versus chance. Finally, we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance, as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case.
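    A minimal sketch of the two headline measures for the dichotomous case (the paper's multi-class generalization is not shown):

    ```python
    def informedness_markedness(tp: int, fn: int, fp: int, tn: int):
        """Informedness = Recall + InverseRecall - 1: the probability that
        the prediction is informed versus chance. Markedness = Precision +
        InversePrecision - 1: the probability that the prediction is marked
        versus chance. Their geometric mean recovers the magnitude of the
        Matthews correlation."""
        recall = tp / (tp + fn)         # true positive rate
        inv_recall = tn / (tn + fp)     # true negative rate
        precision = tp / (tp + fp)
        inv_precision = tn / (tn + fn)
        return recall + inv_recall - 1, precision + inv_precision - 1

    # e.g. informedness_markedness(80, 20, 30, 70) -> (0.5, 0.5050...)
    ```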
  • Item
    Evaluation Evaluation: a Monte Carlo study
    (IOS Press, 2008-07) Powers, David Martin
    Over the last decade there has been increasing concern about the biases embodied in traditional evaluation methods for Natural Language Processing/Learning, particularly methods borrowed from Information Retrieval. Without knowledge of the Bias and Prevalence of the contingency being tested, or equivalently the expectation due to chance, the simple conditional probabilities Recall, Precision and Accuracy are not meaningful as evaluation measures, either individually or in combinations such as F-factor. The existence of bias in NLP measures leads to the ‘improvement’ of systems by increasing their bias, such as the practice of improving tagging and parsing scores by using the most common value (e.g. water is always a Noun) rather than attempting to discover the correct one. The measures Cohen Kappa and Powers Informedness are discussed as unbiased alternatives to Recall and related to the psychologically significant measure DeltaP. In this paper we analyze both biased and unbiased measures theoretically, characterizing the precise relationship between all these measures, as well as evaluating the evaluation measures themselves empirically using a Monte Carlo simulation.
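    The flavour of such a simulation is easy to reproduce; here is a toy version whose design details are mine, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_system(n=10_000):
        """Draw a random prevalence and guessing bias, and a predictor that
        is informed half the time and guesses (with that bias) otherwise,
        then tabulate the 2x2 contingency counts."""
        prev, bias = rng.uniform(0.05, 0.95, size=2)
        real = rng.random(n) < prev
        pred = np.where(rng.random(n) < 0.5, real, rng.random(n) < bias)
        tp, fp = np.sum(pred & real), np.sum(pred & ~real)
        fn, tn = np.sum(~pred & real), np.sum(~pred & ~real)
        return tp, fn, fp, tn

    for _ in range(3):
        tp, fn, fp, tn = random_system()
        recall = tp / (tp + fn)                   # biased measure
        informedness = recall - fp / (fp + tn)    # chance-corrected
        print(f"recall={recall:.2f}  informedness={informedness:.2f}")
    # Recall swings with the random prevalence and bias, while
    # informedness stays near the true 0.5 rate of informed decisions.
    ```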
  • Item
    Adabook and Multibook: adaptive boosting with chance correction
    (2013-09) Powers, David Martin
    There has been considerable interest in boosting and bagging, including the combination of the adaptive techniques of AdaBoost with the random selection-with-replacement techniques of Bagging. At the same time there has been a revisiting of the way we evaluate, with chance-corrected measures like Kappa, Informedness, Correlation or ROC AUC being advocated. This leads to the question of whether learning algorithms can do better by optimizing an appropriate chance-corrected measure. Indeed, it is possible for a weak learner to optimize Accuracy to the detriment of the more realistic chance-corrected measures, and when this happens the booster can give up too early. This phenomenon is known to occur with conventional Accuracy-based AdaBoost, and the MultiBoost algorithm has been developed to overcome such problems using restart techniques based on bagging. This paper thus complements the theoretical work showing the necessity of using chance-corrected measures for evaluation with empirical work showing how use of a chance-corrected measure can improve boosting. We show that the early-surrender problem occurs in MultiBoost too, in multiclass situations, so that chance-corrected AdaBook and MultiBook can beat standard MultiBoost or AdaBoost, and we further identify which chance-corrected measures to use when.
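    The gist of the fix can be sketched in a few lines; this illustrates the stopping-rule issue and is not the published AdaBook/MultiBook code.

    ```python
    import numpy as np

    def weighted_informedness(y_true, y_pred, w):
        """Weighted binary informedness (Bookmaker) = TPR - FPR, computed
        over the boosting weights w; zero means the weak hypothesis is at
        chance even if its weighted accuracy looks high under class skew."""
        pos, neg = (y_true == 1), (y_true == 0)
        tpr = np.sum(w[pos & (y_pred == 1)]) / np.sum(w[pos])
        fpr = np.sum(w[neg & (y_pred == 1)]) / np.sum(w[neg])
        return tpr - fpr

    # Conventional AdaBoost surrenders when weighted error reaches 0.5,
    # i.e. when *accuracy* hits chance; under skew a weak learner can sit
    # at high accuracy yet zero informedness, so a chance-corrected
    # booster would instead continue only while informedness > 0.
    ```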
  • Item
    The problem with Kappa
    (Association for Computational Linguistics, 2012-04) Powers, David Martin
    It is becoming clear that traditional evaluation measures used in Computational Linguistics (including Error Rates, Accuracy, Recall, Precision and F-measure) are of limited value for unbiased evaluation of systems, and are not meaningful for comparison of algorithms unless both the dataset and algorithm parameters are strictly controlled for skew (Prevalence and Bias). The use of techniques originally designed for other purposes, in particular Receiver Operating Characteristics Area Under Curve, plus variants of Kappa, have been proposed to fill the void. This paper aims to clear up some of the confusion relating to evaluation, by demonstrating that the usefulness of each evaluation method is highly dependent on the assumptions made about the distributions of the dataset and the underlying populations. The behaviour of a number of evaluation measures is compared under common assumptions. Deploying a system in a context which has the opposite skew from its validation set can be expected to approximately negate Fleiss Kappa and halve Cohen Kappa but leave Powers Kappa unchanged. For most performance evaluation purposes, the latter is thus most appropriate, whilst for comparison of behaviour, Matthews Correlation is recommended.
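    The skew-reversal claim is easy to check numerically. A worked example with made-up counts: a classifier with sensitivity 0.9 and specificity 0.7, deployed at prevalence 0.8 and then at the opposite prevalence 0.2.

    ```python
    def cohen_kappa(tp, fn, fp, tn):
        """Cohen's kappa: accuracy corrected by the agreement expected
        from the two sets of marginals under independence."""
        n = tp + fn + fp + tn
        po = (tp + tn) / n
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
        return (po - pe) / (1 - pe)

    def informedness(tp, fn, fp, tn):
        return tp / (tp + fn) - fp / (fp + tn)   # Powers Kappa / Bookmaker

    for table in [(720, 80, 60, 140),    # prevalence 0.8
                  (180, 20, 240, 560)]:  # prevalence 0.2 (opposite skew)
        print(f"kappa={cohen_kappa(*table):.3f}  "
              f"informedness={informedness(*table):.3f}")
    # Cohen kappa drops from 0.578 to 0.425 when the skew is reversed,
    # while informedness stays at 0.600 for the identical classifier.
    ```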
  • Item
    Recall & Precision versus The Bookmaker
    (International Conference on Cognitive Science, 2003-07-13) Powers, David Martin
    In the evaluation of models, theories, information retrieval systems, learning systems and neural networks we must deal with the ubiquitous contingency matrix of decisions versus events. In general this is manifested as the result matrix for a series of experiments aimed at predicting or labeling a series of events. The classical evaluation techniques come from information retrieval, using recall and precision as measures. These are now applied well beyond this field, but unfortunately they have fundamental flaws, are frequently abused, and can prefer substandard models. This paper proposes a well-principled evaluation technique that better takes into account the negative effect of an incorrect result and is directly quantifiable as the probability that an informed decision was made rather than a random guess.
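    The closing claim, that the measure is directly quantifiable as the probability that an informed decision was made rather than a random guess, can be verified with a toy simulation (all parameter values illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def bookmaker_estimate(b, prevalence=0.3, guess_bias=0.7, n=100_000):
        """Decisions are informed (copy the truth) with probability b and
        otherwise guess positive with probability guess_bias. TPR - FPR
        (the binary Bookmaker statistic) recovers b regardless of the
        prevalence or the guessing bias."""
        truth = rng.random(n) < prevalence
        informed = rng.random(n) < b
        pred = np.where(informed, truth, rng.random(n) < guess_bias)
        return pred[truth].mean() - pred[~truth].mean()   # TPR - FPR

    print(bookmaker_estimate(0.4))   # ~0.4 for any prevalence / bias
    ```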
  • Item
    Verb similarity on the taxonomy of WordNet
    (Masaryk University, 2006) Yang, Dongqiang; Powers, David Martin