Machine Learning: The Art and Science of Algorithms that Make Sense of Data

Language: English

Pages: 409

ISBN: 1107422221

Format: PDF / Kindle (mobi) / ePub


As one of the most comprehensive machine learning texts around, this book does justice to the field's incredible richness, but without losing sight of the unifying principles. Peter Flach's clear, example-based approach begins by discussing how a spam filter works, which gives an immediate introduction to machine learning in action, with a minimum of technical fuss. Flach provides case studies of increasing complexity and variety with well-chosen examples and illustrations throughout. He covers a wide range of logical, geometric and statistical models and state-of-the-art topics such as matrix factorisation and ROC analysis. Particular attention is paid to the central role played by features. The use of established terminology is balanced with the introduction of new and useful concepts, and summaries of relevant background material are provided with pointers for revision if necessary. These features ensure Machine Learning will set a new standard as an introductory textbook.

Designing the Internet of Things

Kernel Adaptive Filtering: A Comprehensive Introduction

Language and Computers

Introducing Maven

Understanding and Using C Pointers

analysis, about which you will learn more in Chapter 10. Finally, we scale the data to unit variance along each coordinate. (Background 1.2: Linear transformations.) … rather than being derived from a global model built from the entire data set. There is a nice relationship between Euclidean distance and the mean of a set of points: there is no other point which has smaller total squared Euclidean distance to the given points (see Theorem 8.1 on
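The relationship between Euclidean distance and the mean mentioned above is easy to check numerically. The following is a minimal sketch using NumPy with made-up data; it is not code from the book, only an illustration of the property referred to in the excerpt:

    import numpy as np

    # Illustration (not from the book): the mean of a set of points minimises
    # the total squared Euclidean distance to those points.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))          # 20 made-up points in 2D
    mean = X.mean(axis=0)

    def total_sq_dist(y, X):
        # Sum of squared Euclidean distances from candidate point y to all rows of X.
        return ((X - y) ** 2).sum()

    # Compare the mean against a couple of other candidate points:
    for candidate in [mean, np.zeros(2), rng.normal(size=2)]:
        print(candidate, total_sq_dist(candidate, X))
    # The mean always gives the smallest total (cf. the theorem cited above).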

between the sixth spam e-mail and the third ham e-mail, and we can take the average of their scores as the decision threshold (0.28). An alternative way of finding the optimal point is to iterate over all possible split points – from before the top ranked e-mail to after the bottom one – and calculate the number of correctly classified examples at each split: 4 – 5 – 6 – 5 – 6 – 7 – 6 – 7 – 8 – 7 – 6. The maximum is achieved at the same split point, yielding an accuracy of 0.80. A useful trick to
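That split-point search can be spelled out in a few lines. In the sketch below, the label sequence (1 = spam, 0 = ham, ranked from highest to lowest score) is reconstructed from the counts quoted in the excerpt; everything else is illustrative code rather than anything taken from the book:

    # Ranked labels reconstructed from the counts 4-5-6-5-6-7-6-7-8-7-6 above.
    labels = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0]

    counts = []
    # Split point i predicts 'spam' for the first i e-mails and 'ham' for the rest.
    for i in range(len(labels) + 1):
        correct = sum(labels[:i]) + (len(labels) - i - sum(labels[i:]))
        counts.append(correct)

    print(counts)                     # [4, 5, 6, 5, 6, 7, 6, 7, 8, 7, 6]
    print(max(counts) / len(labels))  # 0.8, the accuracy at the best split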

will illustrate the main idea. Example 1 (Linear classification). Suppose we have only two tests and four training e-mails, one of which is spam (see Table 1). Both tests succeed for the spam e-mail.

E-mail   x1   x2   Spam?   4x1 + 4x2
1        1    1    1       8
2        0    0    0       0
3        1    0    0       4
4        0    1    0       4

Table 1. A small training set for SpamAssassin. The columns marked x1 and x2 indicate the results of two tests on four different e-mails. The fourth column indicates which
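The last column of Table 1 suggests a simple weighted-score classifier. The weights (4, 4) come from the excerpt; the decision threshold of 5 used below is an assumption made here purely for illustration:

    # Minimal sketch of the linear classifier hinted at by the last column of Table 1.
    # Weights (4, 4) are from the excerpt; the threshold of 5 is an assumption.
    WEIGHTS = (4, 4)
    THRESHOLD = 5

    def classify(x1, x2, weights=WEIGHTS, threshold=THRESHOLD):
        score = weights[0] * x1 + weights[1] * x2
        return 1 if score > threshold else 0    # 1 = spam, 0 = ham

    emails = [(1, 1), (0, 0), (1, 0), (0, 1)]   # the four training e-mails of Table 1
    print([classify(x1, x2) for x1, x2 in emails])   # [1, 0, 0, 0] matches the Spam? column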

10. For the moment, the following observations give some idea of how we can learn a threshold on a numerical feature: although in theory there are infinitely many possible thresholds, in practice we only need to consider values separating two examples that end up next to each other if we sort the training examples on increasing (or decreasing) value of the feature. We only consider consecutive examples of different classes (if our task is classification), or whose target values are sufficiently different if
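A minimal sketch of that idea for the classification case (my own illustration, not code from the book): sort the examples by feature value and take the midpoint between each pair of neighbouring examples that belong to different classes.

    def candidate_thresholds(values, labels):
        # Candidate split points for a numeric feature: midpoints between
        # consecutive (sorted) examples whose class labels differ.
        pairs = sorted(zip(values, labels))
        thresholds = []
        for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
            if c1 != c2 and v1 != v2:
                thresholds.append((v1 + v2) / 2)
        return thresholds

    # Invented example data: five feature values with class labels 0/1.
    print(candidate_thresholds([1.0, 2.0, 3.5, 4.0, 6.0], [0, 0, 1, 0, 1]))
    # -> [2.75, 3.75, 5.0]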

ensures that [5+, 1−] is ‘corrected’ to [6+, 2−] and thus considered to be of the same quality as [2+, 0−] aka [3+, 1−] (Figure 6.10 (bottom)). Another way to reduce myopia further and break such ties is to employ a beam search: rather than greedily going for the best candidate, we maintain a fixed number of alternate candidates. In the example, a small beam size would already allow us to find the more general rule: the first beam would include the candidate
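The tie described at the start of that passage can be checked with a line of arithmetic. Assuming the correction in question adds one pretend positive and one pretend negative to the coverage counts before taking the precision (the usual Laplace correction), the two rules indeed come out equal; this check is my own illustration rather than code from the book:

    def laplace_precision(pos, neg):
        # Precision after adding one pretend positive and one pretend negative.
        return (pos + 1) / (pos + neg + 2)

    print(laplace_precision(5, 1))   # [5+, 1-] -> 6/8 = 0.75
    print(laplace_precision(2, 0))   # [2+, 0-] -> 3/4 = 0.75, hence the tie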
