Introduction to Artificial Intelligence (Undergraduate Topics in Computer Science)

Wolfgang Ertel

Language: English

Pages: 316

ISBN: 0857292986

Format: PDF / Kindle (mobi) / ePub

This concise and accessible textbook supports a foundation or module course on A.I., covering a broad selection of the subdisciplines within this field. The book presents concrete algorithms and applications in the areas of agents, logic, search, reasoning under uncertainty, machine learning, neural networks and reinforcement learning.

Topics and features:
- presents an application-focused and hands-on approach to learning the subject;
- provides study exercises of varying degrees of difficulty at the end of each chapter, with solutions given at the end of the book;
- supports the text with highlighted examples, definitions, and theorems;
- includes chapters on predicate logic, PROLOG, heuristic search, probabilistic reasoning, machine learning and data mining, neural networks and reinforcement learning;
- contains an extensive bibliography for deeper reading on further topics;
- supplies additional teaching resources, including lecture slides and training data for learning algorithms, at an associated website.

Privacy Enhancing Technologies (Lecture Notes in Computer Science Series)

Mastering Microsoft Windows Server 2008 R2

Practical Text Mining with Perl (Wiley Series on Methods and Applications in Data Mining)

Digital Signal Processing: A Modern Introduction

CUDA Programming: A Developer's Guide to Parallel Computing with GPUs (Applications of GPU Computing Series)

Art of Computer Programming, Volume 3: Sorting and Searching

Formula                           Description
∀x frog(x) ⇒ green(x)             All frogs are green
∀x frog(x) ∧ brown(x) ⇒ big(x)    All brown frogs are big
∀x likes(x, cake)                 Everyone likes cake
¬∀x likes(x, cake)                Not everyone likes cake
¬∃x likes(x, cake)                No one likes cake
∃x ∀y likes(y, x)                 There is something that everyone likes
∃x ∀y likes(x, y)                 There is someone who likes everything
∀x ∃y likes(y, x)                 Everything is loved by someone
∀x ∃y likes(x, y)                 Everyone likes something
∀x customer(x) ⇒ likes(bob, x)    Bob likes every customer
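The quantified formulas in the table can be checked mechanically over a finite domain, which makes their different readings concrete. The following sketch is illustrative only: the domain, the sets standing in for predicates, and all names are assumptions, not material from the book.

```python
# Finite-domain evaluation of some quantified formulas.
# Predicates are modeled as sets; likes as a set of pairs.
domain = {"kermit", "bruno", "cake"}
frog = {"kermit", "bruno"}
green = {"kermit"}
brown = {"bruno"}
big = {"bruno"}
likes = {(p, "cake") for p in domain}  # everyone (and everything) likes cake

# ∀x frog(x) ⇒ green(x)  -- "All frogs are green"
all_frogs_green = all((x not in frog) or (x in green) for x in domain)

# ∀x frog(x) ∧ brown(x) ⇒ big(x)  -- "All brown frogs are big"
brown_frogs_big = all(not (x in frog and x in brown) or (x in big)
                      for x in domain)

# ∃x ∀y likes(y, x)  -- "There is something that everyone likes"
something_all_like = any(all((y, x) in likes for y in domain)
                         for x in domain)

print(all_frogs_green, brown_frogs_big, something_all_like)
# → False True True  (bruno is a frog but not green)
```

Note how swapping the quantifier order (∃x ∀y vs. ∀x ∃y) changes which generator is nested inside which.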

difficult combinatorial search problems, even iterative deepening usually fails due to the size of the search space. Heuristic search helps here through its reduction of the effective branching factor. The IDA⋆-algorithm, like iterative deepening, is complete and requires very little memory. Naturally, heuristics only give a significant advantage if the heuristic is “good”. When solving difficult search problems, the developer’s actual task consists of designing heuristics which greatly reduce
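The excerpt's combination of iterative deepening with a cost bound f = g + h can be sketched as follows. The toy graph, the zero heuristic, and all function names are illustrative assumptions, not the book's code.

```python
# Minimal IDA* sketch: depth-first search bounded by f = g + h, with the
# bound raised to the smallest exceeding f-value after each failed round.
def ida_star(start, goal, neighbors, h):
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                  # report the f-value that broke the bound
        if node == goal:
            return "FOUND"
        minimum = float("inf")
        for succ, cost in neighbors(node):
            if succ in path:          # avoid cycles along the current path
                continue
            path.append(succ)
            t = search(g + cost, bound)
            if t == "FOUND":
                return "FOUND"
            minimum = min(minimum, t)
            path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t == "FOUND":
            return list(path)
        if t == float("inf"):
            return None               # no solution exists
        bound = t                     # raise the bound and restart

# Toy graph with unit edge costs; h ≡ 0 degenerates to plain iterative deepening.
graph = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)], "C": [("D", 1)], "D": []}
print(ida_star("A", "D", lambda n: graph[n], lambda n: 0))  # → ['A', 'B', 'D']
```

Because only the current path is stored, memory use is linear in the solution depth, which is the property the excerpt highlights.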

explosion of the search space, however, defined a very narrow window for these successes. This phase of AI was described in [RN10] as the “Look, Ma, no hands!” era. Because the economic success of AI systems fell short of expectations, funding for logic-based AI research in the United States fell dramatically during the 1980s. 1.2.3 The New Connectionism During this phase of disillusionment, computer scientists, physicists, and cognitive scientists were able to show, using computers which were

see that the whole expression becomes zero as soon as one of the factors P(S_i|D) on the right side becomes zero. Theoretically there is nothing wrong here. In practice, however, this can lead to very uncomfortable effects when the P(S_i|D) are small, because they are estimated by counting frequencies and substituting these into the formula. Assume that for a variable S_i we have P(S_i=x|D=y) = 0.01 and that there are 40 training cases with D=y. Then with probability (1 − 0.01)^40 ≈ 0.67 there is no training case with S_i=x, and the estimated frequency for P(S_i=x|D=y) is zero.
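The numbers in the excerpt can be verified directly, and Laplace (add-one) smoothing is a standard remedy for the resulting zero estimates. The probability and training-set size follow the text; the number of values k for S_i is an illustrative assumption.

```python
# Worked check of the zero-frequency effect, plus Laplace smoothing.
p = 0.01   # true conditional probability P(S_i = x | D = y)
n = 40     # number of training cases with D = y

# Probability that none of the 40 cases shows S_i = x:
prob_no_case = (1 - p) ** n
print(round(prob_no_case, 3))    # → 0.669

# The raw frequency estimate would then be 0/40 = 0, zeroing the whole
# product. Laplace smoothing (count + 1) / (n + k) keeps it positive:
k = 2                            # assumed number of values S_i can take
smoothed = (0 + 1) / (n + k)
print(round(smoothed, 4))        # → 0.0238, i.e. 1/42 instead of 0
```

So in roughly two runs out of three, naive frequency counting would wipe out the entire product for an event that actually occurs 1% of the time.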

Rosenfeld. Neurocomputing: Foundations of Research. MIT Press, Cambridge, 1988. Collection of fundamental original papers.
[Bis05] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, London, 2005.
[Bur98] C. J. Burges. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov., 2(2):121–167, 1998.
[ESS89] W. Ertel, J. Schumann, and Ch. Suttner. Learning heuristics for a theorem prover using back propagation. In J. Retti and
