Artificial Intelligence: A Modern Approach, 3/e (GE-Paperback)

Author: Stuart Russell, Peter Norvig
Publisher: Pearson FT Press
Publication date: 2016-05-01
ISBN-13: 9781292153964
ISBN-10: 1292153962
Pages: 1152


Description

For one- or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.
The long-anticipated revision of this best-selling text offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence.
 
Features  

Nontechnical learning material.

Provides a simple overview of major concepts in nontechnical language to help increase understanding, making the book accessible to a broader range of students.

The Internet as a sample application for intelligent systems — Examples of logical reasoning, planning, and natural language processing using Internet agents.

Promotes student interest with relevant, engaging exercises.

Increased coverage of material — New or expanded coverage of constraint satisfaction, local search planning methods, multi-agent systems, game theory, statistical natural language processing and uncertain reasoning over time. More detailed descriptions of algorithms for probabilistic inference, fast propositional inference, probabilistic learning approaches including EM, and other topics.

Brings students up to date on the latest technologies, and presents concepts in a more unified manner.

Updated and expanded exercises — 30% of the exercises are revised or NEW.

More Online Software.

Allows many more opportunities for student projects on the web.

A unified, agent-based approach to AI — Organizes the material around the task of building intelligent agents.

Shows students how the various subfields of AI fit together to build actual, useful programs.

Comprehensive, up-to-date coverage — Includes a unified view of the field organized around the rational decision making paradigm.

A flexible format.

Makes the text adaptable to instructors' varying preferences.

In-depth coverage of basic and advanced topics.

Provides students with a basic understanding of the frontiers of AI without sacrificing complexity or depth.

Pseudo-code versions of the major AI algorithms are presented in a uniform fashion, and actual Common Lisp and Python implementations of the presented algorithms are available via the Internet.

Gives instructors and students a choice of projects; reading and running the code increases understanding.
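
To give a flavor of that uniform presentation, below is a minimal Python sketch of breadth-first graph search, one of the book's basic search algorithms. It is an illustrative rendering under simplified assumptions (the function signature and the toy road map, loosely based on the book's Romania example, are my own), not code taken from the online repository.

```python
# A minimal sketch in the spirit of the book's uniform pseudocode style:
# breadth-first graph search. This rendering and the toy map below are
# illustrative only, not code from the online repository.
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Return a list of states from start to a goal state, or None."""
    if goal_test(start):
        return [start]
    frontier = deque([start])      # FIFO queue of states awaiting expansion
    parents = {start: None}        # doubles as the explored/reached set
    while frontier:
        state = frontier.popleft()
        for child in successors(state):
            if child in parents:   # already reached; skip
                continue
            parents[child] = state
            if goal_test(child):   # goal test applied when a node is generated
                path = [child]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return list(reversed(path))
            frontier.append(child)
    return None                    # frontier exhausted: no solution exists

# Toy usage on a small map loosely based on the book's Romania example.
roads = {"Arad": ["Sibiu", "Timisoara"], "Sibiu": ["Arad", "Fagaras"],
         "Timisoara": ["Arad"], "Fagaras": ["Sibiu", "Bucharest"],
         "Bucharest": ["Fagaras"]}
print(breadth_first_search("Arad", lambda s: s == "Bucharest", roads.__getitem__))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```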

Author-Maintained Website

Visit http://aima.cs.berkeley.edu/ for text-related comments and discussions, AI resources on the Web, the online code repository, instructor resources, and more!

 
New to this Edition
This edition captures the changes in AI that have taken place since the last edition in 2003. There have been important applications of AI technology, such as the widespread deployment of practical speech recognition, machine translation, autonomous vehicles, and household robotics. There have been algorithmic landmarks, such as the solution of the game of checkers. And there has been a great deal of theoretical progress, particularly in areas such as probabilistic reasoning, machine learning, and computer vision. Most important from the authors' point of view is the continued evolution in how we think about the field, and thus how the book is organized. The major changes are as follows:

More emphasis is placed on partially observable and nondeterministic environments, especially in the nonprobabilistic settings of search and planning. The concepts of belief state (a set of possible worlds) and state estimation (maintaining the belief state) are introduced in these settings (a small code sketch after this list illustrates the idea); later in the book, probabilities are added.
In addition to discussing the types of environments and types of agents, there is more in-depth coverage of the types of representations that an agent can use. The book distinguishes between atomic representations (in which each state of the world is treated as a black box), factored representations (in which a state is a set of attribute/value pairs), and structured representations (in which the world consists of objects and relations between them); a second code sketch after this list illustrates the distinction.
Coverage of planning goes into more depth on contingent planning in partially observable environments and includes a new approach to hierarchical planning.
New material on first-order probabilistic models is added, including open-universe models for cases where there is uncertainty as to what objects exist.
The introductory machine-learning chapter is completely rewritten, stressing a wider variety of more modern learning algorithms and placing them on a firmer theoretical footing.
Expanded coverage of Web search and information extraction, and of techniques for learning from very large data sets.
20% of the citations in this edition are to works published after 2003.
Approximately 20% of the material is brand new. The remaining 80% reflects older work but is largely rewritten to present a more unified picture of the field.
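
The belief-state idea above can be made concrete with a short sketch. The following is a minimal, hypothetical Python example (not taken from the book or its code repository): the toy vacuum world, the state encoding, and the function names are assumptions chosen only for illustration.

```python
# Minimal belief-state / state-estimation sketch (hypothetical toy example).
# A belief state is a set of possible world states; state estimation keeps
# it up to date after each action and percept.

def predict(belief, action, results):
    """After a nondeterministic action, any outcome of any possible state is possible."""
    return {s2 for s in belief for s2 in results(s, action)}

def update(belief, percept, perceive):
    """Keep only the states consistent with the observed percept."""
    return {s for s in belief if perceive(s) == percept}

# Toy world: two squares A and B; a state is (robot location, dirt on A, dirt on B).
def results(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Suck":
        # Nondeterministic: sucking may fail and leave the square dirty.
        cleaned = (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
        return {state, cleaned}
    return {state}

def perceive(state):
    loc, dirt_a, dirt_b = state
    return (loc, dirt_a if loc == "A" else dirt_b)   # senses only the local square

belief = {("A", True, True), ("A", True, False)}     # unsure whether B is dirty
belief = predict(belief, "Suck", results)            # prediction step
belief = update(belief, ("A", False), perceive)      # filtering step
print(belief)  # A is now known to be clean; uncertainty about B remains
```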
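
A second tiny sketch illustrates the three kinds of representation mentioned in the list above; the variable names and vacuum-world details are likewise my own illustrative assumptions, not material from the book.

```python
# Atomic: the state is an opaque label; all an algorithm can do is compare states.
atomic = "state_42"
print(atomic == "state_42")                                # identity tests only

# Factored: the state is a set of attribute/value pairs, so goals, actions,
# and heuristics can refer to individual attributes.
factored = {"location": "A", "dirt_A": False, "dirt_B": True}
print(not factored["dirt_A"] and not factored["dirt_B"])   # goal test: all clean?

# Structured: the state consists of objects and relations among them, so facts
# can mention arbitrarily many objects without enumerating whole states.
structured = {("At", "robot1", "squareA"), ("Dirty", "squareB")}
print(any(fact[0] == "Dirty" for fact in structured))      # is anything dirty?
```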


Table of Contents

I. Artificial Intelligence

  1. Introduction
    1.1 What is AI?
    1.2 The Foundations of Artificial Intelligence
    1.3 The History of Artificial Intelligence
    1.4 The State of the Art
    1.5 Summary, Bibliographical and Historical Notes, Exercises
  2. Intelligent Agents
    2.1 Agents and Environments
    2.2 Good Behavior: The Concept of Rationality
    2.3 The Nature of Environments
    2.4 The Structure of Agents
    2.5 Summary, Bibliographical and Historical Notes, Exercises
II. Problem-solving
  3. Solving Problems by Searching
    3.1 Problem-Solving Agents
    3.2 Example Problems
    3.3 Searching for Solutions
    3.4 Uninformed Search Strategies
    3.5 Informed (Heuristic) Search Strategies
    3.6 Heuristic Functions
    3.7 Summary, Bibliographical and Historical Notes, Exercises
  4. Beyond Classical Search
    4.1 Local Search Algorithms and Optimization Problems
    4.2 Local Search in Continuous Spaces
    4.3 Searching with Nondeterministic Actions
    4.4 Searching with Partial Observations
    4.5 Online Search Agents and Unknown Environments
    4.6 Summary, Bibliographical and Historical Notes, Exercises
  5. Adversarial Search
    5.1 Games
    5.2 Optimal Decisions in Games
    5.3 Alpha-Beta Pruning
    5.4 Imperfect Real-Time Decisions
    5.5 Stochastic Games
    5.6 Partially Observable Games
    5.7 State-of-the-Art Game Programs
    5.8 Alternative Approaches
    5.9 Summary, Bibliographical and Historical Notes, Exercises
  6. Constraint Satisfaction Problems
    6.1 Defining Constraint Satisfaction Problems
    6.2 Constraint Propagation: Inference in CSPs
    6.3 Backtracking Search for CSPs
    6.4 Local Search for CSPs
    6.5 The Structure of Problems
    6.6 Summary, Bibliographical and Historical Notes, Exercises
III. Knowledge, Reasoning, and Planning
  7. Logical Agents
    7.1 Knowledge-Based Agents
    7.2 The Wumpus World
    7.3 Logic
    7.4 Propositional Logic: A Very Simple Logic
    7.5 Propositional Theorem Proving
    7.6 Effective Propositional Model Checking
    7.7 Agents Based on Propositional Logic
    7.8 Summary, Bibliographical and Historical Notes, Exercises
  8. First-Order Logic
    8.1 Representation Revisited
    8.2 Syntax and Semantics of First-Order Logic
    8.3 Using First-Order Logic
    8.4 Knowledge Engineering in First-Order Logic
    8.5 Summary, Bibliographical and Historical Notes, Exercises
  9. Inference in First-Order Logic
    9.1 Propositional vs. First-Order Inference
    9.2 Unification and Lifting
    9.3 Forward Chaining
    9.4 Backward Chaining
    9.5 Resolution
    9.6 Summary, Bibliographical and Historical Notes, Exercises
  10. Classical Planning
    10.1 Definition of Classical Planning
    10.2 Algorithms for Planning as State-Space Search
    10.3 Planning Graphs
    10.4 Other Classical Planning Approaches
    10.5 Analysis of Planning Approaches
    10.6 Summary, Bibliographical and Historical Notes, Exercises
  11. Planning and Acting in the Real World
    11.1 Time, Schedules, and Resources
    11.2 Hierarchical Planning
    11.3 Planning and Acting in Nondeterministic Domains
    11.4 Multiagent Planning
    11.5 Summary, Bibliographical and Historical Notes, Exercises
  12. Knowledge Representation
    12.1 Ontological Engineering
    12.2 Categories and Objects
    12.3 Events
    12.4 Mental Events and Mental Objects
    12.5 Reasoning Systems for Categories
    12.6 Reasoning with Default Information
    12.7 The Internet Shopping World
    12.8 Summary, Bibliographical and Historical Notes, Exercises
IV. Uncertain Knowledge and Reasoning
  13. Quantifying Uncertainty
    13.1 Acting under Uncertainty
    13.2 Basic Probability Notation
    13.3 Inference Using Full Joint Distributions
    13.4 Independence
    13.5 Bayes’ Rule and Its Use
    13.6 The Wumpus World Revisited
    13.7 Summary, Bibliographical and Historical Notes, Exercises
  14. Probabilistic Reasoning
    14.1 Representing Knowledge in an Uncertain Domain
    14.2 The Semantics of Bayesian Networks
    14.3 Efficient Representation of Conditional Distributions
    14.4 Exact Inference in Bayesian Networks
    14.5 Approximate Inference in Bayesian Networks
    14.6 Relational and First-Order Probability Models
    14.7 Other Approaches to Uncertain Reasoning
    14.8 Summary, Bibliographical and Historical Notes, Exercises
  15. Probabilistic Reasoning over Time
    15.1 Time and Uncertainty
    15.2 Inference in Temporal Models
    15.3 Hidden Markov Models
    15.4 Kalman Filters
    15.5 Dynamic Bayesian Networks
    15.6 Keeping Track of Many Objects
    15.7 Summary, Bibliographical and Historical Notes, Exercises
  16. Making Simple Decisions
    16.1 Combining Beliefs and Desires under Uncertainty
    16.2 The Basis of Utility Theory
    16.3 Utility Functions
    16.4 Multiattribute Utility Functions
    16.5 Decision Networks
    16.6 The Value of Information
    16.7 Decision-Theoretic Expert Systems
    16.8 Summary, Bibliographical and Historical Notes, Exercises
  17. Making Complex Decisions
    17.1 Sequential Decision Problems
    17.2 Value Iteration
    17.3 Policy Iteration
    17.4 Partially Observable MDPs
    17.5 Decisions with Multiple Agents: Game Theory
    17.6 Mechanism Design
    17.7 Summary, Bibliographical and Historical Notes, Exercises
V. Learning
  18. Learning from Examples
    18.1 Forms of Learning
    18.2 Supervised Learning
    18.3 Learning Decision Trees
    18.4 Evaluating and Choosing the Best Hypothesis
    18.5 The Theory of Learning
    18.6 Regression and Classification with Linear Models
    18.7 Artificial Neural Networks
    18.8 Nonparametric Models
    18.9 Support Vector Machines
    18.10 Ensemble Learning
    18.11 Practical Machine Learning
    18.12 Summary, Bibliographical and Historical Notes, Exercises
  19. Knowledge in Learning
    19.1 A Logical Formulation of Learning
    19.2 Knowledge in Learning
    19.3 Explanation-Based Learning
    19.4 Learning Using Relevance Information
    19.5 Inductive Logic Programming
    19.6 Summary, Bibliographical and Historical Notes, Exercises
  20. Learning Probabilistic Models
    20.1 Statistical Learning
    20.2 Learning with Complete Data
    20.3 Learning with Hidden Variables: The EM Algorithm
    20.4 Summary, Bibliographical and Historical Notes, Exercises
  21. Reinforcement Learning
    21.1 Introduction
    21.2 Passive Reinforcement Learning
    21.3 Active Reinforcement Learning
    21.4 Generalization in Reinforcement Learning
    21.5 Policy Search
    21.6 Applications of Reinforcement Learning
    21.7 Summary, Bibliographical and Historical Notes, Exercises
VI. Communicating, Perceiving, and Acting
  22. Natural Language Processing
    22.1 Language Models
    22.2 Text Classification
    22.3 Information Retrieval
    22.4 Information Extraction
    22.5 Summary, Bibliographical and Historical Notes, Exercises
  23. Natural Language for Communication
    23.1 Phrase Structure Grammars
    23.2 Syntactic Analysis (Parsing)
    23.3 Augmented Grammars and Semantic Interpretation
    23.4 Machine Translation
    23.5 Speech Recognition
    23.6 Summary, Bibliographical and Historical Notes, Exercises
  24. Perception
    24.1 Image Formation
    24.2 Early Image-Processing Operations
    24.3 Object Recognition by Appearance
    24.4 Reconstructing the 3D World
    24.5 Object Recognition from Structural Information
    24.6 Using Vision
    24.7 Summary, Bibliographical and Historical Notes, Exercises
  25. Robotics
    25.1 Introduction
    25.2 Robot Hardware
    25.3 Robotic Perception
    25.4 Planning to Move
    25.5 Planning Uncertain Movements
    25.6 Moving
    25.7 Robotic Software Architectures
    25.8 Application Domains
    25.9 Summary, Bibliographical and Historical Notes, Exercises
VII. Conclusions
  26. Philosophical Foundations
    26.1 Weak AI: Can Machines Act Intelligently?
    26.2 Strong AI: Can Machines Really Think?
    26.3 The Ethics and Risks of Developing Artificial Intelligence
    26.4 Summary, Bibliographical and Historical Notes, Exercises
  27. AI: The Present and Future
    27.1 Agent Components
    27.2 Agent Architectures
    27.3 Are We Going in the Right Direction?
    27.4 What If AI Does Succeed?
Appendices
  A. Mathematical Background
    A.1 Complexity Analysis and O() Notation
    A.2 Vectors, Matrices, and Linear Algebra
    A.3 Probability Distributions
  B. Notes on Languages and Algorithms
    B.1 Defining Languages with Backus-Naur Form (BNF)
    B.2 Describing Algorithms with Pseudocode
    B.3 Online Help
Bibliography
Index



Related Books

Machine Learning: An Applied Perspective (機器學習:應用視角)

Author: David Forsyth; translated by 常虹, 王樹徽, 莊福振, 楊雙, et al.

2016-05-01

Data Structures and Algorithms Using Python and C++ (數據結構和算法 Python 和 C++ 語言描述)

Authors: David M. Reed, John Zelle; translated by 肖鑑明

2016-05-01

Discovering Computer Science: Interdisciplinary Problems, Principles, and Python Programming (Paperback)

Author: Jessen Havill

2016-05-01