Books in the Synthesis Lectures on Artificial Intelligence and Machine Learning series

  • by Yang Liu, Han Yu, Qiang Yang, et al.
    764,-

    How is it possible to allow multiple data owners to collaboratively train and use a shared prediction model while keeping all the local training data private? Traditional machine learning approaches need to combine all data at one location, typically a data center, which may very well violate the laws on user privacy and data confidentiality. Today, many parts of the world demand that technology companies treat user data carefully according to user-privacy laws. The European Union's General Data Protection Regulation (GDPR) is a prime example. In this book, we describe how federated machine learning addresses this problem with novel solutions combining distributed machine learning, cryptography and security, and incentive mechanism design based on economic principles and game theory. We explain different types of privacy-preserving machine learning solutions and their technological backgrounds, and highlight some representative practical use cases. We show how federated learning can become the foundation of next-generation machine learning that caters to technological and societal needs for responsible AI development and application.
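
    As a rough illustration of the federated idea described above, the following sketch (a toy example with synthetic data and a plain least-squares model, not the book's own code) lets three clients take local gradient steps on private data while a central server only averages their model parameters.

      # Minimal federated-averaging (FedAvg-style) sketch on invented data.
      import numpy as np

      rng = np.random.default_rng(0)
      true_w = np.array([2.0, -1.0])

      # Three clients, each holding private local data that never leaves the client.
      clients = []
      for _ in range(3):
          X = rng.normal(size=(50, 2))
          y = X @ true_w + 0.1 * rng.normal(size=50)
          clients.append((X, y))

      w = np.zeros(2)                          # global model kept by the server
      for _ in range(20):                      # communication rounds
          local_models = []
          for X, y in clients:
              w_local = w.copy()
              for _ in range(5):               # a few local gradient steps
                  grad = 2 * X.T @ (X @ w_local - y) / len(y)
                  w_local -= 0.05 * grad
              local_models.append(w_local)
          w = np.mean(local_models, axis=0)    # server aggregates parameters, not data

      print("federated estimate:", np.round(w, 3))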

  • by Patrik Haslum
    693,-

    Planning is the branch of Artificial Intelligence (AI) that seeks to automate reasoning about plans, most importantly the reasoning that goes into formulating a plan to achieve a given goal in a given situation. AI planning is model-based: a planning system takes as input a description (or model) of the initial situation, the actions available to change it, and the goal condition, and outputs a plan composed of those actions that will accomplish the goal when executed from the initial situation. The Planning Domain Definition Language (PDDL) is a formal knowledge representation language designed to express planning models. Developed by the planning research community as a means of facilitating systems comparison, it has become a de facto standard input language of many planning systems, although it is not the only modelling language for planning. Several variants of PDDL have emerged that capture planning problems of different natures and complexities, with a focus on deterministic problems. The purpose of this book is two-fold. First, we present a unified and current account of PDDL, covering the subsets of PDDL that express discrete, numeric, temporal, and hybrid planning. Second, we want to introduce readers to the art of modelling planning problems in this language, through educational examples that demonstrate how PDDL is used to model realistic planning problems. The book is intended for advanced students and researchers in AI who want to dive into the mechanics of AI planning, as well as those who want to be able to use AI planning systems without an in-depth explanation of the algorithms and implementation techniques they use.
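
    For a flavour of the language, here is a minimal, invented STRIPS-style domain and problem (the names simple-move, at, and connected are illustrative and not taken from the book), written as Python strings and saved to files that a PDDL planner could take as input.

      # A tiny, hypothetical PDDL model: an agent moves between connected locations.
      DOMAIN = """
      (define (domain simple-move)
        (:requirements :strips)
        (:predicates (at ?x) (connected ?x ?y))
        (:action move
          :parameters (?from ?to)
          :precondition (and (at ?from) (connected ?from ?to))
          :effect (and (at ?to) (not (at ?from)))))
      """

      PROBLEM = """
      (define (problem reach-c)
        (:domain simple-move)
        (:objects a b c)
        (:init (at a) (connected a b) (connected b c))
        (:goal (at c)))
      """

      for name, text in [("domain.pddl", DOMAIN), ("problem.pddl", PROBLEM)]:
          with open(name, "w") as f:
              f.write(text)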

  • by Peter Stone
    475,-

    Robotics technology has recently advanced to the point of being widely accessible for relatively low-budget research, as well as for graduate, undergraduate, and even secondary and primary school education. This lecture provides an example of how to productively use a cutting-edge advanced robotics platform for education and research by providing a detailed case study with the Sony AIBO robot, a vision-based legged robot. The case study used for this lecture is the UT Austin Villa RoboCup Four-Legged Team. This lecture describes both the development process and the technical details of its end result. The main contributions of this lecture are (i) a roadmap for new classes and research groups interested in intelligent autonomous robotics who are starting from scratch with a new robot, and (ii) documentation of the algorithms behind our own approach on the AIBOs with the goal of making them accessible for use on other vision-based and/or legged robot platforms.

  • by Kevin Leyton-Brown
    431,-

    Game theory is the mathematical study of interaction among independent, self-interested agents. The audience for game theory has grown dramatically in recent years, and now spans disciplines as diverse as political science, biology, psychology, economics, linguistics, sociology, and computer science, among others. What has been missing is a relatively short introduction to the field covering the common basis that anyone with a professional interest in game theory is likely to require. Such a text would minimize notation, ruthlessly focus on essentials, and yet not sacrifice rigor. This Synthesis Lecture aims to fill this gap by providing a concise and accessible introduction to the field. It covers the main classes of games, their representations, and the main concepts used to analyze them.
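
    As a small worked example of the kind of analysis the book covers, the sketch below enumerates the pure-strategy Nash equilibria of a prisoner's dilemma with illustrative payoffs.

      # Find pure-strategy Nash equilibria of a 2x2 game (invented payoffs).
      from itertools import product

      actions = ["cooperate", "defect"]
      # payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
      payoffs = {
          ("cooperate", "cooperate"): (-1, -1),
          ("cooperate", "defect"):    (-3,  0),
          ("defect",    "cooperate"): ( 0, -3),
          ("defect",    "defect"):    (-2, -2),
      }

      def is_nash(a1, a2):
          # Neither player can gain by unilaterally deviating.
          best1 = all(payoffs[(a1, a2)][0] >= payoffs[(d, a2)][0] for d in actions)
          best2 = all(payoffs[(a1, a2)][1] >= payoffs[(a1, d)][1] for d in actions)
          return best1 and best2

      print([p for p in product(actions, actions) if is_nash(*p)])
      # [('defect', 'defect')] -- mutual defection is the only pure equilibrium.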

  • by Sridhar Mahadevan
    475,-

    Representations are at the heart of artificial intelligence (AI). This book is devoted to the problem of representation discovery: how can an intelligent system construct representations from its experience? Representation discovery re-parameterizes the state space - prior to the application of information retrieval, machine learning, or optimization techniques - facilitating later inference processes by constructing new task-specific bases adapted to the state space geometry. This book presents a general approach to representation discovery using the framework of harmonic analysis, in particular Fourier and wavelet analysis. Biometric compression methods, the compact disc, the computerized axial tomography (CAT) scanner in medicine, JPEG compression, and spectral analysis of time-series data are among the many applications of classical Fourier and wavelet analysis. A central goal of this book is to show that these analytical tools can be generalized from their usual setting in (infinite-dimensional) Euclidean spaces to discrete (finite-dimensional) spaces typically studied in many subfields of AI. Generalizing harmonic analysis to discrete spaces poses many challenges: a discrete representation of the space must be adaptively acquired; basis functions are not pre-defined, but rather must be constructed. Algorithms for efficiently computing and representing bases require dealing with the curse of dimensionality. However, the benefits can outweigh the costs, since the extracted basis functions outperform parametric bases as they often reflect the irregular shape of a particular state space. Case studies from computer graphics, information retrieval, machine learning, and state space planning are used to illustrate the benefits of the proposed framework, and the challenges that remain to be addressed. Representation discovery is an actively developing field, and the author hopes this book will encourage other researchers to explore this exciting area of research. Table of Contents: Overview / Vector Spaces / Fourier Bases on Graphs / Multiscale Bases on Graphs / Scaling to Large Spaces / Case Study: State-Space Planning / Case Study: Computer Graphics / Case Study: Natural Language / Future Directions
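
    A minimal sketch of the graph-based Fourier idea described above, assuming a toy path graph: the eigenvectors of its graph Laplacian act as basis functions, ordered from smooth (low-frequency) to oscillatory (high-frequency).

      # Build a small path graph and compute its Laplacian eigenbasis.
      import numpy as np

      n = 8
      A = np.zeros((n, n))
      for i in range(n - 1):                # path graph: i -- i+1
          A[i, i + 1] = A[i + 1, i] = 1

      D = np.diag(A.sum(axis=1))
      L = D - A                             # combinatorial graph Laplacian

      eigvals, eigvecs = np.linalg.eigh(L)
      print(np.round(eigvals, 3))           # "frequencies" of the discrete space
      print(np.round(eigvecs[:, :3], 3))    # the three smoothest basis vectors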

  • by Michael Thielscher
    431,-

    Artificial systems that think and behave intelligently are one of the most exciting and challenging goals of Artificial Intelligence. Action Programming is the art and science of devising high-level control strategies for autonomous systems which employ a mental model of their environment and which reason about their actions as a means to achieve their goals. Applications of this programming paradigm include autonomous software agents, mobile robots with high-level reasoning capabilities, and General Game Playing. These lecture notes give an in-depth introduction to the current state-of-the-art in action programming. The main topics are knowledge representation for actions, procedural action programming, planning, agent logic programs, and reactive, behavior-based agents. The only prerequisite for understanding the material in these lecture notes is some general programming experience and basic knowledge of classical first-order logic. Table of Contents: Introduction / Mathematical Preliminaries / Procedural Action Programs / Action Programs and Planning / Declarative Action Programs / Reactive Action Programs / Suggested Further Reading

  • by Xiaojin Zhu
    475,-

    Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data are unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data are labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data are scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field. Table of Contents: Introduction to Statistical Machine Learning / Overview of Semi-Supervised Learning / Mixture Models and EM / Co-Training / Graph-Based Semi-Supervised Learning / Semi-Supervised Support Vector Machines / Human Semi-Supervised Learning / Theory and Outlook
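
    The sketch below illustrates one of the models mentioned above, self-training, on an invented 1-D dataset with a nearest-centroid base classifier and an arbitrary confidence threshold.

      # Self-training: repeatedly pseudo-label the unlabeled points the current
      # model is most confident about, then retrain on the enlarged labeled set.
      import numpy as np

      rng = np.random.default_rng(1)
      X = np.array([-2.0, 2.0])                       # two labeled seed points
      y = np.array([0, 1])
      pool = np.concatenate([rng.normal(-2, 0.5, 30), rng.normal(2, 0.5, 30)])

      for _ in range(10):
          if len(pool) == 0:
              break
          c0, c1 = X[y == 0].mean(), X[y == 1].mean() # class centroids
          d0, d1 = np.abs(pool - c0), np.abs(pool - c1)
          pred = (d1 < d0).astype(int)
          confident = np.abs(d0 - d1) > 1.0           # arbitrary margin threshold
          X = np.concatenate([X, pool[confident]])
          y = np.concatenate([y, pred[confident]])
          pool = pool[~confident]

      print("labeled set grew from 2 to", len(X), "points")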

  • by Pedro Domingos
    475,-

    Most subfields of computer science have an interface layer via which applications communicate with the infrastructure, and this is key to their success (e.g., the Internet in networking, the relational model in databases, etc.). So far this interface layer has been missing in AI. First-order logic and probabilistic graphical models each have some of the necessary features, but a viable interface layer requires combining both. Markov logic is a powerful new language that accomplishes this by attaching weights to first-order formulas and treating them as templates for features of Markov random fields. Most statistical models in wide use are special cases of Markov logic, and first-order logic is its infinite-weight limit. Inference algorithms for Markov logic combine ideas from satisfiability, Markov chain Monte Carlo, belief propagation, and resolution. Learning algorithms make use of conditional likelihood, convex optimization, and inductive logic programming. Markov logic has been successfully applied to problems in information extraction and integration, natural language processing, robot mapping, social networks, computational biology, and others, and is the basis of the open-source Alchemy system. Table of Contents: Introduction / Markov Logic / Inference / Learning / Extensions / Applications / Conclusion
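
    A minimal sketch of the core mechanism, using a single invented weighted ground formula: each possible world is scored by the exponentiated sum of the weights of the ground formulas it satisfies, and normalizing those scores yields a probability distribution over worlds.

      # Score possible worlds under one weighted formula: Smokes(A) => Cancer(A).
      from itertools import product
      import math

      atoms = ["Smokes(A)", "Cancer(A)"]
      weight = 1.5                                     # illustrative formula weight

      def satisfied(world):                            # the ground implication
          return (not world["Smokes(A)"]) or world["Cancer(A)"]

      scores = {}
      for values in product([False, True], repeat=len(atoms)):
          world = dict(zip(atoms, values))
          scores[values] = math.exp(weight * satisfied(world))

      Z = sum(scores.values())                         # partition function
      for values, s in scores.items():
          print(dict(zip(atoms, values)), round(s / Z, 3))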

  • by Nikos Vlassis
    475,-

    Multiagent systems is an expanding field that blends classical fields like game theory and decentralized control with modern fields like computer science and machine learning. This monograph provides a concise introduction to the subject, covering the theoretical foundations as well as more recent developments in a coherent and readable manner. The text is centered on the concept of an agent as decision maker. Chapter 1 is a short introduction to the field of multiagent systems. Chapter 2 covers the basic theory of single-agent decision making under uncertainty. Chapter 3 is a brief introduction to game theory, explaining classical concepts like Nash equilibrium. Chapter 4 deals with the fundamental problem of coordinating a team of collaborative agents. Chapter 5 studies the problem of multiagent reasoning and decision making under partial observability. Chapter 6 focuses on the design of protocols that are stable against manipulations by self-interested agents. Chapter 7 provides a short introduction to the rapidly expanding field of multiagent reinforcement learning. The material can be used for teaching a half-semester course on multiagent systems covering, roughly, one chapter per lecture.

  • by Csaba Szepesvári
    431,-

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, and survey a large number of state-of-the-art algorithms, followed by a discussion of their theoretical properties and limitations. Table of Contents: Markov Decision Processes / Value Prediction Problems / Control / For Further Exploration
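
    As a small illustration of this family of algorithms, the sketch below runs tabular Q-learning on an invented five-state chain problem (reward only at the right end).

      # Tabular Q-learning on a toy chain MDP: actions 0 = left, 1 = right.
      import numpy as np

      n_states, n_actions = 5, 2
      rng = np.random.default_rng(0)
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.9, 0.1

      def step(s, a):
          s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
          return s2, (1.0 if s2 == n_states - 1 else 0.0)

      for _ in range(500):                             # episodes
          s = 0
          for _ in range(20):
              a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
              s2, r = step(s, a)
              # Bootstrap on the greedy value of the next state.
              Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
              s = s2

      print(np.round(Q, 2))   # the "right" action should dominate in every state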

  • by Colin Campbell
    431,-

    Support Vector Machines have become a well-established tool within machine learning. They work well in practice and have now been used across a wide range of applications, from recognizing hand-written digits to face identification, text categorisation, bioinformatics, and database marketing. In this book we give an introductory overview of this subject. We start with a simple Support Vector Machine for performing binary classification before considering multi-class classification and learning in the presence of noise. We show that this framework can be extended to many other scenarios such as prediction with real-valued outputs, novelty detection and the handling of complex output structures such as parse trees. Finally, we give an overview of the main types of kernels which are used in practice and how to learn and make predictions from multiple types of input data. Table of Contents: Support Vector Machines for Classification / Kernel-based Models / Learning with Kernels
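
    A minimal sketch of the binary linear case described above, trained by stochastic subgradient descent on the hinge loss over invented 2-D data; this is an illustration of the idea, not the formulation used in the book.

      # Linear soft-margin SVM via hinge-loss subgradient descent (labels in {-1, +1}).
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal([2, 2], 0.5, (50, 2)),
                     rng.normal([-2, -2], 0.5, (50, 2))])
      y = np.array([1] * 50 + [-1] * 50)

      w, b, lam = np.zeros(2), 0.0, 0.01
      for t in range(1, 2001):
          i = rng.integers(len(y))
          eta = 1.0 / (lam * t)
          if y[i] * (X[i] @ w + b) < 1:        # hinge loss active: margin violated
              w = (1 - eta * lam) * w + eta * y[i] * X[i]
              b += eta * y[i]
          else:
              w = (1 - eta * lam) * w

      print("training accuracy:", (np.sign(X @ w + b) == y).mean())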

  • by Kristen Grauman
    475,-

    The visual recognition problem is central to computer vision research. From robotics to information retrieval, many desired applications demand the ability to identify and localize categories, places, and objects. This tutorial overviews computer vision algorithms for visual object recognition and image classification. We introduce primary representations and learning approaches, with an emphasis on recent advances in the field. The target audience consists of researchers or students working in AI, robotics, or vision who would like to understand what methods and representations are available for these problems. This lecture summarizes what is and isn't possible to do reliably today, and overviews key concepts that could be employed in systems requiring visual categorization. Table of Contents: Introduction / Overview: Recognition of Specific Objects / Local Features: Detection and Description / Matching Local Features / Geometric Verification of Matched Features / Example Systems: Specific-Object Recognition / Overview: Recognition of Generic Object Categories / Representations for Object Categories / Generic Object Detection: Finding and Scoring Candidates / Learning Generic Object Category Models / Example Systems: Generic Object Recognition / Other Considerations and Current Challenges / Conclusions
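
    A minimal sketch of the local-feature matching step, with random vectors standing in for real descriptors and a Lowe-style ratio test to discard ambiguous matches.

      # Match descriptors between two "images" and keep only confident matches.
      import numpy as np

      rng = np.random.default_rng(0)
      desc_a = rng.normal(size=(40, 128))                    # descriptors from image A
      desc_b = np.vstack([desc_a[:10] + 0.05 * rng.normal(size=(10, 128)),
                          rng.normal(size=(30, 128))])       # 10 planted matches + clutter

      matches = []
      for i, d in enumerate(desc_a):
          dists = np.linalg.norm(desc_b - d, axis=1)
          j1, j2 = np.argsort(dists)[:2]                     # nearest and second nearest
          if dists[j1] < 0.8 * dists[j2]:                    # ratio test
              matches.append((i, j1))

      print(len(matches), "confident matches")               # roughly the 10 planted pairs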

  • by Michael Wellman
    475,-

    Automated trading in electronic markets is one of the most common and consequential applications of autonomous software agents. Design of effective trading strategies requires thorough understanding of how market mechanisms operate, and appreciation of strategic issues that commonly manifest in trading scenarios. Drawing on research in auction theory and artificial intelligence, this book presents core principles of strategic reasoning that apply to market situations. The author illustrates trading strategy choices through examples of concrete market environments, such as eBay, as well as abstract market models defined by configurations of auctions and traders. Techniques for addressing these choices constitute essential building blocks for the design of trading strategies for rich market applications. The lecture assumes no prior background in game theory, auction theory, or artificial intelligence. Table of Contents: Introduction / Example: Bidding on eBay / Auction Fundamentals / Continuous Double Auctions / Interdependent Markets / Conclusion
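
    As a small illustration of the auction fundamentals the book covers, the sketch below implements a sealed-bid second-price (Vickrey) auction, the mechanism behind eBay-style proxy bidding; the bids are invented.

      # Second-price sealed-bid auction: highest bidder wins, pays the second bid.
      def second_price_auction(bids):
          """bids: dict mapping bidder name -> bid amount."""
          ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
          winner = ranked[0][0]
          price = ranked[1][1] if len(ranked) > 1 else 0.0
          return winner, price

      print(second_price_auction({"alice": 120, "bob": 100, "carol": 90}))
      # ('alice', 100): under this mechanism, bidding one's true value is a
      # dominant strategy, which is the kind of strategic fact trading agents exploit.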

  • by Edith Law
    431,-

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoyable game. Crowdsourcing marketplaces (e.g., Amazon Mechanical Turk) are human computation systems that coordinate workers to perform tasks in exchange for monetary rewards. In identity verification tasks, users perform computation in order to gain access to some online content; an example is reCAPTCHA, which leverages millions of users who solve CAPTCHAs every day to correct words in books that optical character recognition (OCR) programs fail to recognize with certainty. This book is aimed at achieving four goals: (1) defining human computation as a research area; (2) providing a comprehensive review of existing work; (3) drawing connections to a wide variety of disciplines, including AI, Machine Learning, HCI, Mechanism/Market Design and Psychology, and capturing their unique perspectives on the core research questions in human computation; and (4) suggesting promising research directions for the future. Table of Contents: Introduction / Human Computation Algorithms / Aggregating Outputs / Task Routing / Understanding Workers and Requesters / The Art of Asking Questions / The Future of Human Computation
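
    A minimal sketch of the output-aggregation step, using simple majority voting over invented worker answers; real systems often weight workers by estimated reliability instead.

      # Aggregate redundant crowd answers per task by majority vote.
      from collections import Counter

      answers = {
          "image_1": ["cat", "cat", "dog"],
          "image_2": ["dog", "dog", "dog"],
          "image_3": ["cat", "bird", "cat"],
      }

      aggregated = {task: Counter(votes).most_common(1)[0][0]
                    for task, votes in answers.items()}
      print(aggregated)   # {'image_1': 'cat', 'image_2': 'dog', 'image_3': 'cat'}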

  • by Derek Hoiem
    504,-

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physical scenes from images. The second section introduces representations for 3D object categories that account for the intrinsically 3D nature of objects and provide robustness to change in viewpoints. The third section discusses strategies to unite inference of scene geometry and object pose and identity into a coherent scene interpretation. Each section broadly surveys important ideas from cognitive science and artificial intelligence research, organizes and discusses key concepts and techniques from recent work in computer vision, and describes a few sample approaches in detail. Newcomers to computer vision will benefit from introductions to basic concepts, such as single-view geometry and image classification, while experts and novices alike may find inspiration from the book's organization and discussion of the most recent ideas in 3D scene understanding and 3D object recognition. Specific topics include: mathematics of perspective geometry; visual elements of the physical scene, structural 3D scene representations; techniques and features for image and region categorization; historical perspective, computational models, and datasets and machine learning techniques for 3D object recognition; inferences of geometrical attributes of objects, such as size and pose; and probabilistic and feature-passing approaches for contextual reasoning about 3D objects and scenes. Table of Contents: Background on 3D Scene Models / Single-view Geometry / Modeling the Physical Scene / Categorizing Images and Regions / Examples of 3D Scene Interpretation / Background on 3D Recognition / Modeling 3D Objects / Recognizing and Understanding 3D Objects / Examples of 2D 1/2 Layout Models / Reasoning about Objects and Scenes / Cascades of Classifiers / Conclusion and Future Directions
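
    A minimal sketch of the single-view perspective geometry that underlies much of this material: projecting 3-D points through a pinhole camera with an illustrative intrinsic matrix.

      # Pinhole projection: pixel = K [X Y Z]^T / Z (camera coordinates, Z forward).
      import numpy as np

      K = np.array([[800.0,   0.0, 320.0],     # focal length and principal point
                    [  0.0, 800.0, 240.0],
                    [  0.0,   0.0,   1.0]])

      points = np.array([[0.0, 0.0, 2.0],
                         [0.5, 0.0, 2.0],
                         [0.5, 0.0, 4.0]])     # same lateral offset, twice the depth

      proj = (K @ points.T).T
      pixels = proj[:, :2] / proj[:, 2:3]      # perspective division by depth
      print(np.round(pixels, 1))
      # The deeper point lands closer to the principal point: image position and
      # size depend on depth, which is exactly what 3-D interpretation must undo.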

  • by Georgios Chalkiadakis
    504,-

    Cooperative game theory is a branch of (micro-)economics that studies the behavior of self-interested agents in strategic settings where binding agreements among agents are possible. Our aim in this book is to present a survey of work on the computational aspects of cooperative game theory. We begin by formally defining transferable utility games in characteristic function form, and introducing key solution concepts such as the core and the Shapley value. We then discuss two major issues that arise when considering such games from a computational perspective: identifying compact representations for games, and the closely related problem of efficiently computing solution concepts for games. We survey several formalisms for cooperative games that have been proposed in the literature, including, for example, cooperative games defined on networks, as well as general compact representation schemes such as MC-nets and skill games. As a detailed case study, we consider weighted voting games: a widely-used and practically important class of cooperative games that inherently have a natural compact representation. We investigate the complexity of solution concepts for such games, and generalizations of them. We briefly discuss games with non-transferable utility and partition function games. We then overview algorithms for identifying welfare-maximizing coalition structures and methods used by rational agents to form coalitions (even under uncertainty), including bargaining algorithms. We conclude by considering some developing topics, applications, and future research directions.
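
    As a small worked example, the sketch below computes exact Shapley values for an invented four-player weighted voting game by enumerating all player orderings and counting how often each player is pivotal.

      # Shapley values of a weighted voting game (weights and quota are illustrative).
      from itertools import permutations

      weights = {"a": 4, "b": 3, "c": 2, "d": 1}
      quota = 6
      players = list(weights)

      def wins(coalition):
          return sum(weights[p] for p in coalition) >= quota

      shapley = {p: 0.0 for p in players}
      orderings = list(permutations(players))
      for order in orderings:
          coalition = []
          for p in order:
              was_winning = wins(coalition)
              coalition.append(p)
              if wins(coalition) and not was_winning:   # p is pivotal here
                  shapley[p] += 1.0

      print({p: round(v / len(orderings), 3) for p, v in shapley.items()})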

  • by Mausam
    504,-

    Markov Decision Processes (MDPs) are widely popular in Artificial Intelligence for modeling sequential decision-making scenarios with probabilistic dynamics. They are the framework of choice when designing an intelligent agent that needs to act for long periods of time in an environment where its actions could have uncertain outcomes. MDPs are actively researched in two related subareas of AI, probabilistic planning and reinforcement learning. Probabilistic planning assumes known models for the agent's goals and domain dynamics, and focuses on determining how the agent should behave to achieve its objectives. On the other hand, reinforcement learning additionally learns these models based on the feedback the agent gets from the environment. This book provides a concise introduction to the use of MDPs for solving probabilistic planning problems, with an emphasis on the algorithmic perspective. It covers the whole spectrum of the field, from the basics to state-of-the-art optimal and approximation algorithms. We first describe the theoretical foundations of MDPs and the fundamental solution techniques for them. We then discuss modern optimal algorithms based on heuristic search and the use of structured representations. A major focus of the book is on the numerous approximation schemes for MDPs that have been developed in the AI literature. These include determinization-based approaches, sampling techniques, heuristic functions, dimensionality reduction, and hierarchical representations. Finally, we briefly introduce several extensions of the standard MDP classes that model and solve even more complex planning problems. Table of Contents: Introduction / MDPs / Fundamental Algorithms / Heuristic Search Algorithms / Symbolic Algorithms / Approximation Algorithms / Advanced Notes
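
    A minimal sketch of one of the fundamental solution techniques, value iteration, on an invented two-state, two-action MDP.

      # Value iteration: repeat Bellman optimality backups until convergence.
      import numpy as np

      # P[a][s, s'] = transition probability; R[a][s] = expected immediate reward.
      P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
           1: np.array([[0.5, 0.5], [0.1, 0.9]])}
      R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
      gamma = 0.95

      V = np.zeros(2)
      for _ in range(1000):
          Q = np.stack([R[a] + gamma * P[a] @ V for a in (0, 1)])
          V_new = Q.max(axis=0)
          if np.max(np.abs(V_new - V)) < 1e-8:
              break
          V = V_new

      print("optimal values:", np.round(V, 3), "greedy policy:", Q.argmax(axis=0))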

  • by Burr Settles
    475,-

    The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well-motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms which have been organized into four broad categories, or "query selection frameworks." We also touch on some of the theoretical foundations of active learning, and conclude with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address these open challenges and opportunities. Table of Contents: Automating Inquiry / Uncertainty Sampling / Searching Through the Hypothesis Space / Minimizing Expected Error and Variance / Exploiting Structure in Data / Theory / Practical Considerations
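
    A minimal sketch of pool-based uncertainty sampling with an invented 1-D pool: a simple logistic model stands in for the learner, and the oracle function stands in for a human annotator.

      # Repeatedly query the unlabeled point whose predicted probability is closest to 0.5.
      import numpy as np

      rng = np.random.default_rng(0)
      X_pool = np.sort(rng.uniform(-3, 3, 40))          # unlabeled pool
      oracle = lambda x: int(x > 0)                     # stand-in for a human annotator

      X, y = np.array([-2.5, 2.5]), np.array([0, 1])    # two seed labels
      w, b = 0.0, 0.0
      for _ in range(5):                                # five query rounds
          for _ in range(200):                          # refit logistic model by gradient descent
              p = 1 / (1 + np.exp(-(w * X + b)))
              w -= 0.1 * np.mean((p - y) * X)
              b -= 0.1 * np.mean(p - y)
          p_pool = 1 / (1 + np.exp(-(w * X_pool + b)))
          i = np.argmin(np.abs(p_pool - 0.5))           # most uncertain instance
          X, y = np.append(X, X_pool[i]), np.append(y, oracle(X_pool[i]))
          X_pool = np.delete(X_pool, i)

      print("queried points cluster near the decision boundary at ~", round(float(-b / w), 2))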

  • by Martin Gebser
    574,-

    Answer Set Programming (ASP) is a declarative problem solving approach, initially tailored to modeling problems in the area of Knowledge Representation and Reasoning (KRR). More recently, its attractive combination of a rich yet simple modeling language with high-performance solving capacities has sparked interest in many other areas even beyond KRR. This book presents a practical introduction to ASP, aiming at using ASP languages and systems for solving application problems. Starting from the essential formal foundations, it introduces ASP's solving technology, modeling language and methodology, while illustrating the overall solving process by practical examples. Table of Contents: List of Figures / List of Tables / Motivation / Introduction / Basic modeling / Grounding / Characterizations / Solving / Systems / Advanced modeling / Conclusions
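
    For a flavour of the modeling language, here is an invented graph 3-coloring encoding in clingo-style syntax, kept as a Python string and written to a file that an ASP solver such as clingo could take as input.

      # A hypothetical ASP encoding: one choice rule plus one integrity constraint.
      PROGRAM = """
      node(a;b;c;d).  edge(a,b). edge(b,c). edge(c,d). edge(d,a).
      color(red;green;blue).

      % each node gets exactly one color (choice rule)
      1 { assign(N,C) : color(C) } 1 :- node(N).

      % adjacent nodes must not share a color (integrity constraint)
      :- edge(X,Y), assign(X,C), assign(Y,C).

      #show assign/2.
      """

      with open("coloring.lp", "w") as f:
          f.write(PROGRAM)
      # Running a solver on coloring.lp would enumerate the stable models,
      # each of which corresponds to a valid 3-coloring of the graph.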

  • by Beatriz López
    431,-

    Case-based reasoning is a methodology with a long tradition in artificial intelligence that brings together reasoning and machine learning techniques to solve problems based on past experiences or cases. Given a problem to be solved, reasoning involves the use of methods to retrieve similar past cases in order to reuse their solution for the problem at hand. Once the problem has been solved, learning methods can be applied to improve the knowledge based on past experiences. In spite of being a broad methodology applied in industry and services, case-based reasoning has often been forgotten in both artificial intelligence and machine learning books. The aim of this book is to present a concise introduction to case-based reasoning, providing the essential building blocks for the design of case-based reasoning systems, as well as to bring together the main research lines in this field to encourage students to solve current CBR challenges.
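
    A minimal sketch of the retrieve-and-reuse steps, with an invented case base of feature vectors and stored solutions.

      # Retrieve the most similar past case(s) and reuse their stored solutions.
      import numpy as np

      case_features = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
      case_solutions = ["solution_A", "solution_B", "solution_C"]

      def retrieve_and_reuse(query, k=1):
          dists = np.linalg.norm(case_features - query, axis=1)   # similarity by distance
          nearest = np.argsort(dists)[:k]
          return [case_solutions[i] for i in nearest]

      print(retrieve_and_reuse(np.array([0.85, 0.2])))   # ['solution_A']
      # The retain step would add the solved query back to the case base,
      # which is how a CBR system learns from new experiences.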
