The apparent authenticity of published data can be as dangerous as it is inviting. This guide points out the main dangers (sampling errors, measurement errors, and invalid or unreliable procedures) and analyzes the various ways in which these problems arise, giving numerous examples. Jacob discusses ways to solve these problems and, when no solutions seem available, suggests appropriate disclaimers. An appendix critically evaluates several useful data sets. This monograph also serves as a general reference volume on how to avoid the pitfalls that researchers often overlook. 'Its subject is one that should find a place in many more introductory social statistics and research methods texts than it actually does.' -- The Statistician, Vol 35, 1986
Measurement models developed by Georg Rasch are renowned in the social sciences. In this introduction, the focus is on the simple logistic model, which is one of the most elementary and commonly used. The author explains the general principles behind the models, and demonstrates their procedures for measurement. Comparisons are made with other more widely-used models. Throughout the text, an example from a personality inventory is used to provide continuity as the statistical arguments are presented and procedures explained.
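For readers who want a concrete reference point, the simple logistic model is conventionally written as the probability that person n succeeds on item i, given person ability and item difficulty parameters (standard notation, not necessarily the author's):

$$P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}$$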
Although the growth of longitudinal data archives is one of the most dramatic developments in the behavioural sciences, the effective use of these files has long been hindered by a lack of understanding of the relation between research questions and archival data - until now. The authors of this volume illustrate how to use the model-fitting process to select and fit the right data set to a particular research problem. Beginning with an introduction to the general issues in working with archival data, the book takes the reader through the steps of recasting data and question, using substantive examples from the life course, such as temporal patterns of physical and emotional health as well as pathways to retirement.
Presents a discussion of fundamental statistical modeling concepts in a multiple regression framework. This book provides an introduction to the GLM, exponential family distributions, and maximum likelihood estimation. It is useful for social science researchers who would like to learn about advanced techniques.
Examines ways to analyze surveys, focusing on the problems of weights and design effects. This edition incorporates current practice in analyzing complex survey data, introduces an analytic approach for categorical data analysis, reviews new software, and provides an introduction to model-based analysis that can be useful in analyzing social surveys.
Research Designs is a clear, compact introduction to the principles of experimental and non-experimental design, written especially for social scientists and their students. Spector covers major designs including: single group designs; pre-test/post-test designs; factorial designs; hierarchical designs; multivariate designs; the Solomon four group design; panel designs; and designs with concomitant variables. 'Bearing in mind the brevity (and hence cheapness) of the book, its coverage is extremely wide-ranging...As long as a basic grounding is achieved beforehand, attending to Spector's advice and comments should help budding researchers become aware of the issues and problems involved in practical research...the small outlay involved in buying the book will neither be regretted nor wasted.' -- Quality and Quantity, Vol 16, 1982
Provides a comprehensive introduction to the range of polytomous models available within item response theory. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements, as they are described.
Survey Questions is a highly readable guide to the principles of writing survey questions. The authors review recent research on survey questions, consider the lore of professional experience, and finally present those findings that have the strongest implications for writing such questions.
In this volume the underlying logic and practice of maximum likelihood (ML) estimation are made clear through a general modelling framework that utilizes the tools of ML methods. This framework offers readers a flexible modelling strategy, since it accommodates cases from the simplest linear models to the most complex nonlinear models that link a system of endogenous and exogenous variables with non-normal distributions. Using examples to illustrate the techniques of finding ML estimators and estimates, Eliason discusses: what properties are desirable in an estimator; basic techniques for finding ML solutions; the general form of the covariance matrix for ML estimates; the sampling distribution of ML estimators; the application of ML in the normal distribution as well as in other useful distributions; and some helpful illustrations of likelihoods.
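As a concrete illustration of the ML logic, here is a minimal sketch assuming Python with NumPy and SciPy (the data and distribution are simulated, not Eliason's examples): the ML estimates are the parameter values that minimize the negative log-likelihood of the observed sample.

```python
# A minimal sketch: recover the mean and standard deviation of a
# normal sample by minimizing the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # simulated sample

def neg_log_likelihood(params, x):
    mu, sigma = params
    if sigma <= 0:                 # keep the scale parameter valid
        return np.inf
    return -np.sum(norm.logpdf(x, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(data,),
                  method="Nelder-Mead")
print(result.x)                    # ML estimates, close to (5.0, 2.0)
```

The same recipe (swap in a different log-density and let an optimizer do the search) extends to the non-normal distributions the volume covers.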
Bootstrapping, a computational nonparametric technique for 're-sampling', enables researchers to draw conclusions about the characteristics of a population strictly from the existing sample rather than by making parametric assumptions about the estimator. Using real data examples, from per capita personal income to median preference differences between legislative committee members and the entire legislature, Mooney and Duval discuss how to apply bootstrapping when the underlying sampling distribution of a statistic cannot be assumed normal, as well as when the sampling distribution has no analytic solution. In addition, they show the advantages and limitations of four bootstrap confidence interval methods: normal approximation, percentile, bias-corrected percentile, and percentile-t. The authors conclude with a convenient summary of how to apply this computer-intensive methodology using various available software packages.
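A minimal sketch of the percentile method, one of the four interval methods named above, assuming Python with NumPy and simulated rather than the authors' data:

```python
# Percentile bootstrap: resample the sample with replacement,
# recompute the statistic each time, and read the confidence
# interval off the resulting bootstrap distribution.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.exponential(scale=3.0, size=100)   # skewed, non-normal data

boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(5000)
])
ci_low, ci_high = np.percentile(boot_medians, [2.5, 97.5])
print(f"95% percentile CI for the median: ({ci_low:.2f}, {ci_high:.2f})")
```

The other three methods differ in how they convert the bootstrap distribution into interval endpoints.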
This excellent introduction to stochastic parameter regression models is more advanced and technically difficult than other papers in this series. These models allow relationships to vary through time, rather than requiring them to be fixed, without forcing the analyst to specify and analyze the causes of the time-varying relationships. This volume will be most useful to those with a good working knowledge of standard regression models and who wish to understand methods which deal with relationships that vary slowly over time, but for which the exact causes of variation cannot be identified.
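For orientation, one common formulation (not necessarily the one adopted in this volume) lets the coefficient follow a random walk, so the relationship drifts slowly over time:

$$y_t = x_t\beta_t + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + \nu_t$$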
Feiring provides a well-written introduction to the techniques and applications of linear programming. He shows readers how to model, solve, and interpret appropriate linear programming problems. His carefully chosen examples provide a foundation for mathematical modelling and demonstrate the wide scope of the techniques.
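As a small worked instance of the model-solve-interpret cycle (a hypothetical problem, not one of Feiring's examples), assuming Python with SciPy:

```python
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes by convention, so the objective is negated.
from scipy.optimize import linprog

c = [-3, -2]                      # objective coefficients (negated)
A_ub = [[1, 1], [1, 3]]           # inequality constraint matrix
b_ub = [4, 6]                     # inequality right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimal point and objective value
```

The optimum here falls on a vertex of the feasible region, at (4, 0) with value 12, as linear programming theory guarantees.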
Using an expository style that builds from simpler to more complex topics, this text explains how to measure the centre and variation of a single variable. It also considers ways to examine the distribution of variables and measure the spread of a variable.
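Concretely, the centre and spread in question are typically the sample mean and variance (standard formulas, not specific to this text):

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$$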
In recent years the loglinear model has become the dominant form of categorical data analysis as researchers have expanded it into new directions. This book shows researchers the applications of one of these new developments - how uniting ordinary loglinear analysis and latent class analysis into a general loglinear model with latent variables can result in a modified LISREL approach. This modified LISREL model will enable researchers to analyze categorical data in the same way that they have been able to use LISREL to analyze continuous data. Beginning with an introduction to ordinary loglinear modelling and standard latent class analysis, the author explains the general principles of loglinear modelling with latent variables, the application of loglinear models with latent variables as a causal model as well as a tool for the analysis of categorical longitudinal data, the strengths and limitations of this technique, and finally, a summary of computer programs that are available for executing this technique.
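For orientation, an ordinary loglinear model for a two-way table expresses the log of the expected cell frequency as a sum of main effects and an interaction term (standard notation, not necessarily the author's); the latent-variable extension described above adds unobserved categorical variables to models of this form:

$$\log F_{ij} = \mu + \lambda_i^A + \lambda_j^B + \lambda_{ij}^{AB}$$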
Discusses data access, transformation, and preparation issues, and how to select the appropriate analytic graphics techniques through a review of various geographic information systems (GIS) and common data sources, such as census products, TIGER files, and CD-ROM access. It provides illustrative output for sample data using selected software.
This volume offers social scientists a concise overview of multiple attribute decision making (MADM) methods, their characteristics and applicability, and methods for solving MADM problems. Real world examples are used to introduce the reader to normative models for optimal decisions. The authors explore how MADM methods can be used for descriptive purposes to model: the existing decision-making process; noncompensatory and scoring methods; accommodation of soft data; construction of multiple-decision support systems; and the validity of methods. The advanced procedures of TOPSIS and ELECTRE are also presented.
Fuzzy set theory deals with sets or categories whose boundaries are blurry or, in other words, 'fuzzy.' This book presents an introduction to fuzzy set theory, focusing on its applicability to the social sciences. It provides a guide for researchers wishing to combine fuzzy set theory with standard statistical techniques and model-testing.
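A minimal sketch of the core idea, assuming Python with NumPy; the 'tall' set and its cut-offs are hypothetical, not taken from the book:

```python
# A membership function maps each case to a degree of membership
# in [0, 1] rather than a crisp 0/1; set operations use min/max.
import numpy as np

def tall_membership(height_cm):
    """Hypothetical linear membership in the fuzzy set 'tall'."""
    return np.clip((height_cm - 160) / 30, 0.0, 1.0)  # 160cm->0, 190cm->1

heights = np.array([155, 170, 185, 195])
mu_tall = tall_membership(heights)
mu_not_tall = 1 - mu_tall                   # fuzzy negation
mu_both = np.minimum(mu_tall, mu_not_tall)  # fuzzy intersection (min rule)
print(mu_tall, mu_both)
```

Combining graded memberships like these with conventional statistics is the kind of hybrid analysis the book addresses.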
Covering the basics of the cohort approach to studying aging and social and cultural change, this volume also critiques several commonly used (but flawed) methods of cohort analysis, and illustrates appropriate methods with analyses of personal happiness and attitudes toward premarital and extramarital sexual relations. Finally, the book describes the major sources of suitable data for cohort studies and gives the criteria for appropriate data. The Second Edition features: a chapter on the analysis of survey data, which includes a discussion of the problems posed by question order effects when data from different surveys are used in a cohort analysis; an emphasis on the difference between linear and nonlinear effects; and instruction on how to use available data from cohort studies.
Panel data - information gathered from the same individuals or units at several different points in time - are commonly used in the social sciences to test theories of individual and social change. This book highlights the developments in this technique in a range of disciplines and analytic traditions.
This book explores the issues underlying the effective analysis of interaction in factorial designs. It includes discussion of: different ways of characterizing interactions in ANOVA; interaction effects using traditional hypothesis testing approaches; and alternative analytic frameworks that focus on effect size methodology and interval estimation.
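For orientation, the interaction effects at issue are the $(\alpha\beta)_{ij}$ terms of the standard two-way factorial model (conventional notation, not necessarily the authors'):

$$Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}$$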
Derived from engineering literature that uses similar techniques to map electronic circuits and physical systems, this work utilizes a systems approach to modeling that offers social scientists a variety of tools that are both sophisticated and easily applied. It introduces a modeling tool to researchers in the social sciences.
Introduces the basis of the confidence interval framework and provides the criteria for 'best' confidence intervals, along with the trade-offs between confidence and precision. This book covers topics such as the transformation principle, confidence intervals, and the relationship between confidence interval and significance testing frameworks.
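The confidence-precision trade-off can be seen in the familiar large-sample interval for a mean (a standard formula, not specific to this book): raising the confidence level raises the critical value $z_{\alpha/2}$ and widens the interval.

$$\bar{x} \pm z_{\alpha/2}\,\frac{s}{\sqrt{n}}$$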
Ordinary regression analysis is not appropriate for investigating dichotomous or otherwise `limited' dependent variables, but this volume examines three techniques -- linear probability, probit, and logit models -- which are well-suited for such data. It reviews the linear probability model and discusses alternative specifications of non-linear models. Using detailed examples, Aldrich and Nelson point out the differences among linear, logit, and probit models, and explain the assumptions associated with each.
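A minimal sketch of fitting both models, assuming Python with statsmodels and simulated data rather than Aldrich and Nelson's examples:

```python
# Fit logit and probit models to a dichotomous outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=500)
# Bernoulli outcome with true logit coefficients (0.5, 1.5)
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-(0.5 + 1.5 * x)))).astype(int)

X = sm.add_constant(x)                 # add intercept column
logit_fit = sm.Logit(y, X).fit(disp=0)
probit_fit = sm.Probit(y, X).fit(disp=0)
print(logit_fit.params)                # roughly (0.5, 1.5)
print(probit_fit.params)               # similar fit on the probit scale
```

The two sets of coefficients differ in scale (logit coefficients run roughly 1.6 to 1.8 times the probit ones) but typically tell the same substantive story.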
Reviews the main competing approaches to modeling multiple time series: simultaneous equations, ARIMA, error correction models, and vector autoregression. This book focuses on vector autoregression (VAR) models as a generalization of the other approaches mentioned. It also reviews arguments for and against using multi-equation time series models.
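A minimal sketch of the VAR case, assuming Python with statsmodels; the two simulated series stand in for real multiple time series:

```python
# Fit a vector autoregression: each series is regressed on
# lagged values of itself and of the other series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
n = 200
y1 = np.zeros(n)
y2 = np.zeros(n)
for t in range(1, n):                    # each series depends on both lags
    y1[t] = 0.5 * y1[t - 1] + 0.2 * y2[t - 1] + rng.normal()
    y2[t] = 0.1 * y1[t - 1] + 0.4 * y2[t - 1] + rng.normal()

data = pd.DataFrame({"y1": y1, "y2": y2})
results = VAR(data).fit(maxlags=2, ic="aic")  # lag order chosen by AIC
print(results.summary())
```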
Combines time series and cross-sectional data to provide the researcher with an efficient method of analysis and improved estimates of the population being studied. With more relevant data available, this technique allows the sample size to be increased, which ultimately yields a more effective study.
Offers an in-depth treatment of robust and resistant regression. This work, which is geared toward both future and practicing social scientists, takes an applied approach and offers readers empirical examples to illustrate key concepts. It includes a web appendix that provides readers with the data and the R code for the examples used in the book.
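A minimal sketch of the basic idea, in Python with statsmodels rather than the R the book's appendix uses, and with simulated rather than the book's data:

```python
# Huber's M-estimator downweights outlying observations that
# distort ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)
y[:5] += 15                          # contaminate with a few outliers

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
rlm_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(ols_fit.params)                # pulled toward the outliers
print(rlm_fit.params)                # closer to (1.0, 2.0)
```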