Describes the mathematical and logical foundations of factor analysis at a level that does not presume advanced mathematical or statistical skills. It illustrates how to carry out factor analysis with several of the more popular packaged computer programs.
Carol A. Chapelle shows readers how to design validation research for tests of human capacities and performance. Any test that is used to make decisions about people or programs should have undergone extensive research to demonstrate that the scores are actually appropriate for their intended purpose. Argument-Based Validation in Testing and Assessment is intended to help close the gap between theory and practice, by introducing, explaining, and demonstrating how test developers can formulate the overall design for their validation research from an argument-based perspective.
Taking the reader step-by-step through the intricacies, theory and practice of regression analysis, Damodar N. Gujarati uses a clear style that doesn't overwhelm the reader with abstract mathematics.
Reviews the sampling methods used in surveys such as simple random sampling, systematic sampling, stratification, cluster and multi-stage sampling, sampling with probability proportional to size, two-phase sampling, replicated sampling, panel designs, and non-probability sampling.
Considers the techniques needed for exploring problems that compromise a regression analysis and for determining whether certain assumptions appear reasonable. The text covers such topics as the problem of collinearity in multiple regression, non-normality of errors and non-constant error variance.
Offers students a brief and accessible approach to systematically quantifying various types of narrative data they can collect during a research process.
This monograph is not statistical. It looks instead at pre-statistical assumptions about dependent variables and causal order. Professor Davis spells out the logical principles that underlie our ideas of causality and explains how to discover causal direction, irrespective of the statistical technique used. He stresses throughout that knowledge of the 'real world' is important and repeatedly challenges the myth that causal problems can be solved by statistical calculations alone.
Introduces the elements of experimental design and analysis. The text covers such topics as the fundamental concept of variability, hypothesis testing, how ANOVA can be extended to the multi-group situation and random designs.
Offers researchers a guide for selecting the best statistical model to use, as well as discussing such topics as contextual analysis with absolute/relative effects, and the choice between regression coefficients as fixed parameters or as random variables.
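As a sketch of that fixed-versus-random distinction (standard multilevel notation, assumed here rather than quoted from the book), a coefficient treated as random varies across the j contexts:

y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + \varepsilon_{ij}, \qquad \beta_{1j} = \gamma_1 + u_j,

and fixing every u_j = 0 recovers the fixed-parameter case.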
A presentation and critique of the use of multiple measures of theoretical concepts for the assessment of validity (using the multi-trait multi-method matrix) and reliability (using multiple indicators with a path analytic framework).
Ordinal data can be rank ordered but not assumed to have equal distances between categories. Using support by judges for civil rights measures and bussing as the primary example, this paper indicates how such data can best be analyzed.
An advanced study which presumes a knowledge of multiple regression and factor analysis techniques, this paper considers two techniques for comparing entire sets of data, and develops the canonical correlation model as an extension of regression analysis in which there are several dependent variables.
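As an illustrative sketch of that extension (using scikit-learn; the data and settings here are assumptions, not drawn from the paper), canonical correlation pairs linear combinations of two variable sets:

```python
# Hypothetical illustration of canonical correlation with scikit-learn.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # first set of variables
# Several dependent variables, partly driven by the first set.
Y = X[:, :2] @ rng.normal(size=(2, 3)) + rng.normal(size=(200, 3))

cca = CCA(n_components=2).fit(X, Y)
U, V = cca.transform(X, Y)                   # canonical variates
# Correlation between each pair of canonical variates.
print([np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)])
```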
Discusses basic concepts and methods associated with the application of operations research toward arriving at an optimum mix, level, or choice.
As budgets tighten and costs increase, it is becoming even more necessary that workable social programmes are shown to be worthy of support. This book presents one approach to evaluation -- multiattribute utility technology -- which stresses that evaluations should be comparative, and that all the different constituencies served by a programme and its different goals have to be kept in mind.
This volume shows how odds ratios can be used as a framework for understanding log-linear models. Moving systematically from the paradigmatic 2x2 case to more complicated cases, the author defines the odds ratio and demonstrates how it is a measure of association for tabular analysis.
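For reference, in the paradigmatic 2x2 case with cell counts n_{11}, n_{12}, n_{21}, n_{22}, the odds ratio is

\theta = \frac{n_{11}/n_{12}}{n_{21}/n_{22}} = \frac{n_{11}\, n_{22}}{n_{12}\, n_{21}},

and \theta = 1 corresponds to no association between the row and column variables.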
Written in nontechnical language, this popular and practical volume has been completely updated to bring readers the latest advice on major issues involved in longitudinal research. It covers: research design strategies; methods of data collection; how longitudinal and cross-sectional research compares in terms of consistency and accuracy of results.
Outlines a set of techniques that enable a researcher to uncover the "hidden structure" of large data bases. These techniques use proximities, measures which indicate how similar or different objects are, to find a configuration of points which reflects the structure in the data.
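A minimal sketch of this idea (using scikit-learn, which the volume itself predates; the dissimilarity values are invented for illustration) takes a proximity matrix and returns a point configuration:

```python
# Hypothetical sketch: multidimensional scaling from a dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarities among four objects (illustrative values).
D = np.array([[0.0, 1.0, 3.0, 4.0],
              [1.0, 0.0, 2.0, 3.0],
              [3.0, 2.0, 0.0, 1.0],
              [4.0, 3.0, 1.0, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per object
print(coords)
```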
A statistical method which will appeal to two groups in particular: those who are currently using the more traditional technique of exploratory factor analysis; and those who are interested in the analysis of covariance structures, commonly known as the LISREL model. The first group will find that this technique may be more appropriate to the analysis of their research problems; while the second group will find that confirmatory factor analysis is a useful first step to understanding the LISREL model, for this book, and its companion volume, Covariance Structure Models, are designed to be read consecutively. The proofs presented are simple, but the reader must feel comfortable with matrix algebra in order to understand the model.
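In the matrix notation the monograph presumes (standard confirmatory factor analysis notation, not quoted from the book), observed variables x load on hypothesized factors \xi through a loading matrix \Lambda whose zero entries are fixed in advance:

x = \Lambda \xi + \delta, \qquad \Sigma = \Lambda \Phi \Lambda' + \Theta_\delta,

so the model is tested by comparing the implied covariance matrix \Sigma with the observed one.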
This volume introduces the theory, method, and applications of one type of conjoint analysis technique. These techniques are used to study individual judgement and decision processes. Based upon Information Integration Theory, metric conjoint analysis allows for evaluation of multi-attribute alternatives based on interval level data. The model that justifies the use of metric conjoint methods, together with the statistical techniques drawn from it, forms the core of this monograph. Also described are applications of the model in marketing, psychology, economics, sociology, planning, and other disciplines, all of which relate to forecasting the decision-making behavior of individuals.
Secondary analysis has assumed a central position in social science research as existing survey data and statistical computing programmes have become increasingly available. This volume presents strategies for locating survey data and provides a comprehensive guide to US social science data archives, describing several major data files. The book also reviews research designs for secondary analysis.
The second edition of this book provides a conceptual understanding of analysis of variance. It outlines methods for analysing variance that are used to study the effect of one or more nominal variables on a dependent, interval level variable. The book presumes only elementary background in significance testing and data analysis.
Discusses the innovative log-linear model of statistical analysis. This model makes no distinction between independent and dependent variables, but is used to examine relationships among categoric variables by analyzing expected cell frequencies.
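As a standard worked equation (conventional notation, not quoted from the volume), the saturated log-linear model for a two-way table expresses the logged expected cell frequency as

\log F_{ij} = \mu + \lambda_i^A + \lambda_j^B + \lambda_{ij}^{AB},

and independence of the two categoric variables corresponds to dropping the \lambda_{ij}^{AB} interaction terms.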
An introduction to the underlying principles, central concepts, and basic techniques for conducting and understanding exploratory data analysis -- with numerous social science examples.
This paper considers the possible effects of making inferences about individuals from aggregate data. It assumes a knowledge of regression analysis, and explores the utility of techniques designed to make the inferences in causal modelling more reliable, including a comparison between ecological regression models and ecological correlation.
How do we group different subjects on a variety of variables? Should we use a classification procedure in which only the concepts are classified (typology), one in which only empirical entities are classified (taxonomy), or a combination of both? Kenneth D. Bailey addresses these questions and shows how classification methods can be used to improve research. Beginning with an exploration of the advantages and disadvantages of classification procedures, the book covers topics such as: clustering procedures including agglomerative and divisive methods; the relationship among various classification techniques; how clustering methods compare with related statistical techniques; classification resources; and software packages for use in clustering techniques.
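A minimal sketch of the agglomerative approach the book describes (SciPy is one of many packages of the kind it surveys; the data here are invented for illustration):

```python
# Hypothetical sketch: agglomerative hierarchical clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Two loose groups of subjects measured on three variables.
data = np.vstack([rng.normal(0, 1, size=(10, 3)),
                  rng.normal(5, 1, size=(10, 3))])

Z = linkage(data, method="average")              # merge closest clusters step by step
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two clusters
print(labels)
```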
Monte Carlo simulation is a method of evaluating substantive hypotheses and statistical estimators by developing a computer algorithm to simulate a population, drawing multiple samples from this pseudo-population, and evaluating estimates obtained from these samples. This book explains the logic behind the method and demonstrates its uses for research.
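A minimal sketch of that logic (illustrative assumptions throughout; not an example from the book) evaluates the sample mean as an estimator:

```python
# Hypothetical sketch of a Monte Carlo evaluation of an estimator.
import numpy as np

rng = np.random.default_rng(42)
true_mean, n, reps = 10.0, 30, 5000

# Draw repeated samples from a pseudo-population and collect the estimates.
estimates = np.array([rng.normal(true_mean, 2.0, size=n).mean()
                      for _ in range(reps)])

print("bias:", estimates.mean() - true_mean)
print("simulated standard error:", estimates.std(ddof=1))  # compare to 2/sqrt(30)
```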
Repeated surveys allow researchers the opportunity to analyze changes in society as a whole. This book includes: a discussion of the classic issue of how to separate cohort, period and age effects; methods for modelling aggregate trends; and methods for estimating cohort replacement's contribution to aggregate trends.
Although clustering--the classifying of objects into meaningful sets--is an important procedure, cluster analysis as a multivariate statistical procedure is poorly understood. This volume is an introduction to cluster analysis for professionals, as well as advanced students.
Empirical researchers, to whom Iversen's volume offers an introduction, have generally lacked a grounding in the methodology of Bayesian inference. As a result, applications are few. After outlining the limitations of classical statistical inference, the author proceeds through a simple example to explain Bayes' theorem and how it may overcome these limitations. Typical Bayesian applications are shown, together with the strengths and weaknesses of the Bayesian approach. This monograph thus serves as a companion volume for Henkel's Tests of Significance (QASS vol 4).
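For orientation, the theorem at the heart of that example updates a prior P(\theta) over a parameter with observed data D:

P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)},

so inference centers on the posterior distribution rather than on long-run sampling properties.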