
Sunday, February 10, 2008

Student resources for Quantitative Approaches in Business Studies

Student resources
Excel supplement
The Excel supplement is a tutorial for Microsoft Excel. It was written by Bernard V Liengme specially as a supplement to Quantitative Approaches in Business Studies by Clare Morris.

The table below explains how the units of the tutorial link to the chapters of Quantitative Approaches in Business Studies. The student is advised to read the first two units carefully. The other units may be read in any order.

The units are in Adobe Acrobat format (PDF). The Adobe Acrobat Reader is available FREE from Adobe Systems Incorporated. The workbooks named below are in Excel 97/2000 format. Right-click on any file and choose Save As to save the file to your hard disk.

Tutorial units and the corresponding chapters of Quantitative Approaches in Business Studies:

  • Unit 1, Getting Started with Microsoft Excel: Chapter 2, Spreadsheets and other computer-based resources
  • Unit 2, Formulas and Functions: Chapter 2, Spreadsheets and other computer-based resources
  • Unit 3, Solving Equations: Chapter 1, Tools of the Trade; Chapter 19, Linear Programming
  • Unit 4, Creating Charts: Chapter 5, Presenting the Figures
  • Unit 5, Regression Analysis: Chapter 13, Looking for Connections; Chapter 14, Spotting the Relationship; Chapter 15, Multiple Regression
  • Unit 6, Financial Calculations: Chapter 18, Allowing for Interest
  • Unit 7, Descriptive Statistics (workbook: STATISTICS1): Chapter 6, Summarising the Figures
  • Unit 8, Statistical Distributions (workbooks: PROBABILITY, NORMALDISTA, NORMALDISTB & NORMALDISTC): Chapter 9, Patterns of Probability
  • Unit 9, Hypothesis Testing (workbooks: HYPOTHESIS and CHISQUARED): Chapter 10, Estimating from Samples; Chapter 11, Checking a Theory

Every effort has been made to be accurate but if you believe you have found an error please let the author of the supplement, Bernard Liengme, know. As stated above, the workbooks are in Excel 97/2000 format; files in Excel 5/95 format are available from the author upon request. The author will be pleased to answer questions on Excel but please make them clear and specific.

Bernard V Liengme



Copyright © 1995-2008, Pearson Education, Inc.

Saturday, February 9, 2008

Quantitative Research

Quantitative research is the systematic scientific investigation of properties and phenomena and their relationships. The objective of quantitative research is to develop and employ mathematical models, theories and/or hypotheses pertaining to natural phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships.

Quantitative research is widely used in both the natural sciences and social sciences, from physics and biology to sociology and journalism. It is also used as a way to research different aspects of education. The term quantitative research is most often used in the social sciences in contrast to qualitative research.

Quantitative research is generally approached using scientific methods, which include:

  • The generation of models, theories and hypotheses
  • The development of instruments and methods for measurement
  • Experimental control and manipulation of variables
  • Collection of empirical data
  • Modeling and analysis of data
  • Evaluation of results

Quantitative research is often an iterative process whereby evidence is evaluated, theories and hypotheses are refined, technical advances are made, and so on. Virtually all research in physics is quantitative whereas research in other scientific disciplines, such as taxonomy and anatomy, may involve a combination of quantitative and other analytic approaches and methods.

In the social sciences particularly, quantitative research is often contrasted with qualitative research which is the examination, analysis and interpretation of observations for the purpose of discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modelled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn (1961, p. 162) concludes that “large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences”. Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?).

Although quantitative investigation of the world has existed since people first began to record counts of events or objects, the modern idea of quantitative processes has its roots in Auguste Comte’s positivist framework.

Statistics in quantitative research

Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, social sciences and biology. Quantitative research using statistical methods typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide.

Empirical relationships and associations are also frequently studied using some form of general linear model, non-linear model, or factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation: it is always possible that a spurious relationship exists between variables that are found to covary to some degree. Associations may be examined between any combination of continuous and categorical variables using statistical methods.
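This principle can be illustrated with a short simulation (a hypothetical sketch, not drawn from the text): two variables that never influence each other will still show a strong correlation when both depend on a common lurking variable.

```python
import math
import random
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)

# z is a lurking (confounding) variable; x and y each depend on z,
# but neither has any causal influence on the other.
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

print(round(pearson_r(x, y), 2))  # strongly positive, yet no causal link
```

Covariance alone therefore cannot distinguish a direct causal link from a shared cause; that distinction requires experimental control or other design features.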

Measurement in quantitative research

Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Thomas Kuhn (1961) argued that results which appear anomalous in the context of accepted theory potentially lead to the genesis of a search for a new, natural phenomenon. He believed that such anomalies are most striking when encountered during the process of obtaining measurements, as reflected in the following observations regarding the function of measurement in science:

When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180).

In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models known as the Rasch model and Item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences.

Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.
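Proxy calibration of this kind can be sketched as a simple least-squares regression over the period of overlap with the instrumental record; the numbers below are invented for illustration.

```python
import statistics

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (simple linear regression)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical overlap period: growing-season temperature (deg C)
# measured by instruments, alongside tree-ring width (mm).
temperature = [8.1, 8.4, 8.0, 8.9, 9.2, 8.6, 9.0, 8.3]
ring_width = [1.1, 1.3, 1.0, 1.7, 1.9, 1.4, 1.8, 1.2]

a, b = fit_line(ring_width, temperature)

# Reconstruct temperatures for years where only the proxy survives.
old_widths = [1.0, 1.5, 2.0]
reconstructed = [a + b * w for w in old_widths]
print([round(t, 2) for t in reconstructed])
```

The fitted slope captures only the variance the proxy shares with temperature; the residual scatter around the line is the part of the original record the proxy cannot recover.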

Quantitative methods

Quantitative methods are research techniques used to gather quantitative data: information dealing with numbers and anything that is measurable. Statistics, tables and graphs are often used to present the results of these methods. They are therefore to be distinguished from qualitative methods.

In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or other type of method has become a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on to the other. Advocates of quantitative methods argue that only by using such methods can the social sciences become truly scientific; advocates of qualitative methods argue that quantitative methods tend to obscure the reality of the social phenomena under study because they underestimate or neglect the non-measurable factors, which may be the most important. The modern tendency (and in reality the majority tendency throughout the history of social science) is to use eclectic approaches. Quantitative methods might be used with a global qualitative frame. Qualitative methods might be used to understand the meaning of the numbers produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.

Examples of quantitative research

  • A study of the percentages of the elements that make up Earth’s atmosphere.
  • A survey concluding that the average patient has to wait two hours in the waiting room of a certain doctor before being seen.
  • An experiment in which group X is given two tablets of aspirin a day and group Y two tablets of a placebo a day, with each participant randomly assigned to one of the groups.

Numerical factors such as the two tablets, the percentages of elements and the waiting time are what make these situations and results quantitative.

Features of Qualitative & Quantitative Research

“All research ultimately has a qualitative grounding”
- Donald Campbell

“There’s no such thing as qualitative data. Everything is either 1 or 0”
- Fred Kerlinger

  • Qualitative: The aim is a complete, detailed description. / Quantitative: The aim is to classify features, count them, and construct statistical models in an attempt to explain what is observed.
  • Qualitative: The researcher may know only roughly in advance what he/she is looking for. / Quantitative: The researcher knows clearly in advance what he/she is looking for.
  • Qualitative: Recommended during the earlier phases of research projects. / Quantitative: Recommended during the later phases of research projects.
  • Qualitative: The design emerges as the study unfolds. / Quantitative: All aspects of the study are carefully designed before data is collected.
  • Qualitative: The researcher is the data-gathering instrument. / Quantitative: The researcher uses tools, such as questionnaires or equipment, to collect numerical data.
  • Qualitative: Data is in the form of words, pictures or objects. / Quantitative: Data is in the form of numbers and statistics.
  • Qualitative: Subjective: individuals’ interpretation of events is important, e.g. uses participant observation, in-depth interviews etc. / Quantitative: Objective: seeks precise measurement and analysis of target concepts, e.g. uses surveys, questionnaires etc.
  • Qualitative: Data is richer and more time-consuming to collect, and less able to be generalized. / Quantitative: Data collection is more efficient and able to test hypotheses, but may miss contextual detail.
  • Qualitative: The researcher tends to become subjectively immersed in the subject matter. / Quantitative: The researcher tends to remain objectively separated from the subject matter.

(The two quotes are from Miles & Huberman (1994, p. 40), Qualitative Data Analysis.)

Main Points

  • Qualitative research involves analysis of data such as words (e.g., from interviews), pictures (e.g., video), or objects (e.g., an artifact).

  • Quantitative research involves analysis of numerical data.

  • The strengths and weaknesses of qualitative and quantitative research are a perennial, hot debate, especially in the social sciences. The issues invoke the classic ‘paradigm wars’.

  • The personality / thinking style of the researcher and/or the culture of the organization is under-recognized as a key factor in preferred choice of methods.

  • Overly focusing on the debate of “qualitative versus quantitative” frames the methods in opposition. It is important to focus also on how the techniques can be integrated, such as in mixed methods research. More good can come of social science researchers developing skills in both realms than debating which method is superior.

  • source: James Neill

Quantitative Marketing Research

Broadly, there are five major steps involved in the research process:

  1. Defining the Problem.
  2. Research Design.
  3. Data Collection.
  4. Analysis.
  5. Report Writing & presentation.

Each of these steps can be broken down in more detail:

  1. Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
  2. Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
  3. Hypothesis specification - What claim(s) do we want to test?
  4. Research design specification - What type of methodology to use? - examples: questionnaire, survey
  5. Question specification - What questions to ask? In what order?
  6. Scale specification - How will preferences be rated?
  7. Sampling design specification - What is the total population? What sample size is necessary for this population? What sampling method to use? - examples: probability sampling (cluster sampling, stratified sampling, simple random sampling, multistage sampling, systematic sampling) and non-probability sampling (convenience sampling, judgement sampling, purposive sampling, quota sampling, snowball sampling, etc.)
  8. Data collection - Use mail, telephone, internet, mall intercepts
  9. Codification and re-specification - Make adjustments to the raw data so it is compatible with statistical techniques and with the objectives of the research - examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization
  10. Statistical analysis - Perform various descriptive and inferential techniques (see below) on the raw data. Make inferences from the sample to the whole population. Test the results for statistical significance.
  11. Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
  12. Write the research report - Report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10 minute presentation. Be prepared for questions.

Descriptive techniques

The descriptive techniques that are commonly used include measures of central tendency (mean, median, mode), measures of dispersion (range, variance, standard deviation), and frequency distributions and cross-tabulations.

Inferential techniques

Inferential techniques involve generalizing from a sample to the whole population. It also involves testing a hypothesis. A hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. Then a test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. A null hypothesis is a hypothesis that is presumed true until a hypothesis test indicates otherwise. Typically it is a statement about a parameter that is a property of a population. The parameter is often a mean or a standard deviation.

Not unusually, such a hypothesis states that the parameters, or mathematical characteristics, of two or more populations are identical. For example, if we want to compare the test scores of two random samples of men and women, the null hypothesis would be that the mean score in the male population from which the first sample was drawn, was the same as the mean score in the female population from which the second sample was drawn:

H0: μ1 = μ2

where:

H0 = the null hypothesis,
μ1 = the mean of population 1, and
μ2 = the mean of population 2.

The equality operator makes this a two-tailed test. The alternative hypothesis can be either greater than or less than the null hypothesis. In a one-tailed test, the operator is an inequality, and the alternative hypothesis has directionality:

H0: μ1 ≤ μ2

This is sometimes called a hypothesis of significant difference, because it tests the difference between two groups with respect to one variable.

Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

H0: μ1 − μ2 = 0

A hypothesis of association is where there is one population, but two traits being measured. It is a test of association of two traits within one group.

The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the alpha value of the test. After the data is available, the test statistic is calculated and we determine whether it is inside the critical region. If the test statistic is inside the critical region, then our conclusion is either that the hypothesis is incorrect, or that an event of probability less than or equal to alpha has occurred. If the test statistic is outside the critical region, the conclusion is that there is not enough evidence to reject the hypothesis.

The significance level of a test is the maximum probability of accidentally rejecting a true null hypothesis (a decision known as a Type I error). For example, one may choose a significance level of, say, 5%, and calculate a critical value of a statistic (such as the mean) so that the probability of it exceeding that value, given the truth of the null hypothesis, would be 5%. If the actual, calculated statistic value exceeds the critical value, then it is significant “at the 5% level”.
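As a sketch of this procedure on hypothetical data (a z test is used because the samples are large), the following compares two group means at the 5% level:

```python
import math
import random
from statistics import NormalDist, fmean, stdev

def two_sample_z(sample1, sample2, alpha=0.05):
    """Two-tailed z test of H0: mu1 == mu2 (reasonable for large samples)."""
    n1, n2 = len(sample1), len(sample2)
    se = math.sqrt(stdev(sample1) ** 2 / n1 + stdev(sample2) ** 2 / n2)
    z = (fmean(sample1) - fmean(sample2)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value
    return z, p, p < alpha                  # reject H0 when p < alpha

# Hypothetical test scores for two random samples of men and women.
random.seed(1)
men = [random.gauss(70, 10) for _ in range(200)]
women = [random.gauss(74, 10) for _ in range(200)]

z, p, reject = two_sample_z(men, women)
print(reject)
```

Here `p < alpha` is exactly the check that the test statistic fell inside the critical region of the null distribution.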

Types of hypothesis tests

  • Parametric tests of a single sample:
    • one-sample t test
    • z test
  • Parametric tests of two independent samples:
    • two-group t test
    • z test
  • Parametric tests of paired samples:
    • paired t test
  • Nominal/ordinal level tests of a single sample:
    • chi-square
    • Kolmogorov-Smirnov one-sample test
    • runs test
    • binomial test
  • Nominal/ordinal level tests of two independent samples:
    • chi-square test of independence
    • Mann-Whitney U test
  • Nominal/ordinal level tests for paired samples:
    • McNemar test
    • Wilcoxon signed-rank test

Point to remember:

    • If a variable (e.g. respondents’ preference for the colour of a product) is interval- or ratio-scaled and meets certain statistical assumptions (e.g. normality), then it is eligible for a parametric test.
    • If a variable (e.g. gender, or the rank order of a few products on certain attributes) is nominal- or ordinal-scaled, and/or does not meet those statistical assumptions, then it is not eligible for a parametric test. In this situation we have to use a non-parametric test.

We should use a non-parametric test only if the sample/variable is not eligible for a parametric test. Remember that non-parametric tests are among the most used, and misused, techniques.
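As an example of the non-parametric route, the Mann-Whitney U test compares two independent samples using only the ranks of the observations, so no normality assumption is needed. The sketch below uses the large-sample normal approximation, ignores tie corrections, and runs on invented data.

```python
import math
import random
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Mann-Whitney U test for two independent samples.
    Large-sample normal approximation; no tie correction."""
    n1, n2 = len(x), len(y)
    pooled = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(x + y))
    # Rank sum of the first sample (ranks start at 1).
    r1 = sum(rank + 1 for rank, (_, grp) in enumerate(pooled) if grp == 0)
    u = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

random.seed(0)
a = [random.random() for _ in range(100)]        # baseline group
b = [random.random() + 0.5 for _ in range(100)]  # shifted group

u, p = mann_whitney_u(a, b)
print(p < 0.05)  # the shift is detected without assuming normality
```

Because only ranks enter the statistic, the same code works for ordinal responses (e.g. rank orders of products), where a t test would not be appropriate.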

Reliability and validity

Research should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population.

Reliability is the extent to which a measure will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Stability over repeated measures is assessed with the Pearson coefficient. Alternative forms reliability checks how similar the results are if the research is repeated using different forms. Internal consistency reliability checks how well the individual measures included in the research are converted into a composite measure. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability). The value of the Pearson product-moment correlation coefficient is adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. A commonly used measure is Cronbach’s α, which is equivalent to the mean of all possible split-half coefficients. Reliability may be improved by increasing the sample size.
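Assuming hypothetical questionnaire data, the two internal-consistency statistics mentioned above can be computed directly: the Spearman-Brown step-up of a split-half correlation, and Cronbach's alpha from per-item variances.

```python
from statistics import pvariance

def spearman_brown(r_half):
    """Predicted full-length reliability from a split-half correlation."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` holds one list of scores per questionnaire item."""
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 4-item scale answered by 6 respondents (1-5 agreement scores).
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 1, 4, 3],
    [3, 3, 4, 2, 5, 3],
    [4, 3, 5, 2, 4, 2],
]

print(round(spearman_brown(0.6), 2))  # 0.75
print(round(cronbach_alpha(items), 2))
```

A high alpha here simply reflects that the four invented item columns move together across respondents; it says nothing about whether the scale is valid.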

Validity asks whether the research measured what it intended to. Content validation (also called face validity) checks how well the content of the research relates to the variables being studied: are the research questions representative of the variables being researched? It is a demonstration that the items of a test are drawn from the domain being measured. Criterion validation checks how meaningful the research criteria are relative to other possible criteria. When the criterion is collected later, the goal is to establish predictive validity. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity (how well the research relates to other measures of the same construct), discriminant validity (how poorly the research relates to measures of opposing constructs), and nomological validity (how well the research relates to other variables as required by theory).

Internal validation, used primarily in experimental research designs, checks the relation between the dependent and independent variables. Did the experimental manipulation of the independent variable actually cause the observed results? External validation checks whether the experimental results can be generalized.

Validity implies reliability: a valid measure must be reliable. But reliability does not necessarily imply validity: a reliable measure need not be valid.

Types of errors

Random sampling errors:

  • sample too small
  • sample not representative
  • inappropriate sampling method used
  • random errors

Research design errors:

  • bias introduced
  • measurement error
  • data analysis error
  • sampling frame error
  • population definition error
  • scaling error
  • question construction error

Interviewer errors:

  • recording errors
  • cheating errors
  • questioning errors
  • respondent selection error

Respondent errors:

  • non-response error
  • inability error
  • falsification error

Hypothesis errors:

  • type I error (also called alpha error)
    • the study results lead to the rejection of the null hypothesis even though it is actually true
  • type II error (also called beta error)
    • the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
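The meaning of the alpha level can be checked by simulation (a hypothetical sketch): when the null hypothesis really is true, a test at the 5% level should commit a Type I error in roughly 5% of repeated studies.

```python
import math
import random
from statistics import NormalDist, fmean, stdev

def z_test_rejects(n=50, alpha=0.05):
    """Draw two samples from the SAME population, then z-test H0: mu1 == mu2.
    Any rejection here is, by construction, a Type I error."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
    z = (fmean(a) - fmean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p < alpha

random.seed(7)
trials = 2000
type_i_rate = sum(z_test_rejects() for _ in range(trials)) / trials
print(round(type_i_rate, 3))  # close to the nominal alpha of 0.05
```

Lowering alpha reduces the Type I error rate but, for a fixed sample size, raises the Type II error rate: the two error types trade off against each other.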

References

OfficeUsers.ORG Editorials