Quantitative methods glossary

The following page is simply a glossary of terms to assist students with quantitative research methods in psychology.  If there is a term that is missing from this list and you would like a clear definition, please let me know.

The terms below may be applied in Paper 1 and Paper 2 when discussing research methods or evaluating studies. They also play an important role in the internal assessment.

Research design

Alternative hypothesis: Also known as the research hypothesis. A hypothesis that states that there will be a statistically significant relationship between two or more variables.

Baseline: The level of responding before any treatment is introduced; it therefore acts as a control condition. For example, measuring participants' normal brain activity before they are asked to recall a stressful event.

Confederate: A helper of a researcher who pretends to be a real participant.

Control condition: A condition that does not receive the treatment or intervention that the other conditions do. It is used to see what would happen if the independent variable were not manipulated.

Correlational research: The researcher measures two or more variables without manipulating an independent variable and with little or no attempt to control extraneous variables.
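
For illustration, a correlational result is often summarised with a correlation coefficient. Below is a minimal Python sketch; the variables and data are hypothetical and purely illustrative.

from scipy import stats

# Hypothetical data: hours of sleep and memory-test scores for ten participants.
# Neither variable is manipulated; both are simply measured.
sleep_hours   = [6.5, 7.0, 8.0, 5.5, 7.5, 6.0, 8.5, 7.0, 5.0, 6.5]
memory_scores = [12, 14, 17, 10, 15, 11, 18, 13, 9, 12]

r, p = stats.pearsonr(sleep_hours, memory_scores)
print(f"Pearson's r = {r:.2f}, p = {p:.3f}")
# Even a strong correlation does not show that sleep causes better memory
# (see bidirectional ambiguity under "Evaluating research" below).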

Counterbalancing: A technique used to deal with order effects when using a repeated measures design. When a study is counterbalanced, the sample is divided in half, with one half completing the two conditions in one order and the other half completing the conditions in the reverse order.
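
As an illustration, here is a minimal Python sketch of counterbalancing two conditions, A and B; the participant labels are hypothetical.

import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)

half = len(participants) // 2
# One half completes the conditions in the order A then B,
# the other half in the reverse order, B then A.
order_ab = participants[:half]
order_ba = participants[half:]
print("A then B:", order_ab)
print("B then A:", order_ba)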

Cross-sectional design: Comparing two or more groups on a particular variable at a specific time. The opposite is a longitudinal design where the researcher measures change in an individual over time.

Dependent variable: The variable that is measured and is hypothesized to be the effect of the independent variable.

Double-blind testing: An experimental procedure in which neither the researcher doing the study nor the participants know the specific type of treatment each participant receives until after the experiment is over; a double-blind procedure is used to guard against both experimenter bias and placebo effects.

Factorial design: A design that includes multiple independent variables.

Field experiment: A study that is conducted outside the laboratory in a “real-world” setting.

Hypothesis: A testable statement of what the researcher predicts will be the outcome of the study, usually based on established theory.

Independent samples design: Also called an independent measures or between-groups design. Two or more groups are used, and each participant takes part in only one condition of the independent variable.

Independent variable: The variable that is manipulated by the researcher.

Meta-analysis: Pooling data from multiple studies of the same research question to arrive at one combined answer.
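
As an illustrative sketch only, one common way of pooling results is to weight each study's effect size by the inverse of its variance; the numbers below are hypothetical.

# Hypothetical effect sizes (e.g. Cohen's d) and their variances from three studies.
effects   = [0.40, 0.55, 0.30]
variances = [0.02, 0.05, 0.01]

weights = [1 / v for v in variances]   # more precise studies get more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"Pooled effect size = {pooled:.2f}")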

Natural experiment: The study of a naturally occurring situation in the real world. The researcher does not manipulate an independent variable or assign participants randomly to conditions.

Non-equivalent groups design: A between-subjects design in which participants have not been randomly assigned to conditions. A typical example would be to look at gender differences with regard to a certain behaviour.

Null hypothesis: A hypothesis that says there will be no statistical significance between two variables. It is the hypothesis that a researcher will try to disprove.

One-tailed hypothesis: A prediction stating that an effect will occur and specifying its direction - that is, whether the dependent variable will increase or decrease in response to changes in the independent variable.

Operationalization: The process by which the researcher decides how a variable will be measured. For example, "marital satisfaction" cannot be measured directly, so the researcher would have to decide which observable traits or behaviours will be measured in order to assess the construct.

Pretest-posttest design: The dependent variable is measured before the independent variable has been manipulated and then again after it has been manipulated.

p-value: The probability that, if the null hypothesis were true, the result found in the sample would occur.
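
For illustration, the sketch below obtains a p-value from an independent-samples t-test in Python; the scores are hypothetical.

from scipy import stats

# Hypothetical memory scores for a treatment group and a control group.
treatment = [14, 16, 15, 18, 17, 15, 16, 19]
control   = [12, 13, 14, 12, 15, 13, 11, 14]

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# p is the probability of a sample difference at least this large if the
# null hypothesis were true; p < .05 is the conventional cut-off for
# rejecting the null hypothesis.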

Quasi-experiment: The researcher manipulates an independent variable but does not randomly assign participants to conditions.

Random allocation: A method of controlling extraneous variables across conditions by using a random process to decide which participants will be in which conditions, for example by using a random number generator or pulling names out of a hat.
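
A minimal Python sketch of random allocation to two conditions; the participant labels are hypothetical.

import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Each participant has an equal chance of being placed in either condition.
experimental = set(random.sample(participants, k=len(participants) // 2))
allocation = {p: ("experimental" if p in experimental else "control")
              for p in participants}
print(allocation)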

Repeated measures design: Also called a “within groups” design. The same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.

Single-blind testing: An experiment in which the researchers know which participants are receiving a treatment and which are not; however, the participants do not know which condition they are in.

True experiment: A study in which participants are randomly assigned to either a treatment group or a control group, and an independent variable is manipulated by the researcher.

Two-tailed hypothesis: A hypothesis that one experimental group will differ from another without specification of the expected direction of the difference - that is, without predicting an increase or decrease in behaviour.

Sampling techniques

Opportunity sampling: Also called convenience sampling. A sampling technique where participants are selected based on naturally occurring groups or participants who are easily available.

Random sampling: Selecting a sample of participants from a larger potential group of eligible individuals, such that each person has the same fixed probability of being included in the sample.

Self-selected sampling: Also called volunteer sampling. Participants choose to become part of a study because they volunteer by responding to an advert or a request to take part in a study.

Stratified random sampling: A method of probability sampling in which the population is divided into different subgroups or “strata” and then a random sample is taken from each “stratum.”
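
A minimal Python sketch of stratified random sampling; the strata and the 20% sampling fraction are hypothetical.

import random

# Hypothetical population divided into strata by year group.
strata = {
    "Year 12": ["S" + str(i) for i in range(1, 61)],    # 60 students
    "Year 13": ["S" + str(i) for i in range(61, 101)],  # 40 students
}

sample = []
for stratum, members in strata.items():
    k = round(len(members) * 0.20)       # take 20% of each stratum
    sample.extend(random.sample(members, k))

print(len(sample), "participants sampled")   # 12 from Year 12, 8 from Year 13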

Evaluating research

Bidirectional ambiguity: A limitation of many correlational studies. It is not possible to know whether x causes y, whether y causes x, whether the two interact to cause the behaviour, or whether the relationship is merely coincidental and no true relationship exists.

Construct validity: The degree to which a test or study actually measures the construct it claims to measure. For example, if a researcher develops a new questionnaire to evaluate respondents’ levels of aggression, the construct validity of the instrument would be the extent to which it actually assesses aggression as opposed to assertiveness, social dominance, or irritability.

Demand characteristics: Cues that may influence or bias participants’ behavior, for example, by suggesting the outcome or response that the experimenter expects or desires.

Ecological fallacy: A mistaken conclusion drawn about individuals based on findings from groups to which they belong. For example, a researcher commits this fallacy by assuming that because the participants in a sample are Japanese, they must be collectivistic. The ecological fallacy can be controlled for by giving participants a test that directly measures the assumed variable.

Ecological validity: The degree to which results obtained from research or experimentation are representative of conditions in the wider world.  Ecological validity is influenced by the level of control in the environment (hence, ecological).

Expectancy effect: When a researcher’s expectations about the findings of the research are inadvertently communicated to participants and influence their responses. This distortion of results arises from participants’ reactions to subtle cues unintentionally given by the researcher - for example, through body movements, gestures, or facial expressions.

External validity: The extent to which the results of a study can be generalized beyond the sample that was tested.

Extraneous variable: Also known as a confounding variable. A variable that is not under investigation in an experiment but may potentially affect the dependent variable if it is not properly controlled.

Fatigue effect: A type of order effect in which a participant's performance decreases in later conditions because they have become tired or bored with the activity.

Interference effect: A type of order effect in which the first condition influences the outcome of the second condition. For example, when given two sets of words to remember, a participant may recall a word from the first condition while trying to recall words in the second condition.

Internal validity: When an experiment was conducted using appropriate controls so that it supports the conclusion that the independent variable caused observed differences in the dependent variable.

Mundane realism: The extent to which the participants and the situation studied are representative of everyday life. If a study is highly artificial, it is said to lack mundane realism.

Order effects: Differences in research participants' responses that result from the order in which they participate in the experimental conditions. Examples include the fatigue effect, the interference effect, and the practice effect.

Participant attrition: The rate at which participants drop out of a study over time. This often occurs when research has many steps or takes place over a long period of time.

Placebo effect: A beneficial effect produced by a placebo drug or treatment, which cannot be attributed to the properties of the placebo itself and must, therefore, be due to the patient's belief in that treatment.

Practice effect: A type of order effect where a participant improves in performance in later conditions because practice has led to the development of skill or learning.

Random error: Error that is due to chance alone. Random errors occur when unexpected or uncontrolled factors affect the variable being measured or the process of measurement.

Reactivity: When participants change their behaviour due to their awareness of being observed.

Reliability: The consistency of a measure - that is, the degree to which a study is free of random error, obtaining the same results across time with the same population.

Sampling bias: When a sample is selected in such a way that it is not representative of the population from which it was drawn. When a sample is biased, population validity is decreased.

Type I Error: When the null hypothesis is rejected although it is true; when the researcher concludes there is a relationship in the population when in fact there is not.

Type II Error: When the null hypothesis is retained although it is false; when the researcher concludes there is no relationship in the population when in fact there is one.
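
The simulation sketch below (illustrative only, with arbitrary parameters) shows how often a Type I error occurs purely by chance when the null hypothesis is actually true.

import random
from scipy import stats

random.seed(1)
n_simulations = 1000
false_positives = 0

for _ in range(n_simulations):
    # Both groups are drawn from the same population, so the null hypothesis is true.
    group_a = [random.gauss(100, 15) for _ in range(30)]
    group_b = [random.gauss(100, 15) for _ in range(30)]
    t, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1   # rejecting a true null hypothesis is a Type I error

print(false_positives / n_simulations)   # roughly 0.05, the alpha level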

Validity: The degree to which a test measures what it claims to measure.