The major features of science:
– Objectivity – objective knowledge should be free of opinion or bias and based purely on empirical evidence.
– Repeatability – the ability to check and verify information by repeating a study.
– Falsifiability – the ability to demonstrate that a theory is wrong.

Validating new knowledge and the role of peer review:
New knowledge is published in scientific journals, which act as permanent records of research. Before publication it is checked by professionals in the specific field of research, who peer review it; only high-quality work is published.
There are a few issues with the peer review system, including:
– The rejection of new research which does not reflect the current paradigm.
– The file drawer phenomenon – research which does not support the hypothesis ends up in a drawer rather than being published.
– Reviewer bias – if the expert reviewer disagrees with the findings, or the research does not come from a particular institution, it may not be published.

Research Methods (selecting and applying appropriate methods):
– Interviews and Surveys – choose when you need to get detailed information or want in-depth responses.
– Observations – choose when you want to gather data without unduly influencing participants' behavior, or when you are interested in how behavior appears in a natural environment.
– Case Study – an in-depth study of a single person or small group of people.
– Correlation Study – choose when looking for a relationship between two variables, e.g. the amount of time spent watching TV and aggression.
– Lab Experiment – choose when looking for cause and effect between two specific variables.

Sampling Strategies and Implications of:
– Random – every member of the target population is identified and a random sampling technique is employed to select the sample, so each member has an equal chance of being chosen.
– Opportunity – a sample consisting of those people available to the researcher; the researcher approaches people and asks them to take part. This reduces population validity and leads to a high chance of bias within the sample.
– Volunteer – participants volunteer to take part in the research (usually in response to an advert placed by the researcher). This reduces population validity, as a specific type of person is likely to volunteer.

Validity and Issues of:
Types:
– Internal = the controlling of variables within the study, e.g. does the test measure what it was designed to measure?
– External = how well the results of a study can be generalized beyond the study itself (e.g. beyond the participants used and the setting of the original study).
Improving validity:
– Internal = use measures or scales which have been previously validated, carefully control all variables and/or run a pilot study.
– External = replicate with different groups of people and/or in different settings.

Reliability and Issues of:
– Internal = the consistency within a test, e.g. how well the items produce the same results each time.
– External = the consistency of a test over time, i.e. whether it produces the same results on each occasion it is used.
Improving reliability:
– Internal = use the split-half method to compare scores from one half of the test to the other.
– External = use the test-retest method to retest the same or similar participants. A high correlation between all replications indicates a reliable test.

Ethical considerations in the design and conduct of research:
Informed consent, deception, debriefing, right to withdraw, confidentiality, and protection from physical and psychological harm.

Choosing graphs:
– Bar Chart – shows summary statistics. Useful for showing differences in data (e.g. means) between groups. The bars on the graph are separate because the data is discontinuous.
– Histogram – shows the whole distribution of a group of data. The bars are together because the data is continuous; the area of each bar represents the frequency of the score.
– Scattergram – shows the relationship between two variables, i.e. the strength and direction of the correlation.

Probability and Significance (interpreting and errors):
Probability is the likelihood of something happening and is expressed as a number between 0 and 1 (0 means the event will not happen and 1 means it definitely will).
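The 0-to-1 probability scale can be illustrated with a short Python sketch (purely illustrative, not part of the syllabus): estimating the probability of a fair coin landing heads by repeating the event many times.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Simulate 10,000 coin flips and count the proportion of heads:
flips = [random.random() < 0.5 for _ in range(10_000)]
p_heads = sum(flips) / len(flips)

# The estimate comes out close to 0.5, and like any probability
# it must lie between 0 (never happens) and 1 (always happens).
print(p_heads)
```

The more repetitions, the closer the estimated proportion settles toward the true probability, which is why repeatability matters for empirical claims.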
We use statistical tests in psychological research to work out the probability of our results being due to chance. To avoid incorrectly accepting or rejecting our experimental hypothesis, we set the level of probability at 5% (0.05). This tells us there is at most a 5% probability that our results are due to chance. If we are too lenient or too strict with our data, a Type 1 or Type 2 error can occur.
– Type 1 error – being too lenient. By looking to see whether our results are significant at a lenient level (e.g. 0.10 or 10%), we run the risk of accepting our experimental hypothesis when we should reject it.
– Type 2 error – being too strict. By looking to see whether our results are significant at a strict level (e.g. 0.01 or 1%), we run the risk of rejecting our experimental hypothesis when it is in fact correct.

How to choose a statistical test:
You need to know the type of data (nominal, ordinal or interval), whether you are looking for a difference OR a relationship, and what type of experimental design was used. Nominal data is data which is put into categories (or named); ordinal data is data which can be put in order (e.g. 1st, 2nd, 3rd); and interval data is obtained when there are equal measurements on a scale, e.g. cm or seconds.
– Mann-Whitney – if you have an independent groups design, are looking for a difference between two sets of data, and have ordinal/interval data.
– Wilcoxon – if you have a repeated measures design, are looking for a difference between two sets of data, and have ordinal/interval data.
– Spearman's Rho – if you are looking for a relationship (correlation) between two variables and have ordinal/interval data.
– Chi-Square – if you have nominal data.

Analyzing and interpreting qualitative data:
Qualitative data (collected via interviews or open-ended questions in questionnaires) is best analyzed using content analysis.
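The test-selection rules above can be written out as a small decision function. This is a hypothetical Python helper of my own naming, just to make the decision procedure explicit:

```python
def choose_test(data_level, looking_for, design=None):
    """Pick a statistical test using the rules in the notes above.

    data_level: 'nominal', 'ordinal' or 'interval'
    looking_for: 'difference' or 'relationship'
    design: 'independent groups' or 'repeated measures'
            (only needed when looking for a difference)
    """
    if data_level == 'nominal':
        return 'Chi-Square'
    if looking_for == 'relationship':
        return "Spearman's Rho"  # correlation, ordinal/interval data
    if design == 'independent groups':
        return 'Mann-Whitney'    # difference, unrelated groups
    if design == 'repeated measures':
        return 'Wilcoxon'        # difference, same participants twice
    raise ValueError('design must be given for a test of difference')
```

For example, `choose_test('ordinal', 'difference', 'independent groups')` returns `'Mann-Whitney'`, matching the first rule in the list.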
There are many different forms of content analysis – one of the most straightforward being Interpretative Phenomenological Analysis (IPA). It goes like so: transcribe the data into written form – read and re-read the data – code or organize the data into key themes (emergent themes) – arrange these into groups (key themes) – reflect on what the data tells us.

How to report a psychological investigation:
A published psychological report contains the following sections (in order) – Title, Abstract, Introduction, Method, Results, Discussion, References and Appendices.

Writing a Big One! (10/12 marker)
What to include if asked to write up a methods section:
– Hypothesis: stating a directional or non-directional hypothesis would be fine, but you may also wish to include a null hypothesis (not strictly necessary, but a nice addition if you want an A!).
– Design: Lab/Field/Natural Experiment; Independent Groups/Repeated Measures/Matched Pairs; the IV and DV; control of any extraneous variables; dealing with any ethical issues.
– Participants: sample size, sampling method, control groups.
– Materials: this will vary hugely but may include items such as a questionnaire, paper, pens, a stopwatch, consent forms, standardized instructions etc.
– Procedure: a step-by-step guide detailed enough to allow for replication – duration, location, details of who/how many people recorded the data, and how the data was recorded.

What to include if asked to write up a results section:
– A clearly labeled table of the results.
– A well labeled bar chart/histogram/scattergram of the data.
– An explanation of which statistical test was used and why.
– An explanation and justification of the level of significance the results were compared to.
– Whether the hypothesis/hypotheses can be accepted or rejected based on the results of the statistical test.
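As a worked sketch of a results section, the Python snippet below builds a small descriptives table and applies the significance decision rule. The scores and the p-value are invented placeholders for illustration only, not computed or real results:

```python
from statistics import mean, median

# Hypothetical scores from two conditions (invented data):
tv_group = [12, 15, 14, 10, 13]     # aggression scores, TV condition
no_tv_group = [8, 9, 11, 7, 10]     # aggression scores, no-TV condition

# Descriptive statistics for the results table:
table = {
    'TV group':    {'mean': mean(tv_group),    'median': median(tv_group)},
    'No-TV group': {'mean': mean(no_tv_group), 'median': median(no_tv_group)},
}

# Suppose a Mann-Whitney test (independent groups, ordinal/interval data)
# returned this p-value -- a placeholder, not a computed result:
p_value = 0.03

alpha = 0.05                    # conventional 5% significance level
significant = p_value < alpha   # True -> reject the null hypothesis
```

The final comparison mirrors the write-up rule above: state which test was used and why, justify the significance level, then accept or reject the hypothesis accordingly.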