1. A brief summary of the methodological philosophy you have selected. (Approximately 500 words.)
‘All advances of scientific understanding, at every level, begin with a speculative adventure, an imaginative preconception of what might be true… it is the invention of a possible world or a tiny fraction of that world…’ (Medawar, 1972)
In an ongoing attempt to understand our universe, humankind is in a perpetual state of exploration and discovery of underlying causes. From the beginnings of human consciousness and the development of symbolism we have striven to broaden our understanding and ‘impose some sort of meaning on the world. Through concepts, which come to “express generalisations from particulars”, reality is given sense, order and coherence’ (Cohen et al., 2001, p.13). Fear, suspicion and superstition are gradually being replaced in mankind by knowledge of cause and effect.
The ‘general doctrine’ of positivism was brought into its popular use by the French philosopher Comte in the nineteenth century. Comte was the first to apply the concepts of the scientific method to the examination of social science phenomena. The tenet of logical positivism centres on the ‘method of verification’ of any statement, hypothesis or theory that is advanced (Cohen et al., 2001, p.8; SG, p.80); in other words, on the application of the scientific method.
The method of verification resides within the concept of the scientific method: hypotheses and theories are to be verified and ‘investigated empirically’, utilising scientific methodologies, and the ‘end-product’ must be expressed ‘in laws, or law-like generalizations’ (Cohen et al., 2001, p.8). The methods and tools that had stood science in good stead in the surge of scientific epistemology were to bring order through unifying concepts and ‘impose some sort of meaning on the world’ (Cohen et al., 2001, p.13; see Appendices I-III). Human nature, with its ‘complexity’ and ‘elusive and intangible quality’, while contrasting with the order of the natural world (Cohen et al., 2001, p.9), would yield up its concepts and generalisations through the positivists’ approach.
Pursuing this line of investigation, positivists believed that the ‘science’ of human social life was also based on cause and effect, and that it could thus undermine ‘beliefs and practices that were based solely on superstition or tradition’ and ‘pave the way for substantial social and political progress’ (SG, p.79). The reliance on ‘experimental method’, ‘statistical analysis’, ‘careful measurement of phenomena’, and the search for a ‘causal or statistical relationship among variables’ all indicate a reliance on ‘quantitative data’ (SG, p.79; Cohen et al., 2001, p.16); in other words, the application of empiricism.
However, when applying the ‘tenets of scientific faith’ to the extraordinarily complex concept that is human nature as it impacts on the educational world, we must bear in mind the assumptions that underlie this faith: ‘an assumption of determinism’, that cause and effect are there to be discovered and conceptualised; ‘empiricism’, that the hypothesis must be verifiable by direct experiential observation; and parsimony (Cohen et al., 2001, p.10), or in the words of Einstein: ‘everything should be as simple as possible, but not simpler’. Medawar (1981) likewise speaks of the ‘logical immediacy’ of good hypotheses and concepts, by which he means ‘an explanation of whatever it is that needs to be explained and not an explanation of a great many other phenomena besides…’ (Cohen et al., 2001, p.15).
Of course, the whole tenet of positivism as applied to the social sciences has overarching, inbuilt assumptions of an ontological and epistemological nature: that the ‘very nature or essence of the social phenomena’ conforms to scientific principles (Cohen et al., 2001, p.5) and is therefore discoverable by scientific methods.
Of course, as with all research philosophies, positivism has its critics; these criticisms are addressed in parts 3 and 4 below.
2. An outline of what is involved in structured observation. (Approximately 500 words.)
In the education field, structured (systematic) and unstructured observation are the ‘two main strategies’ used by researchers in the recording of observed phenomena. Structured data originate from the categorisation of activities observed at frequent intervals (‘point sampling’ – Media Guide, p.30; Research Methods in Education, 2001, p.44) to form a table of results representing the frequency of occurrence of the various categories of activity. This process results in numerical or quantitative (QN) data which may then be subjected to empirical statistical analysis (Research Methods in Education, 2001, p.44). Structured observation (SO) studies translate potentially qualitative (QL) data into quantitative data (Media Guide, p.30); the advantage of, for example, frequency data generated in this manner is that the full strength of statistical analysis can be brought to bear on the phenomenon, or variable, under investigation, adding validity and opportunities such as ‘generalisation to concept’ and perhaps even laws (see part 2 above). SO data are essentially qualitative in origin, with all the attendant advantages of richness and depth as discussed in TMA03, but with researcher bias (personal judgment ‘formulated by means of socio-cultural or discursive sources’) minimised (SG, p.135).
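The mechanics of point sampling can be illustrated with a minimal sketch. The category codes and observation data below are hypothetical, invented for illustration only, and are not drawn from any instrument in the course materials:

```python
from collections import Counter

# Hypothetical category codes for a simple observation schedule:
#   T = teacher talk, S = student talk, G = group work, O = off-task
CATEGORIES = ("T", "S", "G", "O")

def frequency_table(samples):
    """Tally point-sampled codes into a frequency table.

    `samples` is the sequence of category codes recorded at each
    fixed-interval observation point during a lesson.
    """
    counts = Counter(samples)
    total = len(samples)
    # Report every category, including those never observed, so that
    # tables from different lessons line up for comparison.
    return {c: (counts[c], counts[c] / total) for c in CATEGORIES}

# One code recorded every 30 seconds over a 5-minute segment:
lesson = ["T", "T", "S", "T", "G", "G", "G", "O", "T", "S"]
table = frequency_table(lesson)
print(table["T"])  # (4, 0.4): teacher talk at 4 of 10 sampling points
```

The resulting table of counts and proportions is exactly the kind of quantitative summary on which the statistical analyses discussed here can then operate.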
‘Structured observational studies’ ‘have been guided by at least some of the assumptions of positivism’ (see part 2 above), and the criticisms of positivism are therefore also criticisms of SO; see parts 3 and 4 for more detail.
By way of contrast, in unstructured observation the researcher does not accumulate empirical, quantitative data but seeks to describe and paint the occurrences in rich descriptions, seeking to capture the voice and language of the observation. The challenge here is to avoid the personal bias of preconceived assumptions that may skew the interpretation of events (Research Methods in Education, 2001, p.44). However, later categorisation of recorded interviews, tests, questionnaires, classroom settings and so forth may be translated into QN data (Data-based exercise 5, Media Guide, pp.30-33; E891 DVD; SG, p.142).
Categorisation presents major challenges in SO:
1. A comprehensive listing of all possibilities is needed, to prevent anomalies at the recording stage and difficulty in the later analysis phase (Research Methods in Education, 2001, p.44; SG, p.141; Wright & Walkuski, 1995, pp.67-70; Junod et al., 2006, pp.100-101).
2. Categories may not be synchronous with those of other research, preventing the cumulation of knowledge (Hargreaves, 2007, p.5; SG, p.141; Wright & Walkuski, 1995, pp.67-70), and may change with time, conceptions and understanding (SG, p.141). It must be noted, however, that the latter is true of all social phenomena studied in any field, and even of the natural world – global warming, for instance. The difficulty arises when temporal aspects (the position in a cycle, for instance a school’s evolution point, or the weekly cycle and so forth (Research Methods in Education, 2001, p.59)), spatial aspects (environment, geography and so forth), and social aspects (for instance the researcher’s cultural and socio-economic constructs (Research Methods in Education, 2001)) are not taken into account when making claims and generalisations.
3. The degree, quality or extent of the category is rarely recorded (SG, p.141).
4. Validity and reliability of data, reflecting the need for accuracy and trustworthiness (SG, p.144): adequate training and testing of coders is imperative (Wright & Walkuski, 1995, p.67; SG, p.143), and consistent, sustained attention over a prolonged period is required (SG, p.143).
5. Difficulty in ‘handling marginal cases’, where categories are not adequate or the unusual occurs (SG, p.144).
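The concern in point 4 with coder training and consistency is commonly checked quantitatively via an inter-coder agreement statistic; Cohen’s kappa is one standard choice (not a technique named in the course materials, and the coder data here are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders beyond chance.

    Both arguments are equal-length sequences of category codes
    assigned to the same sampling points.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of points coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two trained coders classifying the same 10 sampling points:
a = ["T", "T", "S", "T", "G", "G", "G", "O", "T", "S"]
b = ["T", "S", "S", "T", "G", "G", "O", "O", "T", "S"]
print(round(cohens_kappa(a, b), 2))  # prints 0.73
```

A kappa well below 1 on a pilot segment such as this would signal the need for further coder training before the main data collection.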
[Word count: 563]
3. An assessment of the strengths and weaknesses of structured observation from this point of view. (Approximately 1000 words.)
‘There is no automatic connection between positivism and the use of quantitative methods of enquiry’ (SG, p.79), but positivism’s embracing of the scientific method as one of its tools clearly defines the connection, and indeed ‘quantitative research has been shaped by … positivism’ (SG, p.78).
As in the natural sciences, both QN and QL researchers rely on ‘qualitative knowing’ (SG, p.135) or ‘experience’ (Mouly, 1978, quoted in Cohen et al., 2001, p.10; Medawar, 1981, quoted in Cohen et al., 2001, p.15), which helps refine and define the hypothesis being tested. Interestingly, Polanyi (1959) recognised the subjective nature of ‘qualitative knowing’ in both the natural and social sciences, while acknowledging that the validity of claims and ‘conclusions of scientific research’ are not necessarily negated by this factor (SG, p.135). Even in the world of the much-esteemed randomised controlled trials of evidence-based medical research, Hargreaves recognises the idiosyncratic nature of directed research and hypothesising (Hargreaves, 2007, p.6)!
This understanding is fundamental to recognising the strength that SO has brought to educational, and social, research in general: the richness of QL data and qualitative knowing (see part 2), combined with the strength of QN data characteristics that comes from validity, reliability and adherence to the demands of scientific methodology.
To exemplify some of the weaknesses documented in the analysis of categorisation difficulties (see part 2 above), let us examine the Cheffers’ Adaptation of Flanders’ Interaction Analysis System (CAFIAS) and Academic Learning Time – Physical Education (ALT-PE) SO systems and methods (Wright & Walkuski, 1995; Junod et al., 2006). CAFIAS affords the recording of categories of verbal and non-verbal code in each of 6 teacher and 4 student behaviour patterns (Wright & Walkuski, 1995, p.66), but no qualification or quality of the behaviour, and no recording of outcomes for the student, which might provide a measure of the whole focus of education: student learning. Data are therefore ‘skewed’ in that they primarily record teacher behaviour (Wright & Walkuski, 1995, p.67). But CAFIAS does provide important measures which, when subjected to analysis, have been used as a tool in the successful habit-training of teachers (Wright & Walkuski, 1995, pp.66-67). ALT-PE, on the other hand, focuses on student outcomes, but again in terms of behaviour, as an indicator of teaching activity which promotes learning (Wright & Walkuski, 1995, pp.66-67). Neither system takes any account of, or carries any category which might illuminate, the importance of emotion in the learning process – Bloom’s affective domain (Bloom’s taxonomy of affective levels – Anderson et al., 2001, p.47).
However (setting aside the quality of this particular research), as demonstrated by Junod et al. (2006), researchers may, and do, establish categories to best suit the variable under consideration. Controls may be dependently, temporally and spatially paired (paired within the same classroom); and/or independently, temporally and spatially controlled (a control group of the same age and era, but in a different classroom, which may therefore vary ethnographically and receive very different stimuli from the treatment, or targeted, study group). Researchers may even re-brand whole new categories and concepts, such as categorising silent reading as a passive rather than an active, engaged task (Junod et al., 2006, pp.87-104).
This latter point, however, demonstrates one of the major difficulties, and an attendant weakness, of social research and educational research in particular: the non-cumulative nature of research and therefore of the building of knowledge. That is, researchers tend to ignore the possibility, widely used within the sciences as an extremely important tool in the epistemological toolbox, of replication, verification and authentication of others’ research work (Hargreaves, 2007, p.5; Research Methods in Education, 2001, p.138; Schofield, 2007, p.195). The contextual reasons for this deficiency have been given variously and cumulatively as geographical, social, cultural and temporal differences within studies, not to mention the idiosyncrasy of human nature (Cohen et al., 2001, pp.3-9; Gage, 2005, pp.151-180), but… is not the natural world subject to the same inconsistencies: animal behaviour, changing environments, changing climates, seasonal effects, inter- and intra-species differences, and so forth? (see part 4)
Directly utilising positivism in the field of SO means that all the weaknesses and strengths attendant on positivism consequently accrue to SO.
Strengths of applying positivism in SOs:
1. It brings all the strengths of QN methods (see part 2) to the field.
2. Processing of data is relatively fast when compared with transcription; thus sample sizes may be larger and replications more numerous (adding validity and reliability) (Research Methods in Education, 2001, p.238).
3. Control groups, randomised allocation, pairing, correlation, population testing, association, identification of confounding variables, probability, degree of representation within the population tested, reproducibility, a critical and self-critical nature and much, much more; in short: the entire strength of statistical analysis, validity and reliability (Research Methods in Education, 2001, p.238; SG, Part 4).
Weaknesses of applying positivism in SOs:
1. Coders are restricted to recording only those behaviours pre-assigned to categories in the SO (Research Methods in Education, 2001, p.241).
2. The method of isolated point sampling ignores the ‘essential cumulative nature of classroom talk as a continuous, contextualised pattern’ (Research Methods in Education, 2001, p.241).
3. Other research methods not reliant on positivism, such as ethnography and socio-cultural discourse analysis, claim more authentic methodologies, focusing on interrogating culture and the real-life classroom, thus raising ‘critical awareness’ (Research Methods in Education, 2001, p.241). Despite its strength, statistical analysis cannot decipher the building of meaning within a classroom (Research Methods in Education, 2001, p.243); nor can it measure the identification of student misconceptions and the consequent conflict which leads to learning (Driver et al., 2008, pp.60-61), nor the origination of motivation and self-directed learning (Driver et al., 2004, pp.70-71).
4. Hypothesis testing and prior categorisation entail prior assumptions, and this has been recognised as a weakness when compared with purely qualitative research.
5. Difficulties attend true randomisation, representative sample-size selection, avoidance of unintentional bias, systematic error introduced for instance through non-response, confounding variables, assignment to control and treatment groups, minimising background effects, ambiguity of meaning in responses and coding, significance testing, error in ascribing causal agents, assumptions derived from merely associated or correlated variables, and failure to identify the real causal variable when it is hidden or unknown yet correlated with a co-varying, spurious, documented variable (SG, Part 4).
6. And yet the greatest of these is probably that recognised by Hampden-Turner’s (1970) objection to positivism (referenced in Cohen et al., 2001, pp.17-18): that it ignores the all-important ‘human qualities’ and focuses rather on a predictable, repetitious nature which is, by its very nature, alien to the human spirit.
4. A conclusion exploring the implications of the assessment you have carried out for the use of structured observation in educational research. (Approximately 500 words.)
The most limiting weakness of the positivism/SO marriage is the limitation placed on it by categorisation, namely the difficulty in ascribing degrees or quality (although Junod et al. (2006) did reach a compromise by subdividing categories, reassigning a long-standing active engagement (silent reading) to a passive one, and thereby found how important active engagement is to learning for students with ADHD (Junod et al., 2006, pp.87-104)), and also the dehumanising methodologies inherent in this research philosophy.
It appears to me that one very important step missing from educational epistemology is recognisable by comparison with the tables in Appendices I-III (Cohen et al.’s stages in scientific epistemology). The very basic principle is testing and repeating (by others!), and thus validation or rejection by the scientific community. This appears rarely to happen in the educational research world, with each researcher vying to pursue his or her own pet theory. Thus there is no solidarity within the community of learners, ‘accepted’ knowledge is not being accumulated, and practice guidelines emanating from the research are very much muddied by the sheer quantity of conflicting research: the epistemological framework is faulty. Is this because of the arguments focusing on research philosophy? But couldn’t the SO/positivist camp, for instance, group together and form a constructive community of learners, rather than a community made up of ones? (TR)
The real answer to this conundrum, I suspect, may be that presented in the Research Methods handbook, where it is recognised that the vulnerability and exposure of practice and institution to peers is ‘to open oneself and one’s colleagues to self-doubt and criticism’ (Research Methods in Education, 2001, p.138).
But… isn’t this what the scientist faces also? Are social scientists such delicate flowers? Do they not have the courage of their convictions and confidence in their research methodologies and tools? Or are we all prima donnas, unwilling to take the supporting role, wanting rather to star ourselves? Food for thought!
One way or another, a key implication (besides the epistemology) is the recognition of the power of well-derived hypotheses and categories in SO research; various authors speak of an ‘experiential knowing’ which comes from initial, informal qualitative research and analysis: problem solving and awareness. Another key factor is the statistical power available in handling quantitative data to illuminate the true patterns that may form the basis for generalisations and laws.
The answer lies in the combination of these two factors: qualitative and quantitative methods, one informing the other in continuous circles, each leading to an offshoot that forms another circle, the whole (having been independently verified by the learning community!) contributing to the development of generalisations, concepts, and perhaps even laws. Or, of course, leading to rejection, perhaps via the rejection of one’s null hypothesis (!), and back to the drawing board for another spiral, each time building on the previous.
Ethics: not to be forgotten in this assessment is the benefit of the true guidance of ‘absolute ethical standards’, which allow no ‘degree of freedom’ (pardon the pun!) for the ends to justify the means, nor a ‘watered-down’ adherence to principles (Cohen et al., 2001, p.58). Strict adherence to ethical guidelines when researching in the social sciences is crucial and extends the generalisability of findings.
Anderson, L.W. and Krathwohl, D. (eds) (2001) A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
Cohen, L., Manion, L. and Morrison, K. (2001) Research Methods in Education, 5th edition. London: RoutledgeFalmer.
Driver, R., Asoko, H., Leach, J., Mortimer, E. and Scott, P. (2004) ‘Constructing Scientific Knowledge in the Classroom’, in Scanlon, E., Murphy, P., Thomas, J. and Whitelegg, E. (eds) Reconsidering Science Learning. London and New York: RoutledgeFalmer.
Driver, R., Leach, J., Scott, P. and Wood-Robinson, C. (2008) ‘Young People’s Understanding of Science Concepts’, Block 2 Articles, SEH806 Contemporary Issues in Science Learning. Milton Keynes: The Open University.
Hargreaves, D. (2007) ‘Teaching as a research-based profession: possibilities and prospects (The Teacher Training Agency Lecture 1996)’, in Hammersley, M. (ed) Educational Research and Evidence-based Practice. London: Sage in association with The Open University (the Course Reader).
Hartley, J. and Chesworth, K. (2000) ‘Qualitative and Quantitative Methods in Research on Essay Writing: No One Way’, Journal of Further and Higher Education, 24(1).
Junod, E.V., DuPaul, G.J., Jitendra, A.K., Volpe, R.J. and Cleary, K.S. (2006) ‘Classroom observations of students with and without ADHD: differences across types of engagement’, Journal of School Psychology, 44, pp.87-104.
Medawar, P.B. (1972) The Hope of Progress. London: Methuen. Review by Rafe Champion, available at http://www.the-rathouse.com/Medawar_PlutoRepublic.html (last accessed 6 May 2011).
Mouly, G.J. (1978) Educational Research: the Art and Science of Investigation. Boston: Allyn & Bacon.
Schofield, J. (2007) ‘Increasing the generalizability of qualitative research’, in Hammersley, M. (ed) Educational Research and Evidence-based Practice. London: Sage in association with The Open University (the Course Reader).
Wright, S. and Walkuski, J. (1995) ‘The use of systematic observation in physical education’, Teaching and Learning, 16(1), pp.65-71.