Empirical research is a method of gathering data about a phenomenon. This section considers what the empirical method means and surveys the types and methods of empirical knowledge.

The methods of empirical research in science and technology include, among others, observation, comparison, measurement, and experiment.

Observation is understood as the systematic and purposeful perception of an object that interests us for some reason: things, phenomena, properties, states, aspects of a whole, of both material and ideal nature.

This is the simplest method, which, as a rule, acts as part of other empirical methods, although in a number of sciences it acts independently or as the main one (as in weather observation, in observational astronomy, etc.). The invention of the telescope allowed man to extend observation into the previously inaccessible region of the megaworld; the creation of the microscope marked an intrusion into the microworld. The X-ray apparatus, the radar, the ultrasound generator, and many other technical means of observation have led to an unprecedented increase in the scientific and practical value of this research method. There are also methods and techniques of self-observation and self-monitoring (in psychology, medicine, physical culture and sports, etc.).

In the theory of knowledge, the very concept of observation generally appears in the form of the concept of "contemplation"; it is associated with the categories of the subject's activity and agency.

In order to be fruitful and productive, observation must satisfy the following requirements:

- It must be deliberate, that is, carried out to solve quite specific problems within the framework of the general goal (or goals) of scientific activity and practice.

- It must be planned, that is, consist of observations that follow a certain plan or scheme arising from the nature of the object, as well as from the goals and objectives of the study.

- It must be purposeful, that is, fix the observer's attention only on the objects of interest and not dwell on those that fall outside the tasks of observation. Observation aimed at perceiving individual details, sides, aspects, and parts of an object is called fixating; observation that covers the whole, subject to repeated return, is called fluctuating. The combination of these types of observation ultimately gives a complete picture of the object.

- It must be active, that is, the observer purposefully searches among a certain set of objects for those needed for his tasks and considers the individual properties and aspects of those objects that interest him, relying on his own stock of knowledge, experience, and skills.

- It must be systematic, that is, the observer conducts the observation continuously rather than randomly and sporadically (as in simple contemplation), according to a scheme thought out in advance, under varied or strictly specified conditions.

Observation as a method of scientific knowledge and practice gives us facts in the form of a set of empirical statements about objects. These facts form the primary information about the objects of knowledge and study. Note that in reality itself there are no facts: reality simply exists. Facts are in people's heads. The description of scientific facts occurs on the basis of a certain scientific language, ideas, pictures of the world, theories, hypotheses, and models. It is these that determine the primary schematization of the representation of a given object. It is precisely under such conditions that the "object of science" arises (which should not be confused with the object of reality itself, since the former is a theoretical description of the latter!).

Many scientists have deliberately cultivated their ability to observe, that is, their observational skill. Charles Darwin said that he owed his success to having intensively developed this quality in himself.

Comparison is one of the most common and universal methods of cognition. The famous aphorism "everything is known in comparison" is the best proof of this. Comparison is the establishment of similarities (identities) and differences between objects and phenomena of various kinds, between their aspects, and so on; in general, between objects of study. As a result of comparison, something common is established that is inherent in two or more objects, either at a given moment or in their history. In the sciences of a historical character, comparison was developed into the main method of research, which came to be called the comparative-historical method. Revealing what is common and recurrent in phenomena is, as we know, a step on the way toward knowledge of the lawful.

In order for a comparison to be fruitful, it must satisfy two basic requirements: only such sides and aspects of objects, or objects as a whole, between which there is an objective commonality should be compared; and the comparison should be based on the most important features, those essential for the given research or other task. Comparison on non-essential grounds can only lead to misconceptions and errors. In this regard, we must be careful with conclusions "by analogy". The French even say that "comparison is not proof!"

Objects of interest to a researcher, engineer, or designer can be compared either directly or indirectly, through a third object. In the first case, qualitative assessments are obtained: more - less, lighter - darker, higher - lower, closer - farther, etc. True, even here the simplest quantitative characteristics can be obtained: "twice as high", "twice as heavy", etc. When a third object takes part in the role of a standard, measure, or scale, especially valuable and more accurate quantitative characteristics are obtained. Such a comparison through a mediating object is called measurement. Comparison also prepares the ground for a number of theoretical methods. It itself often relies on inferences by analogy, which we will discuss later.

Measurement has historically evolved from observation and comparison. However, unlike simple comparison, it is more efficient and accurate. Modern natural science, which began with Leonardo da Vinci, Galileo, and Newton, owes its flourishing to the use of measurement. It was Galileo who proclaimed the principle of a quantitative approach to phenomena, according to which the description of physical phenomena should be based on quantities that have a quantitative measure: number. He said that the book of nature is written in the language of mathematics. Engineering, design, and construction continue the same line in their methods. Here we will consider measurement, in contrast to other authors who combine measurement with experiment, as an independent method.

Measurement is a procedure for determining the numerical value of some characteristic of an object by comparing it with a unit of measurement accepted as a standard by a given researcher or by all scientists and practitioners. As is well known, there are international and national units for measuring the main characteristics of various classes of objects: the hour, meter, gram, volt, bit, etc., as well as the day, pood, pound, verst, mile, etc. Measurement implies the presence of the following basic elements: an object of measurement; a unit of measurement, that is, a scale, measure, or standard; a measuring device; a measurement method; and an observer.

Measurements are either direct or indirect. With direct measurement, the result is obtained directly from the measurement process itself (for example, using measures of length, time, weight, etc.). With indirect measurement, the required value is determined mathematically on the basis of other values obtained earlier by direct measurement. This is how one obtains, for example, the specific gravity of a substance, the area and volume of regularly shaped bodies, the speed and acceleration of a body, power, etc.
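As a minimal sketch (added here for illustration, not taken from the original text), an indirect measurement can be expressed as a computation over directly measured quantities; the figures and the function name below are hypothetical:

```python
# Minimal sketch: an indirect measurement derived from direct ones.
# The mass and edge length stand for values read directly from a balance
# and a ruler; the density is then obtained by computation, not by reading.

def density_of_cube(mass_kg: float, edge_m: float) -> float:
    """Indirectly measured density = directly measured mass / computed volume."""
    volume_m3 = edge_m ** 3          # volume of a regularly shaped body (a cube)
    return mass_kg / volume_m3       # kilograms per cubic metre

if __name__ == "__main__":
    # Hypothetical direct readings: a 2.70 kg aluminium cube with 0.10 m edges.
    print(f"density ≈ {density_of_cube(2.70, 0.10):.1f} kg/m^3")
```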

Measurement allows empirical laws and fundamental world constants to be found and formulated. In this regard, it can serve as a source for the formation of entire scientific theories. Thus, Tycho Brahe's long-term measurements of the motion of the planets later allowed Kepler to create generalizations in the form of the well-known three empirical laws of planetary motion. The measurement of atomic weights was one of the foundations for Mendeleev's formulation of his famous periodic law in chemistry, and so on. Measurement provides not only accurate quantitative information about reality but also allows new qualitative considerations to be introduced into theory. This is what happened, in the end, with Michelson's measurement of the speed of light in the course of the development of Einstein's theory of relativity. The examples could be continued.
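For instance, the regularity Kepler extracted from Brahe's measurements can be written compactly (a standard formulation, added here only for illustration): his third law states that for any two planets
\[
\frac{T_1^2}{T_2^2} = \frac{a_1^3}{a_2^3},
\]
where \(T\) is the orbital period and \(a\) is the semi-major axis of the planet's orbit, a purely empirical relation later explained by Newton's theory of gravitation.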

The most important indicator of the value of a measurement is its accuracy. Thanks to it, facts can be discovered that are not consistent with current theories. At one time, for example, the deviation of the motion of Mercury's perihelion from the calculated value (that is, from the one consistent with the laws of Kepler and Newton) by about 43 seconds of arc per century could be explained only by creating a new, relativistic picture of the world in the general theory of relativity.

The accuracy of measurements depends on the available instruments, their capabilities and quality, on the methods used, and on the training of the researcher himself. Measurements are often costly, often take a long time to prepare, involve many people, and the result may be either null or inconclusive. Often, researchers are not ready for the results obtained, because they share a certain concept or theory that cannot accommodate the result. Thus, at the beginning of the 20th century, the scientist Landolt tested the law of conservation of the weight of substances in chemistry very accurately and became convinced of its validity. If his technique had been improved (and the accuracy increased by two to three orders of magnitude), it would have been possible to detect the well-known Einstein relation between mass and energy: E = mc². But would that have been convincing for the scientific world of the time? Hardly! Science was not yet ready for it. In the 20th century, when the English physicist F. Aston confirmed Einstein's theoretical conclusion by determining the masses of radioactive isotopes from the deflection of an ion beam, this was perceived in science as a natural result.
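To illustrate the scale involved (an added numerical example, not part of the original argument): a mass defect of only one gram corresponds to
\[
E = mc^2 = (10^{-3}\,\text{kg}) \times (3 \times 10^{8}\,\text{m/s})^2 = 9 \times 10^{13}\,\text{J},
\]
so, conversely, the heat released in ordinary chemical reactions corresponds to mass changes so small that they lay beyond the precision of the balances of Landolt's era.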

Keep in mind that there are definite requirements for the level of accuracy. It must correspond to the nature of the objects and to the requirements of the cognitive, design, or engineering task. Thus, in engineering and construction one constantly deals with measuring mass (that is, weight), length (size), and so on. But in most cases high precision is not required here; indeed, it would look simply ridiculous if, say, the weight of a supporting column for a building were checked to thousandths or even smaller fractions of a gram! There is also the problem of measuring mass material subject to random deviations, as happens in large aggregates. Similar phenomena are typical of microworld objects and of biological, social, economic, and other such objects. Here one looks for statistical averages and applies methods specially oriented toward processing randomness and its distributions, in the form of probabilistic methods, and so on.

To eliminate random and systematic measurement errors, and to identify errors and mistakes associated with the nature of the instruments and of the observer (a human being), a special mathematical theory of errors has been developed.
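A minimal sketch of how the theory of errors treats a series of repeated measurements (the readings below are invented for illustration): the best estimate is the arithmetic mean, and the random error is characterized by the standard error of the mean.

```python
# Repeated measurements: report the mean and its standard error.
import statistics

readings = [9.81, 9.79, 9.83, 9.80, 9.82]    # e.g. repeated measurements of g, in m/s^2

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)            # sample standard deviation (spread due to random error)
std_error = stdev / len(readings) ** 0.5      # standard error of the mean

print(f"result: {mean:.3f} ± {std_error:.3f} m/s^2")
```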

With the development of technology, measurement methods for fast processes, aggressive environments, and other conditions where the presence of an observer is excluded acquired particular importance in the 20th century. The methods of automatic measurement and electrometry, as well as computer processing of information and control of measurement processes, came to the rescue here. An outstanding role in their development was played by the work of scientists at the Novosibirsk Institute of Automation and Electrometry of the Siberian Branch of the Russian Academy of Sciences, as well as at NNSTU (NETI). These were world-class results.

Measurement, along with observation and comparison, is widely used at the empirical level of cognition and of human activity in general; it forms part of the most developed, complex, and significant method: the experimental one.

An experiment is understood as a method of studying and transforming objects in which the researcher actively influences them by creating the artificial conditions necessary to reveal the properties, characteristics, and aspects of interest, consciously changing the course of natural processes while regulating, measuring, and observing them. The main means of creating such conditions are various instruments and artificial devices, which we will discuss below. The experiment is the most complex, comprehensive, and effective method of empirical knowledge and of transforming objects of various kinds. But its essence lies not in complexity, but in purposefulness, premeditation, and intervention, by means of regulation and control, in the processes and states of objects being studied and transformed.

Galileo is considered the founder of experimental science and of the experimental method. Experience as the main path of natural science was first singled out at the end of the 16th and beginning of the 17th centuries by the English philosopher Francis Bacon. Experience is likewise the main path for engineering and technology.

A distinguishing feature of the experiment is the possibility of studying and transforming an object in a relatively pure form, when all the side factors obscuring the essence of the matter are almost entirely eliminated. This makes it possible to study objects of reality under extreme conditions, that is, at ultra-low and ultra-high temperatures, pressures and energies, process rates, electric and magnetic field strengths, interaction energies, and so on.

Under these conditions, one can obtain unexpected and surprising properties of ordinary objects and, thereby, penetrate deeper into their essence and transformation mechanisms (extreme experiment and analysis).

Examples of phenomena discovered under extreme conditions are superfluidity and superconductivity at low temperatures. The most important advantage of the experiment is its repeatability: observations, measurements, and tests of the properties of objects are carried out repeatedly under varying conditions in order to increase the accuracy, reliability, and practical significance of previously obtained results and to make sure that a new phenomenon exists at all.

An experiment is called for in the following situations:

- when one tries to discover previously unknown properties and characteristics of an object (a research experiment);

- when one checks the correctness of certain theoretical propositions, conclusions, and hypotheses (a test experiment for a theory);

- when one checks the correctness of previously performed experiments (a verification experiment);

- an educational demonstration experiment.

Any of these types of experiment can be carried out either directly with the object under examination or with its substitute: models of various kinds. Experiments of the first type are called full-scale, those of the second, model (simulation) experiments. Examples of experiments of the second type are studies of the hypothetical primary atmosphere of the Earth on models made from a mixture of gases and water vapor. The experiments of Miller and Abelson confirmed the possibility of organic formations and compounds arising during electrical discharges in a model of the primary atmosphere, and this, in turn, became a test of the Oparin-Haldane theory of the origin of life. Another example is simulation experiments on computers, which are becoming more and more common in all sciences. In this regard, physicists today speak of the emergence of "computational physics" (the operation of a computer is based on mathematical programs and computational operations).

An advantage of such experiments is the possibility of studying objects over a wider range of conditions than the original allows, which is especially noticeable in medicine, where experiments that would endanger human health cannot be conducted. In that case one resorts to living and non-living models that repeat or imitate the features of a person and his organs. Experiments can be carried out both on real physical and informational objects and on their ideal copies; in the latter case we have a thought experiment, including the computational experiment, as an ideal form of a real experiment (computer simulation of an experiment).

Currently, increasing attention is being paid to sociological experiments. But there are features here that limit the possibilities of such experiments in accordance with the laws and principles of humanity reflected in UN concepts and agreements and in international law. Thus, no one except criminals would plan experimental wars, epidemics, and the like in order to study their consequences. In this regard, scenarios of a nuclear missile war and of its consequences in the form of a "nuclear winter" were played out on computers in our country and in the United States. The conclusion from this experiment is that a nuclear war would inevitably bring the death of all mankind and of all life on Earth. The importance of economic experiments is great, but here too the irresponsibility and political engagement of politicians can lead, and does lead, to disastrous results.

Observations, measurements, and experiments rely mainly on various instruments. What is an instrument in terms of its role in research? In a broad sense, instruments are understood as artificial, technical means and devices of various kinds that allow us to study any phenomenon, property, state, or characteristic of interest from a quantitative and/or qualitative side, as well as to create strictly defined conditions for their detection, realization, and regulation; they are devices that allow observation and measurement to be conducted at the same time.

It is equally important to choose a reference system, or to create one specially in the device. Reference systems are understood as objects that are mentally taken as initial and basic and as physically at rest, motionless. This is seen most clearly when measurement is made using different reading scales. In astronomical observations these are the Earth, the Sun, other bodies, the (conditionally) fixed stars, and so on. Physicists call "laboratory" the frame of reference attached to the object that coincides with the place of observation and measurement in the space-time sense. In the device itself, the reference system is an important part of the measuring instrument, conventionally calibrated on a reference scale, where the observer records, for example, the deviation of a pointer or of a light signal from the beginning of the scale. In digital measuring systems there is still a reference point known to the observer, based on knowledge of the features of the countable set of measurement units used. Simple and understandable scales are those of rulers, of clocks with a dial, and of most electrical and thermal measuring instruments.

In the classical period of science, among the requirements for instruments were, firstly, sensitivity to the influence of an external measurable factor for measuring and regulating experimental conditions; secondly, the so-called "resolution" - that is, the limits of accuracy and maintenance of specified conditions for the process under study in an experimental device.

At the same time, it was tacitly assumed that, in the course of the progress of science, all of these could be improved and increased. In the 20th century, thanks to the development of the physics of the microworld, it was found that there is a lower limit to the divisibility of matter and field (quanta, etc.), that there is a smallest electric charge, and so on. All this caused a revision of the previous requirements and drew special attention to the systems of physical and other units familiar to everyone from the school physics course.

An important condition for the objectivity of describing objects was also considered to be the fundamental possibility of abstracting from frames of reference, either by choosing a so-called "natural frame of reference" or by discovering properties of objects that do not depend on the choice of frame of reference. In science these are called "invariants". In nature itself there are not so many such invariants: the weight of the hydrogen atom (which became a measure, a unit for measuring the weights of other chemical atoms), the electric charge, the so-called "action" in mechanics and physics (its dimension is energy × time), Planck's quantum of action (in quantum mechanics), the gravitational constant, the speed of light, and so on. At the turn of the 19th and 20th centuries science discovered seemingly paradoxical things: mass, length, and time are relative, depending on the speed of motion of particles of matter and fields and, of course, on the position of the observer in the frame of reference. In the special theory of relativity a special invariant was found as a result: the "four-dimensional interval".
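For reference (a standard formula added here for illustration), the four-dimensional interval of special relativity is
\[
s^2 = c^2 \Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2,
\]
which has the same value in all inertial frames of reference, even though the time and space separations \(\Delta t, \Delta x, \Delta y, \Delta z\) taken separately do not.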

The importance and role of studies of reference systems and invariants kept growing throughout the 20th century, especially in the study of extreme conditions and of the nature and speed of processes, such as ultra-high energies, low and ultra-low temperatures, fast processes, and so on. The problem of measurement accuracy also remains important. All instruments used in science and technology can be divided into observational, measuring, and experimental ones. There are several types and subtypes according to their purpose and functions in research:

1. Measuring devices of various kinds, with two subtypes:

a) direct measurement (rulers, measuring vessels, etc.);

b) indirect, mediated measurement (for example, pyrometers, which measure a body's temperature through the measurement of its radiation energy; strain gauges and sensors, which measure pressure through electrical processes in the device itself; etc.).

2. Devices that strengthen the natural organs of a person without changing the essence and nature of the observed and measured characteristics. These are optical devices (from eyeglasses to the telescope), many acoustic devices, etc.

3. Devices that transform natural processes and phenomena from one kind into another that is accessible to the observer and/or to his observational and measuring devices. Such are the X-ray machine, scintillation sensors, etc.

4. Experimental instruments and devices, as well as their systems, which include observational and measuring instruments as an integral part. The range of such devices extends up to giant particle accelerators like the one at Serpukhov. In them, processes and objects of various kinds are relatively isolated from the environment, regulated and controlled, and phenomena are singled out in their purest form (that is, without other extraneous phenomena and processes, interference, disturbing factors, etc.).

5. Demonstration devices, which serve to visually demonstrate various properties, phenomena, and regularities of various kinds during teaching. These also include test benches and simulators of various kinds, since they are visual and often imitate certain phenomena, "deceiving" the students, as it were.

Instruments and devices can also be divided into: a) those intended for research purposes (the main concern for us here) and b) those intended for mass consumer purposes. Progress in instrument-making is the concern not only of scientists but, first of all, of designers and instrument engineers.

One can also single out model devices, continuations, as it were, of all the previous ones in the form of their substitutes, as well as reduced copies and models of real devices and of natural objects. Examples of models of the first kind are cybernetic and computer simulations of real objects, which allow one to study and design real objects, often across a wide range of somewhat similar systems (in control and communications, in the design of systems and networks of various kinds, in CAD). Examples of models of the second kind are physical models of a bridge, an aircraft, a dam, a beam, a machine and its components, or any device.

In a broad sense, a device is not only some artificial construction; it can also be the environment in which some process takes place. A computer can act as such an environment as well; in that case we speak of a computational experiment (an experiment operating with numbers).

The computational experiment as a method has a great future, since the experimenter often deals with multifactorial and collective processes where huge statistics are needed. The experimenter also deals with aggressive environments and with processes dangerous for humans and for living things in general (in connection with the latter there arise the ecological problems of scientific and engineering experiments).
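As a minimal sketch of what a computational experiment can look like (an added illustration, not taken from the original text), here is a Monte Carlo estimate of π, a toy example of the "huge statistics" such experiments accumulate:

```python
# A toy computational experiment: estimate pi by sampling random points
# in the unit square and counting how many fall inside the quarter circle.
# The estimate improves as the number of trials (the "statistics") grows.
import random

def estimate_pi(trials: int) -> float:
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / trials

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(n, estimate_pi(n))
```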

The development of the physics of the microworld has shown that in our theoretical description of microworld objects we cannot, in principle, rid ourselves of the influence of the device on the sought answer. Moreover, here we cannot, in principle, simultaneously measure the coordinates and momenta of a microparticle, and so on; after measurement, complementary descriptions of the particle's behavior have to be constructed from the readings of different instruments and from non-simultaneous records of the measurement data (W. Heisenberg's uncertainty principle and N. Bohr's principle of complementarity).
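In formula form (a standard statement, added here only for reference), Heisenberg's uncertainty relation reads
\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},
\]
so the more precisely the coordinate \(x\) of a microparticle is fixed by a measurement, the less precisely its momentum \(p\) can be determined, and vice versa.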

Progress in instrumentation often creates a genuine revolution in a particular science. Discoveries made thanks to the invention of the microscope, the telescope, the X-ray machine, the spectroscope and the spectrometer, the creation of satellite laboratories, and the launching of instruments into space are classic examples. Expenditures on instruments and experiments in many research institutes often make up the lion's share of their budgets. Today there are many examples where experiments are beyond the means of entire, rather large countries, which therefore turn to scientific cooperation (as at CERN in Switzerland, in space programs, etc.).

In the course of the development of science, the role of instruments is often distorted and exaggerated. Thus in philosophy, in connection with the peculiarities of experiment in the microworld mentioned just above, the idea arose that in this domain all our knowledge is entirely of instrumental origin. The device, as if continuing the subject of knowledge, interferes with the objective course of events. Hence the conclusion is drawn that all our knowledge of microworld objects is subjective, being of instrumental origin. As a result, a whole philosophical current arose in 20th-century science: instrumental idealism, or operationalism (P. Bridgman). Criticism followed in response, of course, but such an idea is still encountered among scientists. In many ways it arose from an underestimation of theoretical knowledge and cognition and of their capabilities.

Observation. Observation is a descriptive psychological research method consisting in the purposeful and organized perception and registration of the behavior of the object under study. Together with introspection, observation is considered the oldest psychological method. Scientific observation came to be widely used in those areas of scientific knowledge where recording the characteristics of human behavior under various conditions is of special significance, and also where it is impossible or impermissible to interfere with the natural course of a process.

Observation can be carried out both directly by the researcher and by means of observation devices that record its results. These include audio, photo, and video equipment, as well as observation charts.

Observation has several variants.
External observation is a way of collecting data about a person's psychology and behavior by direct observation of him from the outside.
Internal observation, or self-observation, is used when a research psychologist sets himself the task of studying a phenomenon of interest in the form in which it is directly represented in his own mind. Internally perceiving the corresponding phenomenon, the psychologist, as it were, observes it (for example, his own images, feelings, thoughts, experiences) or uses similar data communicated to him by other people who conduct introspection themselves on his instructions.

Free observation has no predetermined framework, program, or procedure for its conduct. It can change the subject or object of observation, and its character, in the course of the observation itself, depending on the wishes of the observer.

Standardized observation, in contrast, is predetermined and clearly limited in terms of what is observed. It is carried out according to a certain pre-thought-out program and strictly follows it, regardless of what happens in the process of observation with the object or the observer himself.

In participant (included) observation, the researcher acts as a direct participant in the process whose course he is observing. Another variant of participant observation: when investigating people's relationships, the experimenter may himself engage in communication with the people being observed, without ceasing to observe the relationships developing between himself and these people.

Third-party observation, unlike included observation, does not imply the observer's personal participation in the process he is studying.

Each of these types of observation has its own characteristics and is used where it can give the most reliable results. External observation, for example, is less subjective than self-observation, and is usually used where the features to be observed can be easily isolated and evaluated from the outside. Internal observation is indispensable and often acts as the only available method for collecting psychological data in cases where there are no reliable external signs of the phenomenon of interest to the researcher.

It is advisable to carry out free observation in those cases when it is impossible to determine exactly what should be observed, when the signs of the phenomenon under study and its probable course are not known to the researcher in advance. Standardized observation, on the contrary, is best used when the researcher has an accurate and fairly complete list of features related to the phenomenon under study.

Involved observation is useful when a psychologist can give a correct assessment of a phenomenon only by experiencing it for himself. However, if, under the influence of the researcher's personal participation, his perception and understanding of the event can be distorted, then it is better to turn to third-party observation, the use of which allows you to more objectively judge what is being observed.

By its systematicity, observation is divided into:
- Non-systematic observation, in which it is necessary to create a generalized picture of the behavior of an individual or a group of individuals under certain conditions, without the aim of fixing causal dependencies or giving strict descriptions of phenomena.
- Systematic observation, carried out according to a definite plan, in which the researcher registers features of behavior and classifies the conditions of the external environment.

Non-systematic observation is carried out in the course of field research; its result is the creation of a generalized picture of the behavior of an individual or a group under certain conditions. Systematic observation is carried out according to a specific plan; its result is the registration of behavioral features (variables) and the classification of environmental conditions.

By the objects being recorded, observation can be:
- Total (continuous) observation: the researcher tries to fix all features of behavior.
- Selective observation: the researcher fixes only certain types of behavioral acts or parameters of behavior.

Observation has a number of advantages:
- Observation allows you to directly capture and fix the acts of behavior.
- Observation allows you to simultaneously capture the behavior of a number of people in relation to each other or to certain tasks, objects, etc.
- Observation allows you to conduct a study regardless of the readiness of the observed subjects.
- Observation allows you to achieve multidimensional coverage, that is, fixation on several parameters at once - for example, verbal and non-verbal behavior.
- Efficiency of obtaining information.
- Relative cheapness of the method.

At the same time, however, the method also has disadvantages. The disadvantages of observation include:
- Numerous irrelevant, interfering factors that can affect the results of observation:
- mood of the observer;
- the social position of the observer in relation to the observed;
- observer bias;
- complexity of observed situations;
- effect of the first impression;
- fatigue of the observer and the observed;
- estimation errors (“halo effect”, “leniency effect”, averaging error, modeling errors, contrast error).
- The one-time occurrence of the observed circumstances, leading to the impossibility of making a generalizing conclusion based on single observed facts.
- The need to classify the results of observation.
- Low representativeness for large populations.
- Difficulty in maintaining operational validity.

Questioning. Questioning, like observation, is one of the most common research methods in psychology. Questionnaire surveys are usually conducted with the use of observational data, which (along with data obtained by other research methods) are used in designing the questionnaires.

There are three main types of questionnaires used in psychology:
- questionnaires composed of direct questions and aimed at identifying consciously perceived qualities of the subjects;
- questionnaires of a selective type, where several ready-made answers are offered for each question; the subject's task is to choose the most suitable one;
- questionnaire-scales, in answering which the subject must not only choose the most correct of the ready-made answers but also analyze (evaluate in points) the correctness of the proposed answers.

Questionnaire-scales are the most formalized type of questionnaire, since they allow a more precise quantitative analysis of the survey data.
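A minimal sketch of the kind of quantitative scoring a questionnaire-scale allows (the items, the 5-point scale, and the reverse-keyed item below are hypothetical, chosen only to show how point evaluations become a comparable numeric score):

```python
# Scoring one subject's answers on a hypothetical point-scale questionnaire.
responses = {          # answers on a 1..5 point scale
    "item_1": 4,
    "item_2": 2,
    "item_3": 5,       # reverse-keyed: agreement indicates the opposite pole
}
REVERSE_KEYED = {"item_3"}
SCALE_MAX = 5

def scale_score(answers: dict[str, int]) -> float:
    total = 0
    for item, points in answers.items():
        if item in REVERSE_KEYED:
            points = SCALE_MAX + 1 - points   # 5 -> 1, 4 -> 2, ...
        total += points
    return total / len(answers)               # mean item score, comparable across subjects

print(f"scale score: {scale_score(responses):.2f}")
```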

The indisputable advantage of the questionnaire method is the rapid receipt of mass material.

The disadvantage of the questionnaire method is that, as a rule, it reveals only the topmost layer of factors: material gathered with questionnaires (composed of direct questions to the subjects) cannot give the researcher an idea of many regularities and causal dependencies in psychology. Questioning is a means of first orientation, of preliminary reconnaissance. To compensate for these shortcomings, the use of this method should be combined with more substantive research methods, with repeated surveys, with disguising the true purposes of the surveys from the subjects, and so on.

Conversation is a method of studying human behavior that is specific to psychology, since in other natural sciences communication between the subject and the object of research is impossible.

The method of conversation is a dialogue between two people, during which one person reveals the psychological characteristics of the other.

The conversation is included as an additional method in the structure of the experiment: at the first stage, when the researcher collects primary information about the subject, gives him instructions, motivates him, and so on, and at the last stage, in the form of a post-experimental interview.

Compliance with all the necessary conditions for conducting a conversation, including the collection of preliminary information about the subjects, makes this method a very effective means of psychological research. It is therefore desirable that the conversation be conducted with account taken of data obtained by methods such as observation and questionnaires. In that case its purpose may include verifying the preliminary conclusions drawn from the results of psychological analysis obtained with those methods of primary orientation in the psychological characteristics of the subjects under study.

A survey is a method in which a person answers a series of questions put to him. There are several variants of the survey, and each of them has its own advantages and disadvantages.

An oral survey is used in cases where it is desirable to observe the behavior and reactions of the person answering the questions. This type of survey allows one to penetrate deeper into human psychology than a written one, but it requires special preparation and training and, as a rule, a large investment of time in the research. The answers of subjects obtained in an oral survey depend significantly on the personality of the person conducting the survey, on the individual characteristics of the one answering the questions, and on the behavior of both persons in the survey situation.

A written survey allows a larger number of people to be reached. Its most common form is the questionnaire. Its disadvantage is that, when using a questionnaire, it is impossible to take into account in advance the respondent's reactions to the content of the questions and to change the questions accordingly.

Free survey - a kind of oral or written survey, in which the list of questions asked and possible answers to them is not limited in advance to a certain framework. A survey of this type allows you to flexibly change the tactics of research, the content of the questions asked, and receive non-standard answers to them.

Standardized survey - questions and the nature of possible answers to them are predetermined and usually limited to fairly narrow limits, which makes it more economical in time and material costs than a free survey.

Tests are specialized methods of psychodiagnostic examination by means of which one can obtain an accurate quantitative or qualitative characterization of the phenomenon under study. Tests differ from other research methods in that they presuppose a clear procedure for collecting and processing primary data, as well as a distinctive subsequent interpretation of the results. With the help of tests, the psychology of different people can be studied and compared, and differentiated and comparable assessments can be given.

The test questionnaire is based on a system of pre-designed, carefully selected and tested questions in terms of their validity and reliability, the answers to which can be used to judge the psychological qualities of the subjects.

The test task involves assessing the psychology and behavior of a person based on what he does. In tests of this type, the subject is offered a series of special tasks, the results of which are used to judge the presence or absence and the degree of development of the quality being studied.

The test questionnaire and the test task are applicable to people of different ages, belonging to different cultures, with different levels of education, different professions, and unequal life experience. This is their positive side.

The disadvantage of tests is that, when they are used, the subject can consciously influence the results at will, especially if he knows in advance how the test works and how psychology and behavior will be assessed from its results. In addition, the test questionnaire and the test task are not applicable in cases where the psychological properties and characteristics under study are ones of whose existence the subject cannot be completely sure, is not aware of, or consciously does not want to admit in himself. Such characteristics are, for example, many negative personal qualities and motives of behavior. In these cases a third type of test is usually used: the projective test.

Projective tests. Projective tests are based on the projection mechanism, according to which a person tends to attribute unconscious personal qualities, especially shortcomings, to other people. Projective tests are designed to study the psychological and behavioral characteristics of people that evoke a negative attitude. Using tests of this kind, the psychology of the subject is judged on the basis of how he perceives and evaluates situations and the psychology and behavior of people, and what personal properties and motives, positive or negative, he ascribes to them.

Using the projective test, the psychologist introduces the subject into an imaginary, plot-indefinite situation that is subject to arbitrary interpretation.

Projective-type tests impose increased requirements on the level of education and intellectual maturity of the subjects, and this is the main practical limitation of their applicability. In addition, such tests require a lot of special training and high professional qualifications on the part of the psychologist himself.

Experiment. The specificity of the experiment as a method of psychological research is that it purposefully and deliberately creates an artificial situation in which the property under study is distinguished, manifested, and evaluated in the best way. The main advantage of the experiment is that it allows conclusions about the cause-and-effect relations of the phenomenon under study with other phenomena to be drawn more reliably than all other methods do, and the origin and development of the phenomenon to be explained scientifically.

There are two main types of experiment: natural and laboratory.

A natural experiment is organized and carried out in ordinary life conditions, where the experimenter practically does not interfere in the course of events, fixing them in the form in which they unfold on their own.

A laboratory experiment involves creating some artificial situation in which the property under study can be best studied.

The data obtained in a natural experiment correspond best of all to the typical life behavior of an individual and to the real psychology of people, but they are not always accurate, because the experimenter lacks the ability to strictly control the influence of various factors on the property under study. The results of a laboratory experiment, on the contrary, win in accuracy but lose in the degree of naturalness, of correspondence to life.

Modeling as a method is used when studying a phenomenon of interest to a scientist through simple observation, questioning, a test, or an experiment is difficult or impossible because of its complexity or inaccessibility. Then one resorts to creating an artificial model of the phenomenon under study that repeats its main parameters and expected properties. The phenomenon is studied in detail on this model, and conclusions about its nature are drawn.

Models can be technical, logical, mathematical, cybernetic.

A mathematical model is an expression or formula that includes variables and relationships between them, reproducing elements and relationships in the phenomenon under study.

Technical modeling involves the creation of a device or device that, in its action, resembles what is being studied.

Cybernetic modeling is based on the use of concepts from the field of informatics and cybernetics as elements of the model.

Logical modeling is based on the ideas and symbolism used in mathematical logic. The most famous examples of mathematical modeling in psychology are the formulas expressing the laws of Bouguer-Weber, Weber-Fechner, and Stevens. Logical modeling is widely used in studying human thinking and comparing it with the solving of problems by a computer.
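In their standard formulations (added here only for reference), these psychophysical laws read
\[
\frac{\Delta I}{I} = k \quad \text{(Bouguer-Weber)}, \qquad
S = k \ln \frac{I}{I_0} \quad \text{(Weber-Fechner)}, \qquad
S = k \, I^{\,n} \quad \text{(Stevens)},
\]
where \(I\) is the stimulus intensity, \(I_0\) the threshold intensity, \(S\) the magnitude of the sensation, and \(k\) and \(n\) empirical constants.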

In addition to the methods listed above, which are intended for collecting primary information, psychology widely uses various methods and techniques for processing these data, for their logical and mathematical analysis, in order to obtain secondary results, that is, facts and conclusions arising from the interpretation of the processed primary information. For this purpose, in particular, various methods of mathematical statistics are used, without which it is often impossible to obtain reliable information about the phenomena under study, as well as methods of qualitative analysis.
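A minimal sketch of such secondary processing (an added illustration; scipy is assumed to be available and the scores are invented): comparing two groups of primary measurements with an independent-samples t-test.

```python
# Secondary processing with mathematical statistics: do two groups differ?
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11]   # e.g. primary scores under condition A
group_b = [9, 11, 10, 8, 12, 10, 9]      # e.g. primary scores under condition B

t_statistic, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference between the group means is unlikely
# to arise from chance alone; its substantive interpretation remains a matter
# for the qualitative analysis mentioned in the text.
```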

Methods of scientific research are the techniques and means by which scientists obtain reliable information that is then used to build scientific theories and develop practical recommendations.

It is customary to distinguish two main levels of scientific knowledge: empirical and theoretical. This division is due to the fact that the subject can gain knowledge empirically (empirically) and through complex logical operations, that is, theoretically.

The empirical level of knowledge includes:

- observation of phenomena;

- accumulation and selection of facts;

- establishment of connections between them.

The empirical level is the stage of collecting data (facts) about social and natural objects. At this level the object under study is reflected mainly from the side of its external relations and manifestations. Fact-recording activity is central to this level. These tasks are solved with the help of appropriate methods.

The theoretical level of knowledge is associated with the predominance of mental activity, with the comprehension and processing of empirical material. The theoretical level reveals:

- the internal structure and regularities of development of systems and phenomena;

- their interaction and conditionality.

Empirical research (from the Greek empeiria, experience) is "the establishment and generalization of social facts by means of direct or indirect registration of accomplished events characteristic of the social phenomena, objects, and processes under study."

