A | Scientific Method in Organizational Research

Students of management often complain about “theoretical” or “abstract” approaches to a subject; they argue instead in favor of “relevant” and “applied” approaches. The feeling is that there usually exist two distinct ways to study a topic, and from a managerial standpoint, a focus on application is the preferred way. Serious reflection about this problem may suggest a somewhat different approach, however. Consider the following situation.

As a personnel manager for a medium-sized firm, you have been asked to discover why employee turnover in your firm is so high. Your boss has told you that it is your responsibility to assess this problem and then to offer suggestions aimed at reducing turnover. What will you do? Several possible strategies come to mind:

  • Talk with those who have quit the organization.
  • Talk with those who remain.
  • Talk to the employees’ supervisors.
  • Consult with personnel managers in other companies.
  • Measure job satisfaction.
  • Examine company policies and practices.
  • Examine the jobs where most turnover occurs.

None of these actions will likely be very successful in helping you arrive at sound conclusions, however. Talking with those who have left usually yields a variety of biased responses by those who either want to “get back at” the company or who fear that criticism will negatively affect their chances for future recommendations. Talking with those still employed has similar problems: why should they be candid and jeopardize their jobs? Talking with supervisors will not help if they themselves are the problem. Asking other personnel managers, while comforting, ignores major differences between organizations. Measuring job satisfaction, examining company policies, or examining the jobs themselves may help if one is fortunate enough to hit upon the right problem, but the probability of doing so is minimal. In short, many of the most obvious ways a manager can choose to solve a problem may yield biased results at best, and possibly no results at all.

A more viable approach would be to view the situation from a research standpoint and to use accepted methods of scientific inquiry to arrive at a solution that minimizes biased results. Most of what we know about organizational behavior results from efforts to apply such methods in solving organizational problems (e.g., How do we motivate employees? How do we develop effective leaders? How do we reduce stress at work?). An awareness of the nature of scientific inquiry is useful for understanding how we learned what we know about organizations as well as in facilitating efforts to solve behavioral problems at work.

Theory Building in Organizations

Briefly stated, a theory is a set of statements that serves to explain the manner in which certain concepts or variables are related. These statements result both from our present level of knowledge on the topic and from our assumptions about the variables themselves. The theory allows us to deduce logical propositions, or hypotheses, that can be tested in the field or laboratory. In short, a theory is a framework that helps us understand how variables fit together, and its use in research and in management is invaluable.

Uses of a Theory

Why do we have theories in the study of organizational behavior? First, theories help us organize knowledge about a given subject into a pattern of relationships that lends meaning to a series of observed events. They provide a structure for understanding. For instance, rather than struggling with a lengthy list of factors found to relate to employee turnover, a theory of turnover might suggest how such factors fit together and are related.

Second, theories help us to summarize diverse findings so that we can focus on major relationships and not get bogged down in details. A theory “permits us to handle large amounts of empirical data with relatively few propositions.” [M. E. Shaw and P. R. Costanzo, Theories of Social Psychology (New York: McGraw-Hill, 1970), p. 9.]

Finally, theories are useful in that they point the way to future research efforts. They raise new questions and suggest answers. In this sense, they serve a useful heuristic value in helping to differentiate between important and trivial questions for future research. Theories are useful both for the study and for the management of organizations. As Kurt Lewin said, “There is nothing so practical as a good theory.”

What Is a Good Theory?

Abraham Kaplan discusses in detail the criteria for evaluating the utility or soundness of a theory. [A. Kaplan, The Conduct of Inquiry (San Francisco: Chandler, 1964).] At least five such criteria can be mentioned:

  1. Internal consistency. Are the propositions central to the theory free from contradiction? Are they logical?
  2. External consistency. Are the propositions of a theory consistent with observations from real life?
  3. Scientific parsimony. Does the theory contain only those concepts that are necessary to account for findings or to explain relationships? Simplicity of presentation is preferable unless added complexity furthers understanding or clarifies additional research findings.
  4. Generalizability. In order for a theory to have much utility, it must apply to a wide range of situations or organizations. A theory of employee motivation that applies only to one company hardly helps us understand motivational processes or apply such knowledge elsewhere.
  5. Verification. A good theory presents propositions that can be tested. Without an ability to operationalize the variables and subject the theory to field or laboratory testing, we are unable to determine its accuracy or utility.

To the extent that a theory satisfies these requirements, its usefulness both to researchers and to managers is enhanced. However, a theory is only a starting point. On the basis of theory, researchers and problem solvers can proceed to design studies aimed at verifying and refining the theories themselves. These studies must proceed according to commonly accepted principles of the scientific method.

Scientific Method in Organizational Behavior Research

Cohen and Nagel suggested that there are four basic “ways of knowing.” [M. Cohen and E. Nagel, An Introduction to Logic and Scientific Inquiry (New York: Harcourt, Brace and Company, 1943). See also E. Lawler, A. Mohrman, S. Mohrman, G. Ledford, and T. Cummings, Doing Research That Is Useful for Theory and Practice (San Francisco: Jossey-Bass, 1985).] Managers and researchers use all four of these techniques: tenacity, intuition, authority, and science. When managers form a belief (e.g., a happy worker is a productive worker) and continue to hold that belief out of habit and often in spite of contradictory information, they are using tenacity. They use intuition when they feel the answer is self-evident or when they have a hunch about how to solve a problem. They use authority when they seek an answer to a problem from an expert or consultant who supposedly has experience in the area. Finally, they use science—perhaps too seldom—when they are convinced that the three previous methods allow for too much subjectivity in interpretation.

In contrast to tenacity, intuition, and authority, the scientific method of inquiry “aims at knowledge that is objective in the sense of being intersubjectively certifiable, independent of individual opinion or preference, on the basis of data obtainable by suitable experiments or observations.” [C. G. Hempel, Aspects of Scientific Explanation (New York: The Free Press, 1965), p. 141.] In other words, the scientific approach to problem-solving sets some fairly rigorous standards in an attempt to substitute objectivity for subjectivity.

The scientific method in organizational behavior consists of four stages: (1) observation of the phenomena (facts) in the real world, (2) formulation of explanations for such phenomena using the inductive process, (3) generation of predictions or hypotheses about the phenomena using the deductive process, and (4) verification of the predictions or hypotheses using systematic, controlled observation. This process is shown in Exhibit A1. When this rather abstract description of the steps of scientific inquiry is shown within the framework of an actual research study, the process becomes much clearer. A basic research paradigm is shown in Exhibit A2. In essence, a scientific approach to research requires that the investigator or manager first recognize clearly what research questions are being posed. To paraphrase Lewis Carroll, if you don’t know where you’re going, any road will take you there. Many managers identify what they think is a problem (e.g., turnover) only to discover later that their “problem” turnover rate is much lower than that in comparable industries. Other managers look at poor employee morale or performance and ignore what may be the real problem (e.g., poor leadership).

Exhibit A1 (diagram) A Model Depicting the Scientific Method. Source: Adapted from E. F. Stone, Research Methods in Organizational Behavior (Glenview, Ill.: Scott, Foresman and Company, 1978), p. 8. (Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)
Exhibit A2 (diagram) A Model of the Empirical Research Process. Source: Adapted from E. F. Stone, Research Methods in Organizational Behavior (Glenview, Ill.: Scott, Foresman and Company, 1978), p. 17. (Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)

On the basis of the research questions, specific hypotheses are identified. These hypotheses represent our best guesses about what we expect to find, and stating them in advance allows us to select a study design that permits suitable testing. On the basis of the study design (to be discussed shortly), we observe the variables under study, analyze the data we collect, and draw relevant conclusions and management implications. When we follow this process, the risks of being guided by our own opinions or prejudices are minimized, and we arrive at useful answers to our original research questions.

Basic Research Designs

Although a detailed discussion of the various research designs is beyond the scope of this Appendix, we can review several common research designs that have been used to collect data in the study of people at work. Specifically, we will examine five different research designs that are frequently used to study behavior at work: (1) naturalistic observation, (2) survey research, (3) field study, (4) field experiment, and (5) laboratory experiment. In general, the level of rigor of the design increases as we move from naturalistic observation toward laboratory study. Unfortunately, so do the costs, in many cases.

Criteria for Evaluating Research Designs

Before examining the five designs, it will be helpful to consider how a researcher selects from among the various designs. Clearly, no one strategy or design is superior in all cases. Each has its place, depending upon the research goals and the constraints placed on the research.

However, when choosing among the potential designs, researchers generally must consider several things. For example, does the design require that you specify hypotheses a priori? If you specify appropriate hypotheses and are able to confirm them, then you can predict behavior in organizations. As a manager, being able to predict behavior in advance allows you to intervene and make necessary changes to remedy problem situations. The ability to accurately predict behavior is clearly superior to simply being able to explain behavior after the fact.

Other factors to examine are the method of measurement and the degree of control to be used. Does the method of measurement use qualitative or quantitative measures? Although qualitative measures may be useful for generating future hypotheses, quantitative measures add more perceived rigor to results. Also, if you are interested in demonstrating causal relationships, it is necessary to have a high degree of control over the study variables. You must be able to manipulate the primary study variable to determine the results of this manipulation while at the same time keeping other potentially contaminating variables constant so they do not interfere with the results.

In addition, a researcher must know to what extent they can generalize the results from the study to apply to other organizations or situations. Results that are situation-specific are of little use to managers. External validity is of key importance. And, of course, in practical terms, how much is it going to cost to carry out the study and discover a solution? Cost can be measured in many ways, including time and money.

Analyzing a research design against these five criteria (a priori hypotheses, type of measures, degree of control, external validity, and cost) provides insight concerning its overall level of rigor. The more rigorous the design, the more confidence one has in the results. This is because more rigorous designs typically employ more accurate measures or interventions and attempt to control for contaminating influences on study results. With this in mind, we can now consider various research designs.

Naturalistic Observation

Naturalistic observation is the most primitive (least rigorous) method of research in organizations. Simply put, it consists of conclusions drawn from observing events. At least two forms of such research can be identified: (1) authoritative opinions and (2) case studies.

Authoritative opinions are the opinions of experts in the field. When Henri Fayol wrote his early works on management, for example, he was offering his advice as a former industrial manager. On the basis of experience in real work situations, Fayol and others suggested that what they had learned could be applied to a variety of work organizations with relative ease. Other examples of authoritative opinions can be found in Barnard’s The Functions of the Executive, Sloan’s My Years with General Motors, and Peters and Waterman’s In Search of Excellence. Throughout their works, these writers attempt to draw lessons from their own practical experience that can help other managers assess their problems.

The second use of naturalistic observation can be seen in the case study. Case studies attempt to take one situation in one organization and to analyze it in detail with regard to the interpersonal dynamics among the various members. For instance, we may have a case of one middle manager who appears to have burned out on the job; his performance seems to have reached a plateau. The case would then review the cast of characters in the situation and how each one related to this manager’s problem. Moreover, the case would review any actions that were taken to remedy the problem. Throughout, emphasis would be placed on what managers could learn from this one real-life problem that can possibly relate to other situations.

Survey Research

Many times, managers wish to know something about the extent to which employees are satisfied with their jobs, are loyal to the organization, or experience stress on the job. In such cases, the managers (or the researchers) are interested mainly in considering quantitative values of the responses. Questionnaires designed to measure such variables are examples of survey research. Here we are not attempting to relate results to subsequent events. We simply wish to assess the general feelings and attitudes of employees.

Surveys are particularly popular with managers today as a method of assessing relative job attitudes. Hence, we may conduct an annual attitude survey and track changes in attitudes over time. If attitudes begin to decline, management is alerted to the problem and can take steps to remedy the situation.
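As an illustration of how such tracking might look in practice, the following is a minimal Python sketch using only the standard library. The survey years, the 1-to-5 satisfaction scale, the response values, and the 0.3-point decline threshold are all hypothetical choices made for the example, not part of the text above.

```python
from statistics import mean

# Hypothetical responses on a 1-5 satisfaction scale, grouped by survey year.
survey_responses = {
    2022: [4.1, 3.8, 4.4, 3.9, 4.2],
    2023: [3.9, 3.7, 4.0, 3.6, 3.8],
    2024: [3.4, 3.2, 3.6, 3.1, 3.5],
}

# Average satisfaction for each annual survey.
yearly_means = {year: mean(scores) for year, scores in survey_responses.items()}

for year in sorted(yearly_means):
    print(f"{year}: mean satisfaction = {yearly_means[year]:.2f}")

# Flag a decline of more than 0.3 points between consecutive surveys
# (an arbitrary cutoff chosen for illustration).
years = sorted(yearly_means)
for prev, curr in zip(years, years[1:]):
    if yearly_means[prev] - yearly_means[curr] > 0.3:
        print(f"Attitudes declined noticeably between {prev} and {curr}; investigate further.")
```

In a real survey program, the same logic would be applied to the organization’s actual response data, and the cutoff for a “noticeable decline” would be set by management judgment rather than the arbitrary value used here.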

Field Study

In a field study, the researcher is interested in the relationship between a predictor variable (e.g., job satisfaction) and a subsequent criterion variable (e.g., employee turnover or performance). Measures of each variable are taken (satisfaction, perhaps through a questionnaire, and turnover, from company records) and are compared to determine the extent of correlation. No attempt is made to intervene in the system or to manipulate any of the variables, as is the case with experimental approaches.

To continue the simple example we began with, a manager may have a hypothesis that says that satisfaction is a primary indicator of employee turnover. After measuring both, the manager finds a moderate relationship between the two variables and may conclude that the two are probably related. Even so, because of the moderate nature of the relationship, it is clear that other factors also influence turnover; otherwise, there would be a much stronger relationship. The manager concludes that, although efforts to improve job satisfaction may help solve the problem, other influences on turnover must also be looked at, such as salary level and supervisory style.
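To make the analysis in this example concrete, here is a minimal sketch of how the correlation between the two measured variables might be computed, assuming the SciPy library is available; the department-level satisfaction scores and turnover rates below are hypothetical.

```python
from scipy.stats import pearsonr

# Hypothetical department-level data: mean job satisfaction (1-5 scale)
# and annual turnover rate (proportion of employees who left).
satisfaction  = [4.2, 3.8, 3.1, 4.5, 2.9, 3.6, 4.0, 2.7]
turnover_rate = [0.08, 0.12, 0.22, 0.05, 0.25, 0.15, 0.10, 0.28]

# Pearson correlation between the predictor and the criterion.
r, p_value = pearsonr(satisfaction, turnover_rate)
print(f"Correlation between satisfaction and turnover: r = {r:.2f} (p = {p_value:.3f})")
```

A moderate negative correlation would support the manager’s hypothesis, while the unexplained variance would signal that other influences (salary level, supervisory style, and so on) also deserve attention.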

Field Experiment

A field experiment is much like a field study, with one important exception. Instead of simply measuring job satisfaction, the manager or researcher makes efforts to actually change satisfaction levels. In an experiment, we attempt to manipulate the predictor variable. This is most often done by dividing the sample into two groups: an experimental group and a control group. In the experimental group, we intervene and introduce a major change. Perhaps we alter the compensation program or give supervisors some human relations training. The control group receives no such treatment. After a time, we compare turnover rates in the two groups. If we have identified the correct treatment (that is, a true influence on turnover), turnover rates would be reduced in the experimental group but not in the control group.

In other words, in a field experiment, as opposed to a field study, we intentionally change one aspect of the work environment in the experimental group and compare the impact of the change with the untreated control group. Thus, we can be relatively assured that the solution we have identified is, in fact, a true predictor variable and is of use to management.
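For example, the comparison of turnover between the treated and untreated groups could be tested as sketched below. This is an illustration only: the group sizes, the counts of employees who stayed or left, and the use of a chi-square test of independence (via SciPy) are assumptions made for the example.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts after the study period: [stayed, left]
experimental = [92, 8]    # group that received the treatment (e.g., supervisor training)
control      = [78, 22]   # group that received no change

# Chi-square test of independence on the 2 x 2 contingency table.
chi2, p_value, dof, expected = chi2_contingency([experimental, control])
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```

A small p-value would suggest that the difference in turnover between the two groups is unlikely to be due to chance alone, strengthening the case that the treatment, and not some contaminating influence, produced the effect.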

Laboratory Experiment

Up to this point, we have considered a variety of research designs that all make use of the actual work environment, the field. In this last design, laboratory experiments, we employ the same level of rigor as that of the field experiment and actually manipulate the predictor variable, but we do so in an artificial environment instead of a real one.

We might, for instance, wish to study the effects of various compensation programs (hourly rate versus piece rate) on performance. To do this, we might employ two groups of business students and have both groups work on a simulated work exercise. In doing so, we are simulating a real work situation. Each group would then be paid differently. After the experiment, an assessment would be made of the impact of the two compensation plans on productivity.
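The assessment at the end of such an experiment often amounts to comparing the average output of the two groups. The sketch below assumes SciPy is available and uses an independent-samples t-test on hypothetical productivity figures; the numbers and group sizes are illustrative only.

```python
from scipy.stats import ttest_ind

# Hypothetical units produced per session under each compensation plan.
hourly_rate_output = [38, 42, 40, 37, 41, 39, 43, 40]
piece_rate_output  = [45, 48, 44, 50, 46, 47, 49, 45]

# Independent-samples t-test comparing the two groups' mean output.
t_stat, p_value = ttest_ind(piece_rate_output, hourly_rate_output)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because the setting is artificial, a significant difference here only tells us that the piece-rate plan raised output among student subjects on a simulated task; whether the result generalizes to actual employees is exactly the external-validity question discussed below.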

Comparing Research Designs

Now that we have reviewed various research designs, we might wonder which designs are best. This is not an easy call. All designs have been used by managers and researchers in studying problems of people at work. Perhaps the question can best be answered by considering the relative strengths and weaknesses of each, on the basis of our earlier discussion of the criteria for evaluating research designs (see Table A1). We should then have a better idea of which design or designs would be appropriate for a particular problem or situation.

Specification of Hypotheses in Advance. It was noted earlier that the ability to specify a priori hypotheses adds rigor to the study. In general, hypotheses are not given for naturalistic observations or survey research. These two techniques are used commonly for exploratory analyses and for identifying pertinent research questions for more rigorous future study. On the other hand, the remaining three designs (field study, field experiment, and laboratory experiment) do allow explicitly for a priori hypotheses. Hence, they are superior in this sense.

A Comparison of Various Research Designs
Research Design           A Priori Hypotheses   Measures                       Control   External Validity   Cost     Overall Level of Rigor
Naturalistic observation  No                    Qualitative                    Low       Low                 Low      Low
Survey research           No                    Qualitative and quantitative   Low       High                Low      Medium
Field study               Yes                   Quantitative                   Medium    High                Medium   Medium
Field experiment          Yes                   Quantitative                   High      High                High     High
Laboratory experiment     Yes                   Quantitative                   High      Low                 High     High
Note: This table represents general trends; exceptions can clearly be identified.
Table A1 (Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)

Qualitative versus Quantitative Measures. Naturalistic observations typically involve qualitative data, whereas field studies and both forms of experiment typically involve quantitative data. Survey research most often provides for both. Hence, if it is important to collect hard data concerning a problem (e.g., what is the magnitude of the relationship between satisfaction and turnover?), quantitative designs are to be preferred. On the other hand, if one is more concerned about identifying major reasons for turnover and little prior knowledge about the problem exists, qualitative data may be preferred, and survey research may be a better research strategy. The selection of an appropriate design hinges in part on the intended uses for the information.

Control. As noted earlier, control represents the extent to which potentially contaminating influences can be minimized in a study. Clearly, experimental procedures allow for better control than do nonexperimental ones. The researcher or manager can systematically structure the desired work environment and minimize irrelevant or contaminating influences. As a result, conclusions concerning causal relations between variables can be made with some degree of certainty. Where it is not possible to secure such high control, however—perhaps because the organization does not wish to make a major structural change simply for purposes of an experiment—a field study represents a compromise design. It allows for some degree of control but does not require changing the organization.

External Validity. The question of external validity is crucial to any study. If the results of a study in one setting cannot be applied with confidence to other settings, the utility of the results for managers is limited. In this regard, survey research, field studies, and field experiments have higher levels of external validity than naturalistic observations or laboratory experiments. Naturalistic observations are typically based on nonrandom samples, and such samples often exhibit characteristics that may not allow for transfers of learning from one organization to another. A clear example can be seen in the case of a small company in which the president implemented a unique compensation plan that proved successful. It would be impossible to predict whether such a plan would work in a major corporation, because of the different nature of the organizations. Similarly, there is some question about how realistic a work environment is actually created in a laboratory situation. If managers are to learn from the lessons of other organizations, they should first learn the extent to which the findings from one kind of organization are applicable elsewhere.

Cost. As one would expect, the quality of information and its price covary. The more rigorous the design (and thus the more accurate the information), the higher the cost. Costs can be incurred in a variety of ways and include actual out-of-pocket expenses, time invested, and residual costs: the organization is left with the aftermath of an experiment, which could mean raised employee expectations and anxieties, as well as the possibility of disillusionment if the experiment fails. It should be noted that, in survey research, a large amount of general information can be gathered quickly and cheaply.

Overall Level of Rigor. In summary, then, the real answer to the question concerning which strategy is best lies in the degrees of freedom a manager has in selecting the design. If an experiment is clearly out of the question (perhaps because one’s superior doesn’t want anything altered), a field study may be the best possible strategy, given the constraints. In fact, field studies are often considered a good compromise strategy in that they have a medium amount of rigor but are also fairly quick and inexpensive. On the other hand, if one simply wishes to take an attitude survey, survey research is clearly in order. If one is not allowed to do anything, authoritative opinions from others may be the only information available. However, if constraints are not severe, experimental methods are clearly superior in that they allow for greater certainty concerning major influences on the criterion variable and on the problem itself.
