SoTL Research Basics

The Scholarship of Teaching and Learning (SoTL) is an evidence-based, academic community-facing enterprise in which faculty and colleagues use a scholarly lens to examine how learning happens in higher education. A major part of SOLER’s mission is to facilitate SoTL Research – SoTL that uses formal scientific approaches to investigate teaching and learning – in Columbia’s academic contexts. In other words, SoTL Research leverages scientific methods to formulate and address questions about the ways students gain knowledge, develop skills, and otherwise think, feel, and behave in higher education.

Read below for a primer on SoTL Research subfields, topics, and experimental designs.

A number of different subfields fall under the umbrella term SoTL Research. Projects that examine the process through which students develop mastery in a particular discipline reflect a perspective called Discipline-Based Education Research (DBER). (In contrast, other projects address questions that are not closely tied to a disciplinary context, such as student motivation, time management, and other broad themes.) Projects of all kinds may incorporate elements of another field called Design-Based Research, which adopts an engineering-like perspective to investigate the design and efficacy of "learning products" (i.e., instructional strategies and related media and technology).

The list of topical areas in SoTL Research below was primarily informed by the National Research Council (NRC) analysis of DBER.*

  • Conceptual content understanding: misconceptions and preconceptions; learning progressions
  • Cognitive domain: (interdisciplinary) problem-solving; quantitative/temporal/spatial reasoning
  • Instructional strategies and technology: design, use, and evaluation of different instructional strategies and their effectiveness in various settings (e.g., large lecture, small seminar, lab) and modalities (e.g., online, hybrid learning, “metaverse”); community-based and cross-cultural learning
  • Students' self-regulated learning and metacognition: encouraging students to reflect; learning assessment methods (e.g., minute papers, surveys); developing students' study skills
  • Affective domain: attitudes, motivations, and values of students; attitudes of faculty towards students' career paths; attitudes of faculty towards inquiry-based teaching methods; science/math self-efficacy

*Singer, S., & Smith, K. A. (2013). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Journal of Engineering Education, 102(4), 468-471.

Like the social (and natural) sciences in general, the various types of SoTL Research employ a variety of designs, including data mining, correlational and quasi-experimental studies, and randomized controlled trials. Let’s consider a typical project that aims to compare two or more instructional approaches with respect to student outcomes. Designing a suitable study to achieve this aim can be summarized as a three-step process:

  • Define factors and levels. SoTL Research investigators compare different instructional approaches that constitute different levels of an experimental factor, also called an independent variable. The interventions should ideally align with the research question and be derived from a relevant theoretical framework. The point of comparison for a novel instructional strategy should be the “gold standard,” i.e., a way of teaching and learning that is already accepted as effective at the time of the study; using ineffective teaching practices as a control condition will not only impair students' learning experiences but also yield little scientific insight.
  • Define outcome variables. Student learning and attitude outcomes should be assessed with measures of established validity and reliability; these measures are also called dependent variables. Researchers may administer a pre-intervention test of students’ knowledge or skills in order to establish comparability across groups (see below). Depending on project goals, assessment measures may also be selected for generalizability to other courses, departments, or institutions.
  • Devise a research design. Participants (i.e., students) can be assigned to conditions via two schemes (see the code sketch after this list):
    • In a between-subjects design, different individuals experience different conditions; each participant is exposed to a single level of the intervention.
    • In a within-subjects (“crossover”) design, each participant experiences all the conditions, typically in a randomized order. Most studies with this design are "balanced": all participants receive the same number of treatments and participate for the same number of periods. Because each participant generates data for all conditions, this design can be especially useful when splitting the participants into groups (as in the between-subjects design) would yield very small sample sizes for each group.  
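
To make these two assignment schemes concrete, here is a minimal Python sketch that produces both a between-subjects assignment and a randomized crossover order. The roster, condition labels, and group sizes are all hypothetical, and a real study would likely rely on dedicated randomization software:

```python
import random

# Hypothetical roster and condition labels, for illustration only.
students = [f"student_{i:02d}" for i in range(1, 9)]
conditions = ["VR", "in-person"]

random.seed(0)  # fixed seed so the illustration is reproducible

# Between-subjects: shuffle the roster, then alternate conditions,
# so each participant experiences exactly one level of the factor.
shuffled = random.sample(students, k=len(students))
between = {s: conditions[i % len(conditions)] for i, s in enumerate(shuffled)}

# Within-subjects (crossover): every participant experiences all
# conditions; randomizing the order counterbalances sequence effects.
within = {s: random.sample(conditions, k=len(conditions)) for s in students}

print(between)  # maps each student to a single condition
print(within)   # maps each student to a randomized condition order
```

Alternating conditions down a shuffled roster also yields equal group sizes, which generally maximizes statistical power for a fixed total sample.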

For example, suppose a researcher is examining how learning in virtual reality (VR) affects academic performance. The researcher would first need to define the factor by clarifying the pedagogical interventions (e.g., learning in a VR environment versus learning in person). Then the researcher would choose an outcome variable (e.g., scores on a weekly quiz) and a research design (i.e., how students will be assigned to the different levels of the independent variable).
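
Continuing this hypothetical VR study under a between-subjects design, the sketch below shows where the factor levels and the dependent variable would enter a simple analysis; the quiz scores are invented for illustration, and the independent-samples t-test (via SciPy) is just one common choice:

```python
from scipy import stats

# Hypothetical weekly quiz scores (out of 10), invented for
# illustration; they are not real study data.
vr_scores = [7.5, 8.0, 6.5, 9.0, 7.0, 8.5]
in_person_scores = [6.0, 7.5, 7.0, 6.5, 8.0, 6.0]

# Independent-samples t-test: does mean quiz performance differ
# between the two levels of the instructional factor?
t_stat, p_value = stats.ttest_ind(vr_scores, in_person_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With only a handful of scores per group, a test like this would be badly underpowered; the snippet is meant only to show how the factor levels and the dependent variable map onto an analysis.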

For studies featuring a pedagogical intervention, it is important to ensure equity, i.e., fair treatment of students. For example, if a researcher is comparing the effect of two instructional strategies on graded exam scores via a between-subjects design, the researcher should be aware that one group of students may be adversely affected in learning and grade achievement relative to the other. Equity can be enhanced through study features such as the following:

  • Within-subjects (“crossover”) designs. As noted above, every participant experiences all of the treatments over the course of the study, so each student receives the same total experience (even if the order of conditions varies).
  • Between-subjects designs with non-graded assessments. Targeting the intervention at non-graded outcomes (e.g., an ungraded quiz that assesses learning of the “experimental” content) limits the intervention's effect on students’ grades.
  • Comparison of present and past course iterations. Comparing learning outcomes against student performance in a previous semester means that students are not simultaneously given different experiences and compared to one another. However, this design is less controlled and therefore yields less clear interpretations of intervention impacts.