After drafting a series of principles and checklists related to learning efficiency, I decided to see whether I had enough material to draft a proof-of-concept (POC) learning efficiency scale (LES). Here is an updated version of that draft. I think enough exists for teachers to conduct mental experiments and informal evaluations of the learning efficiency of individual learners as well as aggregates of learners. Observers of lessons may also find LES useful as a guide for monitoring critical mechanics of teaching-learning processes.
Measures of learning efficiency indicate the extent to which instruction and learners’ attention meet. From a learner’s point of view, learning efficiency means spending less time, effort, and other personal resources acquiring a given set of information or skills. It also means gaining something of personal value in exchange for those resources. (I’ll address this latter index in a future version.)
In this sense, teachers choose the level of learning efficiency that their students may earn. Teachers know that one way of presenting a lesson can take a few seconds while another may take a few hours. Teachers decide the learning objective, learning criteria, timeframe, degree of difficulty a student will encounter, evaluation standards, etc., for each lesson.
LES provides guidelines for teachers to increase student learning efficiency by fine-tuning observable mechanical aspects of instruction that scientists have found critical to learning efficiency.
Tablet PCs, UMPCs, MIDs, and other mobile PCs offer ways for teachers to provide more efficient learning options for students. An Ink-based software version of this scale is in preparation.
Learning Efficiency Scale
LES yields a measure of instructional competence, e.g., power or proficiency. It provides a framework for students and school observers to rank the relative capacity of school lessons and instructional material to yield intended student academic behavior. Learning efficiencies describe which instruction helps a student reach a learning criterion more quickly, more easily, or with less effort when compared with other possible ways of reaching the same criterion (Heiny, 2007).
LES rests on the assumption that learning efficiencies reside with the instructor. That is, teachers or teaching material simplify presentations to the bare minimum number of steps and content necessary for a student to demonstrate learning in each lesson. This emphasis on instructional simplicity contrasts with other rating scales that assume learning efficiencies reside with the learner.
LES also rests on the assumption that people use the same one-step mechanics (behavior patterns) to learn: they use trial-and-error behavior until they meet the learning criteria for a lesson.
LES uses several enduring scientific principles that indicate the state of the art (SOTA) of instructional presentations that yield efficient learning. These principles have legacies that extend from millennia-old instruction practices and 19th-century studies of learning. Many instruction programs and materials use eclectic, informal, and unidentified mixtures of these principles. Names of SOTA principles derive from several theories:
Trial-and-error learning – as adapted for use with mobile PCs (Heiny, L., 2005) as Direct Learning in beginning academic subjects.
Stimulus-Response learning – as adapted to instruction-learning equations by numerous behavioral specialists in a wide range of academic subjects, and by educators mostly in rhetoric.
Two Choice Visual Discrimination Analysis – as adapted and elaborated beyond the Try Another Way technology.
Direct Instruction – based on ancient instruction practices used in many cultures to sustain social institutions, such as families, economies, polities, and religions; adapted into one of the most sophisticated and most extensively empirically tested, technically based instruction practices and curricula available from preschool through high school.
Each principle has a body of peer-reviewed experimental study reports that offer technical definitions and procedures for replicating results and applying these principles in other settings, such as schools and mobile PC learning venues.
Teachers already know these four principles, but have few guides for using them quickly and concisely to plan and conduct instruction.
LES brings into one instrument assessments of how the use of these learning-based instructional principles influences academic behavior, specifically learning from a given lesson.
LES assesses how variations in the use of technical details in the instruction-learning equation affect the likelihood of a student reaching a technical criterion for learning each lesson. These assessments allow instructors to make technical refinements to instruction and content in order to increase student learning rates.
Some teachers hold that they must first consider the circumstances of a learner before they can select an instructional process or material for a lesson. They contend that they teach humans, not other animals, and therefore must treat learners differently rather than apply scientific principles and predetermined procedures through lessons.
They also say that each learner must come to class ready to learn as defined by each teacher’s idea of readiness.
Some people hold that instruction is an art, an indefinable process with nuances acquired only through professional training and practice, and that teaching is not a mechanical process. Many teachers consider it an insult to suggest that anyone can reduce instruction to several objectively observable and repeatable principles that affect student learning rates.
Also, teachers blog daily and comment frequently in professional association and union meetings about heavy workloads that detract from their best instruction. They say they know what is best to do, but cannot do it, because they lack sufficient resources. They assert that non-educators cannot accurately assess their competence.
The referent used in constructing LES for indexing the level of learning efficiency rests on what is possible according to scientific data. The referent does not account for the state of practice or other reasons for lower-than-maximum assessments defined by the four scientific principles above.
LES reports instruction as Highly Efficient, Efficient, Normally Efficient, Less Efficient, or Inert (Laissez-faire) Efficiency.
The Highly Efficient assessment uses five stars as its symbol, Efficient four stars, Normally Efficient three stars, Less Efficient two stars, and Inert (Laissez-faire) Efficiency one star.
Dimensions of instructional resources measured to categorize efficiency levels include: use of task sequencing; of forward and of backward chaining; of redundant cues; of behavioral reinforcers; of shape, color, size, and position; of clock time; and of number of instruction-trial blocks. Other resources may be added or substituted when empirical, experimental data indicate their contributions to learning efficiencies.
Each resource has technical definitions and observable, countable indices that accumulate to rank an instructional lesson or material according to its learning efficiencies.
The number of stars assigned to an efficiency level symbolizes the instructional capacity to yield efficient learning. The higher the efficiency, the more stars the instruction receives. For example, the highest level of instruction, as indexed by student learning efficiency, receives a Five Star Rating.
The easiest index of learning efficiency is to count the number of minutes (or seconds) that elapse from the beginning of instruction until all students meet criteria for learning that lesson. A lesson that takes five minutes to meet a criterion is more efficient than one that takes 50 minutes to reach the same result.
A second index is to count the number of directed instructions offered during the lesson. A lesson that offers fewer directed instructions for students to follow to reach criterion yields more efficient learning. However, a lesson can also include too few directions to yield effective instruction.
A third index consists of the frequency of student attempts to reach criterion. The fewer the trials-and-errors, the more efficient the learning.
A fourth index consists of the frequency of prompts used, such as rewards-punishments, examples, redundant cues, repeated explanations, student questions to clarify learning criteria, etc. The fewer of these that occur, the more efficient the lesson.
A fifth index consists of the frequency of tools used for students to reach criterion, such as the number of textbooks, assignment sheets, whiteboard or mobile PC screens, etc. required to complete the assignment. The fewer of these tools, the more efficient the learning.
Teachers and students, or a third party observer, using LES POC 1.1 may collect data manually on tally sheets.
Beginning users should select the one index they consider most important. More experienced observers may choose more than one index to monitor. Write a list of observables for each index, place a hash mark next to an observable each time that event occurs, and tally the frequency of these events at the end of the lesson.
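As an illustration only, the tally procedure above can be sketched in code. The index names and event labels here are hypothetical placeholders, not part of LES POC 1.1:

```python
from collections import Counter

# Hypothetical observables for the five indices described above.
# A beginning observer would normally track only one of these.
INDICES = [
    "elapsed_minutes",        # first index: time to criterion
    "directed_instructions",  # second index: directions offered
    "student_attempts",       # third index: trials-and-errors
    "prompts",                # fourth index: cues, rewards, clarifying questions
    "tools",                  # fifth index: textbooks, screens, sheets
]

tally = Counter()

def mark(index_name):
    """Record one occurrence of an observable (the equivalent of a hash mark)."""
    if index_name not in INDICES:
        raise ValueError(f"unknown index: {index_name}")
    tally[index_name] += 1

# Example lesson observation: three directions given, two prompts used.
for _ in range(3):
    mark("directed_instructions")
mark("prompts")
mark("prompts")

# End-of-lesson tally of event frequencies.
print(dict(tally))  # {'directed_instructions': 3, 'prompts': 2}
```

This mirrors the manual tally sheet: one list of observables, one running count per observable, one summary at the end of the lesson.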
(The software under development will collect some of these data automatically on teachers’ and learners’ mobile PCs.)
Calculating Learning Efficiency
In general, teachers and students may calculate learning efficiency manually, or the mobile PC software will complete it automatically. For those using LES POC 1.1, the individual raw frequencies point to changes in instruction that a teacher may make without further calculation. Based on experience, a few such changes can increase learning efficiency.
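The draft does not fix a formula, and the raw frequencies alone can guide changes. As one hedged illustration of a manual comparison, lower counts can be read as higher efficiency when two lessons reach the same criterion; the lesson names and tallies below are invented:

```python
# Hypothetical tallies for two lessons that reached the same learning criterion.
lesson_a = {"elapsed_minutes": 5, "directed_instructions": 4, "student_attempts": 6}
lesson_b = {"elapsed_minutes": 50, "directed_instructions": 12, "student_attempts": 20}

def more_efficient(a, b, index):
    """Compare two lessons on one index; fewer tallied events means more efficient."""
    if a[index] < b[index]:
        return "A"
    if b[index] < a[index]:
        return "B"
    return "tie"

# Compare the lessons index by index.
for index in lesson_a:
    print(index, "->", more_efficient(lesson_a, lesson_b, index))
```

Here lesson A is more efficient on every index, matching the five-minute versus fifty-minute example above.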
Reporting Learning Efficiency
Users of LES POC 1.1 may report learning efficiency manually for aggregates as well as individual students. I’ll suggest separately ways to chart and draft narrative results for these.
(The software includes several reporting formats that teachers and students may receive automatically. In general, in the spirit of transparency, teachers and students will receive the same reports.)
(Thanks for your interest. The software we’re developing will include some of these data collections, calculations, and reports. I’ll share them after further testing.)
Please let me know what value you find in this incomplete draft: what makes sense, what seems a stretch beyond the available empirical data you use, etc. And, again, thank you for sharing your earlier comments about LES, LER, and other drafts about learning efficiency. I think I’ll post an index of these, because they don’t all carry the same keywords.
(I’ll edit this post and add more later.)