Multimedia Learning (2009) — Ch. 4-8 (Richard E. Mayer)

This post covers Section II: Principles for Reducing Extraneous Processing in Multimedia Learning:

  • Chapter 4: Coherence Principle
  • Chapter 5: Signaling Principle
  • Chapter 6: Redundancy Principle
  • Chapter 7: Spatial Contiguity Principle
  • Chapter 8: Temporal Contiguity Principle

[*Note: Much of this material was copied directly from Mayer’s book.]

Extraneous material is information from the lesson that is not needed to achieve the instructional goal. Extraneous processing is cognitive processing during learning that does not serve the instructional goal, such as attending to irrelevant information or trying to make up for confusing layout of the lesson.

Chapter 4: Coherence Principle (p.89-107)

  • People better understand an explanation from a multimedia lesson containing essential material (concise lesson) than from a multimedia lesson containing essential material and additional material (expanded lesson).
  • May be particularly important for learners with low working memory capacity or low domain knowledge.
  • Seductive details refers to interesting but irrelevant material that is added to a passage in order to spice it up. Seductive text, seductive illustrations.
  • The major theoretical justification for adding seductive details is arousal theory — adding interesting but irrelevant material energizes learners so that they pay more attention and learn more overall. In this case emotion affects cognition, that is, a high level of enjoyment induced by the seductive details causes the learner to pay more attention and encode more material from the lesson.
  • In spite of its commonsense appeal, arousal theory is based on an outmoded view of learning as knowledge transmission, the idea that learning involves taking information from the teacher and putting it into the learner. By contrast, the cognitive theory of multimedia learning is based on the view of learning as knowledge construction, the idea that learners actively build mental representations based on what is presented and what they already know.
    • Dewey argued against viewing interest as an ingredient that could be added to spice up an otherwise boring lesson. Dewey noted “when things have to be made interesting, it is because interest itself is wanting. Moreover, the phrase is a misnomer. The thing, the object, is no more interesting than it was before.” (p.95)
    • Kintsch (1980) used the term cognitive interest to refer to the idea that students enjoy lessons that they can understand. According to this view, cognition affects emotion, that is, when students can make sense out of the lesson they tend to enjoy the lesson.

Coherence Principle I: Learning is improved when interesting but irrelevant words and pictures are excluded from a multimedia presentation. (p.95)

Boundary conditions: students who are low in working memory capacity tended to spend more time looking at the irrelevant illustrations than did students who were high in working memory capacity. This finding suggests that seductive details may be particularly distracting for learners who have difficulty controlling their information processing in working memory.

Coherence Principle II: Learning is improved when interesting but irrelevant sounds and music are excluded from a multimedia presentation. (p.98)

When additional auditory information is presented, it competes with the narration for limited processing capacity in the auditory channel. The cognitive theory of multimedia learning predicts a coherence effect in which adding interesting but irrelevant material in the form of music gets in the way of students’ learning.

Boundary conditions: Adults may be better able to ignore irrelevant sounds than are children, but this hypothesis needs to be subjected to experimental research. Music and background sounds may support certain kinds of instructional materials, such as those with emotional content or in cases where the music and sounds are part of the essential content, but this hypothesis needs testing.

Coherence Principle III: Student learning is improved when unneeded words and symbols are removed from a multimedia presentation. (p.102)

Cognitive processes involved in sense making can be facilitated by a clear and concise summary. The summary greatly facilitates these processes because the key words are in the captions, they are presented in order, and they are presented near the corresponding illustrations.

Boundary conditions: If students had been more knowledgeable, they may have been better able to benefit from the quantitative details–a pattern that Kalyuga (2005) refers to as the expertise reversal effect.

Implications for Multimedia Instruction:

  • Do not add extraneous words and pictures to a multimedia presentation.
  • Do not add unneeded sounds and music to a multimedia presentation.
  • Keep the presentation short and to the point.
  • A concise presentation allows the learner to build a coherent mental representation, that is, to focus on the key elements and to mentally organize them in a way that makes sense.
  • Needed elaboration should be presented after the learner has constructed a coherent mental representation of the concise cause and effect system.

Chapter 5: Signaling Principle (p.108-117)

  • People learn better when cues that highlight the organization of the essential material are added. Signaling reduces extraneous processing by guiding the learner’s attention to the key elements in the lesson and guiding the learner’s building of connections between them.
  • Signaling may be particularly useful when the signals are used sparingly, when the learner has low reading skill, and when the multimedia lesson is disorganized or contains extraneous material.
  • Signals do not add any new information but rather highlight or repeat the essential material in the lesson. They help the reader attend to and mentally organize the essential words in the incoming narration.
  • Verbal signaling involves: adding cues such as an outline or outline sentences at the start of the lesson; headings that are keyed to the outline; vocal emphasis on keywords; and pointer words such as “first…, second…, third…”
  • Visual signaling involves: adding visual cues such as arrows; distinctive colors; flashing; pointing gestures; or graying out of nonessential areas.
    • Visual signaling in the form of flashing a part of the display was effective when the display was complex but not when it was simple.
  • Given that effects are not strong and are based on only six tests, support for the signaling principle in multimedia learning should be considered promising but preliminary.

Chapter 6: Redundancy Principle (p.118-134)

  • People learn better from graphics and narration than from graphics, narration, and printed text.
  • Redundancy creates extraneous processing because the visual channel can become overloaded by having to visually scan between pictures and on-screen text, and because learners expend mental effort in trying to compare the incoming streams of printed and spoken text.
  • The redundancy principle may be less applicable when:
    • the captions are shortened to a few words and placed next to the part of the graphic they describe
    • the spoken text is presented before the printed text rather than concurrently
    • there are no graphics and the verbal segments are short.
  • Multimedia explanations consist of concise narrated animations (or CNAs).
  • Kalyuga, Chandler, and Sweller (1998) have used the term redundancy effect in a broad sense to refer to any multimedia situation in which “eliminating the redundant material results in better performance than when the redundant material is included.”
  • High-experience learners may have so much free processing capacity that they do not suffer any ill effects from processing redundant materials.
  • Verbal redundancy is defined as a lesson in which printed words are read aloud for the learner but no graphics are presented.
    • In situations where the verbal segments are short such as a sentence at a time, verbal redundancy tends to result in better transfer performance as compared to receiving printed words alone.
    • In situations where the verbal material is long, such as an entire passage, verbal redundancy tends to result in worse performance as compared to receiving printed words alone.
  • Plass et al (1998) found that allowing students to choose between pictorial and verbal definitions of words helped them learn the words while reading a story in a second language learning multimedia environment.
  • Some limitations and future directions: It might be useful to present summary slides, or to write key ideas on a chalkboard in the course of a verbal presentation or lecture. Similarly, redundant on screen text might be useful when the text contains unfamiliar or technical terms, when the learners are non-native speakers, or when the text passages are long and complex. The negative effect of redundancy may be eliminated when the presentation is slow-paced or under learner control.

Chapter 7: Spatial Contiguity Principle (p.135-152)

  • Students learn better when corresponding words and pictures are presented near (integrated presentation) rather than far from (separated presentation) each other on the page or screen.
  • Most applicable when the learner is not familiar with the material, the diagram is not fully understandable without words, and the material is complex.
  • In separated presentations, learners must search the screen or page to find the graphic that corresponds to a printed sentence; this process requires cognitive effort (extraneous processing) that could have been used to support the processes of active learning.
  • Integrated presentations minimize extraneous processing and serve as aids for building cognitive connections between words and pictures.
  • Represents a subset of what Sweller and his colleagues call the split-attention principle. Refers to “avoiding formats that require learners to split their attention between, and mentally integrate, multiple sources of information.”
  • Students learn better when words and diagrams are both presented on a computer screen rather than having some material in a manual and some on a computer screen.
  • Boundaries:
    • Kalyuga (2005) summarized evidence for an expertise reversal effect in which instructional methods that help less experienced learners such as integrated diagrams and text do not help more experienced learners.
    • Mayer and colleagues reported that integrated presentations were better than separated presentations for low knowledge learners but not for high knowledge learners. More experienced learners are able to generate their own verbal commentary for graphics that they study.
    • Sweller (2005) noted that “the principle only applies when multiple sources of information are unintelligible in isolation.” If words are not needed for understanding a graphic, then it is not effective to place words close to rather than far from the corresponding parts of the graphic. Learners can learn from the diagram alone and mentally add the needed verbal explanation from their long-term memory.
    • Sweller also found that integrating words and pictures is less likely to be effective when the material is very simple.
  • In book-based contexts, illustrations should be placed next to the sentences that describe them, or better, the most relevant phrases may be placed within the illustrations themselves.
  • In computer-based contexts, on-screen words should be presented next to the part of the graphic that they describe.
  • Some limitations and future directions: Future research is needed to pinpoint the boundary conditions. In particular it would be useful to know how the learner’s prior knowledge mitigates poor instructional design. Unobtrusive techniques for measuring prior knowledge would also be helpful. Future research is also needed to determine how many words to put into segments that are embedded within graphics. Finally, the use of printed text conflicts with the modality principle, so research is needed to determine when to use printed text rather than spoken text.

Chapter 8: Temporal Contiguity Principle (p.153-169)

  • Students learn better when corresponding words and pictures are presented simultaneously rather than successively.
  • May be less applicable when the successive lesson involves alternations between short segments rather than a long continuous presentation, or when the lesson is under learner control rather than under system control.
  • Simultaneous presentation increases the chances that a learner will be able to hold corresponding visual and verbal representations of the same event in working memory at the same time. This increases the chances that the learner will be able to mentally integrate the verbal and visual representations, a major cognitive process in meaningful learning.
  • Spatial contiguity is important for the layout of a page in a textbook or a frame on a computer screen. Involves material that is processed, at least initially, by the eyes — printed text and graphics such as illustrations or animations. In this situation, the temporal processing of the material is not controlled by the instructional designer, that is, the reader can choose to focus first on the text or first on the graphics.
  • Temporal contiguity is important for the timing of computer-based presentations. Involves material that is processed by the eyes — for example, animation — and material that is processed by the ears — for example, narration. In this situation, the temporal processing of the material is controlled by the instructional designer, that is, the instructional designer can choose to present first only words and next only graphics or vice versa.
  • Split attention effect refers to any situation in which the learner must process incoming information from diverse sources; in particular, Mousavi, Low, & Sweller (2005) refer to the temporal contiguity effect as a “temporal example of split attention.” Split attention refers to the need to integrate material from disparate sources, which is a broader concept than temporal contiguity. Spatial contiguity effects, temporal contiguity effects, and modality effects all could be considered forms of split attention. (p.160)
  • “Bite-sized segments” in Mayer’s research consisted of clips that were 8 to 10 seconds long. Further research is needed to determine what constitutes an ideal segment size.
