Lecture Animation
The moving overhead transparency was trialled in the lecture
in Semester 1, 1995, and an evaluation of the lecture itself
was undertaken. Using the Minute Paper evaluation technique (modified after Cross, 1993), students were asked to identify the best or most useful thing from the lecture. Of 300 students
who completed the form, 64% stated that the computer animation
was the highlight of the lecture, and other comments indicated
that students had enjoyed the lecture and felt they had
understood the process of muscle contraction. The lecturer
noted that although she used the animation a number of times
in the lecture, manipulating its speed and direction and
talking about the significance of the various components represented, the explanation took only 20% of the time usually allocated to this particular concept.
Self-paced Tutorial
An essential part of the development process has been
the formative evaluation of the user interface. Much of this took place within the development team as part of the
comprehensive paper design. The use of the requirements
specification enabled us to walk through the details of
the user interface. For example, we decided to use hot spots
on the animation to access other screens of information.
The team then needed to explore the ways by which users
would return to the animation. Possibilities were identified and the side effects of these potential solutions were analysed to determine the most natural functionality consistent with
the rest of the interface.
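The navigation question can be sketched in outline as a simple screen stack, in which each hot spot pushes a detail screen and a single consistent "return" action pops back to the animation. This is only an illustrative sketch with hypothetical names, not a description of the actual implementation.

```typescript
// A minimal sketch of hot-spot navigation with a single, consistent "return" action.
// All names (Screen, NavigationStack, the example screens) are hypothetical.

interface Screen {
  id: string;
  title: string;
}

class NavigationStack {
  private stack: Screen[] = [];

  constructor(home: Screen) {
    this.stack.push(home); // the animation acts as the home screen
  }

  // Clicking a hot spot on the animation opens a detail screen.
  openHotSpot(detail: Screen): void {
    this.stack.push(detail);
  }

  // One consistent "return" gesture always leads back towards the animation.
  goBack(): Screen {
    if (this.stack.length > 1) {
      this.stack.pop();
    }
    return this.current();
  }

  current(): Screen {
    return this.stack[this.stack.length - 1];
  }
}

// Usage: open a detail screen from the animation, then return to it.
const animation: Screen = { id: "animation", title: "Muscle contraction animation" };
const nav = new NavigationStack(animation);
nav.openHotSpot({ id: "detail", title: "Component detail screen" });
console.log(nav.current().title); // "Component detail screen"
console.log(nav.goBack().title);  // "Muscle contraction animation"
```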
A questioning strategy was chosen which matched questions
typically asked by students. However, our design team were very conscious that a list of questions
on the screen would look like the main menu of typical objectivist
IMM programs. A list of questions would imply a ranking
or logical order, inviting students to start at the top
and work through one by one. This would seriously weaken
the effectiveness of the questioning strategy as a means
for students to construct their own knowledge.
The approach we chose initially was to make the "what,
where, how, why" questions into the four sides of a
spinning top. When this screen was entered, the top would
spin a random number of times, leaving a different question uppermost each time. For example, when "What" was uppermost, its relevant sub-questions appeared. Each sub-question could be clicked to take the user to a single screen containing an answer to that question. Clicking on "Where", "How" or "Why" rotated those questions to the top.
Figure: Questioning via the spinning top.
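The selection logic behind the spinning top can be sketched as follows; this is an illustrative reconstruction with placeholder sub-questions, not code from the prototype.

```typescript
// Illustrative sketch of the spinning-top question selector.
// The sub-question wording is a placeholder, not the actual tutorial content.

type Face = "What" | "Where" | "How" | "Why";

const faces: Face[] = ["What", "Where", "How", "Why"];

const subQuestions: Record<Face, string[]> = {
  What:  ["What does it do?", "What is it made of?"],
  Where: ["Where is it found?"],
  How:   ["How does it work?"],
  Why:   ["Why is it important?"],
};

let topFace: Face = "What";

// On entering the screen, the top spins a random number of times,
// leaving a different question uppermost on each visit.
function spin(): Face {
  const turns = 1 + Math.floor(Math.random() * faces.length);
  topFace = faces[(faces.indexOf(topFace) + turns) % faces.length];
  return topFace;
}

// Clicking another face rotates it to the top and reveals its sub-questions;
// each sub-question then leads to a screen holding its answer.
function rotateTo(face: Face): string[] {
  topFace = face;
  return subQuestions[face];
}

spin();
console.log(topFace, rotateTo(topFace));
```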
Most of our team really liked the spinning top idea. It
was a quirky way of presenting a series of questions without
a list, it was fun to spin, and the design looked great.
It was displayed at a residential IMM workshop held at Muresk
in June 1995, with the aim of gaining some peer review.
Two serious problems arose with this aspect of the user
interface when a prototype program was constructed. Formative
evaluation with novice users of the program revealed that
the spinning top metaphor was not successful. Users didn't
make the visual connection between "What" and
"does it do?". A revised strategy uses a rotating
wheel with questions in full on the outside rim. The second
problem became evident when some content was produced to
put in the prototype. The idea of having a single discrete screen for the answer to each question was not effective
because of the varying amount of material for each answer.
Some questions could be answered by a few words and a picture;
others needed much more detail. Still others were more appropriately
dealt with by linking the user to another part of the resource
smorgasbord.
Based on these observations, the second prototype contains
only one screen per topic, with the answers to all questions
contained in a scrollable window, like a World Wide Web page. The size problems are alleviated, and the student has more context about the relationships between the various aspects of the content.
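In outline, and assuming hypothetical topic and answer names, the revised layout simply gathers all of a topic's question-and-answer pairs into one scrollable page, with some answers linking elsewhere in the resource smorgasbord rather than holding content directly. The sketch below illustrates this structure only; it is not the prototype's implementation.

```typescript
// Illustrative sketch of the "one scrollable page per topic" layout.
// Topic names and answer text are placeholders, not the actual tutorial content.

interface Answer {
  question: string;
  body?: string;    // a few words, or several paragraphs
  linkTo?: string;  // some answers instead link to another part of the resource
}

interface TopicPage {
  topic: string;
  answers: Answer[];
}

// All answers for a topic are rendered into a single scrollable block,
// so short and long answers coexist without fixed-size screens.
function renderTopicPage(page: TopicPage): string {
  const sections = page.answers.map(a =>
    a.linkTo ? `${a.question}\n  (see: ${a.linkTo})` : `${a.question}\n  ${a.body ?? ""}`
  );
  return [page.topic, ...sections].join("\n\n");
}

console.log(renderTopicPage({
  topic: "Example topic",
  answers: [
    { question: "What does it do?", body: "A short answer of a few words." },
    { question: "How does it work?", linkTo: "another screen in the resource" },
  ],
}));
```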