DESIGN EFFECTIVENESS VERSUS INTENDED OUTCOMES
I think the students are learning about as well as, or perhaps
slightly better than, they did under the previous approach
(small-class tutorials).
UNEXPECTED LEARNING OUTCOMES
When we first used online tutorial systems, in 1995, we had
no clear idea about collaborative behaviour. Within a few
days of starting, however, it was obvious to us that students
were going to work very closely together on the problems.
We thought about this and decided to encourage it. I
still think this was the correct response. Nearly all collaborative
behaviour is helpful for learning, and students can be trained
to recognize the few cases when it is not helpful (e.g. when
one student is just taking answers from another).
HOW LEARNER ENGAGEMENT IS SUPPORTED
Are learner expectations identified and built upon?
No.
Are learners' prior experiences taken into account?
No. However, in some years we have administered a standard
non-assessed "entry" test (the Baseline Mechanics
Test), which always reveals a generally poor level of fundamental
dynamics knowledge.
Do learners experience key concepts in multiple ways?
Yes. The important concepts return again and again in the
problem set and the past exams. Where possible we also illustrate
them with simple physical demonstrations in the lectures.
Is there opportunity for peer interaction and feedback?
Definitely. Students work together and hammer out a solution
to each problem. They also have the shared experience of the
Forum (although students are anonymous to each other online).
Do the assessment tasks support engagement?
Students work very hard and attentively while they are in
the lab, trying to get the expected answers. I think this
is quite addictive because of the instant feedback. This is
something they don't often get.
Are learners encouraged to reflect on their learning experience?
We tell them to think about how they are making use of their
collaborative learning experiences. I don't think many
of them understand what we mean. So we put it bluntly and
say, "Remember that you will be ALONE in the exam!".
As for reflection on what has been learnt, no, not really.
A good mark in the exam is enough for them and us.
Are learners given a sense of control in conducting the
activities?
We encourage them to tell us when they have an idea for change,
and we always run an anonymous written survey at the end,
which has a place for comments. On a day-to-day basis they
communicate via the Forum and email, so we quickly hear about
any ideas for change.
Does the learning experience engage students affectively?
I think some students can become frustrated while working
on the problems. This is acceptable up to a point: engineering
professionals have to learn to deal with frustration in technical
matters. Many students have an emotional reaction to the assessed
problems: either a "YESSSS" when the correct answer
is accepted, or an "ARRRRGGGG" when marks are taken
away.
On a deeper level, I think most of our students will remember
the course fondly because of the teaching of Professor Stone.
A typical comment from the survey is, "Stoners is a legend".
ACKNOWLEDGMENT OF LEARNING CONTEXT
Do the activities link specifically to the field of
study/professional practice and also consider the broader context
(such as social, political, economic, and environmental circumstances)?
The subject is an introductory course in a narrow technical
field. It does not set out to address those larger questions.
Examples are drawn from real engineering contexts such as aviation
and road accidents.
The consequences of incorrect reasoning are strongly emphasized.
If a student makes a serious error, we might say, "Well,
the bridge you just designed has collapsed and you are now
being interviewed on Channel 9".
Do assessment tasks match the intended learning outcomes?
The examination is a good measure of competence in the technical
areas.
Does the learning design assist students to see how their
learning can be used in other situations than the ones given?
Not explicitly.
Are there cultural assumptions built into the learning
design?
Not intentionally. However, in reality many assumptions are
made; perhaps they must be made. Our approach and material
are designed for students from a specific, narrow range of
backgrounds: school leavers from Western Australia. On the
other hand, students from other countries have taken our courses
and seem to do quite well.
HOW THE LEARNING DESIGN CHALLENGES LEARNERS
Are students given the opportunity to question their knowledge
and experience, thus becoming self-critical of the limits of
their knowledge base and their assumptions?
This is an important part of our teaching approach. Students
enter our courses with many incomplete or, frankly, incorrect
ideas about motion. It is part of our intention to challenge
them with situations they cannot explain.
Does the learning setting assist students to go beyond
the resources provided for them?
Not really. In fact we try to provide everything the students
need close to hand. The students in this course have quite
enough to do without also having to look for resources.
Are students able to make decisions about planning, directing
and assessing their own learning?
They are able, but many of them do not do it well. Some leave
all assessed work until the last possible moment, and then
complain when the computer lab is too full to find a seat.
We respond to these observations by changing the form of the
tutorial system to cope with "Student Nature". For
example, our deadlines are now "staggered" to reduce
the peak load on the physical and computing resources.
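As a minimal sketch of the staggering idea (the group names,
dates, and offsets here are hypothetical illustrations, not
taken from our actual system):

from datetime import datetime, timedelta

def staggered_deadlines(base_deadline, groups, offset_hours=24):
    """Offset each group's deadline from a base time so that
    last-minute submissions arrive in waves, not all at once."""
    return {group: base_deadline + timedelta(hours=i * offset_hours)
            for i, group in enumerate(groups)}

# Hypothetical example: three lab groups spread over three days.
for group, due in staggered_deadlines(
        datetime(1999, 5, 10, 17, 0),
        ["Group A", "Group B", "Group C"]).items():
    print(f"{group}: due {due:%a %d %b, %H:%M}")

The design choice is simple: the same total workload is spread
so that the peak demand for lab seats never exceeds capacity.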
OPPORTUNITIES FOR PRACTICE
Are students encouraged to articulate and demonstrate to
themselves and to others what they are learning?
Yes. Students are told to work together as much as possible,
i.e. NOT to work at home by modem. We also tell them that
the best way to learn something can be to explain it to someone
else.
Is sufficient practice provided to enable expertise to
be realised?
Yes, provided it is attempted in the right spirit. A
student who tries only to get the answers, without really
working at "Why", will not get much from the problem set. Most
students do have an encounter with "Why", however.
Does the learning design help students to apply criteria
that indicate they are learning appropriately?
No; in fact some students end up deceiving themselves just
before the exam. They solve all the tutorial problems in a
superficial way and assume they know the subject. They do
not bother to try past exams (or they would have a shock!).
Then they get into the exam and find they can't even
start the problems. We warn them about this in the lectures
but some just don't listen.
Is appropriate feedback available at key points in the
learning process?
Feedback is provided very frequently, when students submit
answers to problems. However, I am not sure whether it is appropriate
or not. I see a great deal of poor working in log books and
it is clear that the students need more feedback about this.
After all, it is the main skill assessed in the Exam. It is
a question of resources.
Is there a clear alignment between the activities conducted
and how the students are assessed?
Our dynamics problem set has answers of the form "number
units", e.g. "3.2 m/s". We have cases where
students have worked for hours on a problem, and their working
is correctly structured, but there is some arithmetic error
that staff can't find. Students do focus very much on
the answer and they don't relax until they get it. This
is in stark contrast to the assessment approach in the exams,
where the bulk of the marks are at stake: 18/20 is given
for correct equations and reasoning and only 2/20 for a
correct numerical answer.
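To make the contrast concrete, here is a minimal sketch of a
"number units" check of the kind described above. The relative
tolerance, the exact-unit rule, and the function name are my
assumptions for illustration; the actual tutorial system's
checking rules are not specified here.

def check_answer(submitted, expected_value, expected_unit, rel_tol=0.01):
    """Accept an answer such as "3.2 m/s" if the unit matches
    exactly and the number is within a relative tolerance of
    the expected value (tolerance is an assumed policy)."""
    try:
        number_text, unit = submitted.split(maxsplit=1)
        value = float(number_text)
    except ValueError:
        return False  # malformed input: missing number or unit
    if unit.strip() != expected_unit:
        return False
    return abs(value - expected_value) <= rel_tol * abs(expected_value)

# Hypothetical expected answer: 3.2 m/s.
print(check_answer("3.2 m/s", 3.2, "m/s"))   # True
print(check_answer("3.19 m/s", 3.2, "m/s"))  # True (within 1%)
print(check_answer("3.2 km/h", 3.2, "m/s"))  # False (wrong unit)

A check of this all-or-nothing kind naturally drives the fixation
on the final number described above, since the machine sees only
the answer and none of the working.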