
US-China Foreign Language, ISSN 1539-8080

August 2013, Vol. 11, No. 8, 636-639

A Study Into Effectiveness of Automated Faculty Feedback

WANG Ji-jun
Inner Mongolia University, Hohhot, China

WANG Ji-jun, postgraduate, Foreign Language College, Inner Mongolia University.

We explored automated faculty feedback from a computer-based teaching perspective, focusing on its effect on students' writing performance. After a brief introduction to automated faculty feedback, we conducted an experiment to test its effectiveness. The study found that automated faculty feedback was superior to human faculty feedback and had a positive effect on students' writing performance. Based on these results, we propose that automated faculty feedback can improve students' writing performance.

Keywords: writing performance, automated faculty feedback, teaching effect

Introduction

Technically speaking, automation refers to greatly reducing the need for human intervention in producing goods and services by using advanced information technologies and artificial control systems. Automation makes it possible to replace muscular labor with automated machines, software, and computer programs, and it also reduces the demands placed on human sensory and mental effort. In today's world, computer technology and automation occupy an important position in all walks of life, including education, research, and innovation. Their rapid development has driven the transformation of traditional human faculty feedback on writing into automated faculty feedback, and of traditional writing into e-writing. Automated essay faculty feedback is gradually becoming the dominant approach to evaluating writing curricula.

While writing is an essential part of the educational process, many instructors find it difficult to incorporate large numbers of writing assignments in their courses because of the effort required to evaluate them (Foltz, Laham, & Landauer, 1999). Williams (2001) argued, in the proceedings of the 10th Annual Teaching Learning Forum, that teaching staff around the world face a perpetually recurring problem: how to minimize the time spent on the relatively monotonous tasks associated with grading students' essays. With the advent of large student numbers, the grading load has become both time-consuming and costly, so a system that can automate these tasks is currently just a dream for most staff. The obvious advantages of using AES (automated essay scoring) tools for large-scale assessment include timely feedback, low cost, and consistency of scoring (WANG & Brown, 2008).

Attali and Burstein (2004) argued, drawing on research on AES, that computers can function as a cognitive tool in a more effective way. With the development of computer science, programming technology, artificial intelligence, computational linguistics, cognitive science, and other related disciplines, automated machine scoring and e-writing are gradually replacing traditional human assessment and paper-based writing, owing to their great convenience, proficiency, and accuracy.

The latest AES system examined in this paper is the computer-assisted intelligent TRP (Teaching Resources Program), developed by YANG Yong-lin at Tsinghua University. The computer-assisted TRP provides AES, automated essay analysis, and automated faculty feedback on e-writing pieces. TRP can generate a detailed report on an e-writing piece covering its mark, total numbers of sentences and paragraphs, mean word length, mean paragraph length, and its position in a ranked list (YANG, LUO, & ZHANG, 2005).
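
To make the report concrete, the following minimal Python sketch computes the kind of surface statistics such a report contains. It is an illustration only, not TRP's actual implementation, and the tokenization rules are our own simplifying assumptions.

```python
# Illustrative sketch only: surface statistics of the kind a TRP-style report
# contains. NOT TRP's actual implementation; tokenization rules are assumed.
import re

def essay_report(text: str) -> dict:
    # Paragraphs separated by blank lines; naive sentence split on ., !, ?
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "sentences": len(sentences),
        "paragraphs": len(paragraphs),
        "mean_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "mean_paragraph_length_words": len(words) / max(len(paragraphs), 1),
    }

sample = "Automation saves grading time. It also scales well.\n\nTeachers can focus on content instead."
print(essay_report(sample))
```

Ranking a set of essays would then amount to sorting such reports by mark, which is one way a piece's position in a ranked list can be derived once every piece has been scored.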

Experiment

In order to test the reliability and validity of automated faculty feedback on e-writing pieces, this part reports an experiment conducted with a quantitative method.

Subjects

In this research, we choose 100 freshmen (50 boys and 50 girls) majoring in Biology at Inner Mongolia University as the subjects. Their educational backgrounds, ages, and writing performance and ability are comparable. For the purposes of the experiment, the 100 students are divided equally into two groups: one acts as the EG (Experimental Group), and the other acts as the CG (Control Group). According to the data collected from the pre-test, there are no significant differences in English writing performance between the two groups. Both groups are taught English writing by the same teacher using the same teaching methods and books but different faculty feedback: the EG receives automated faculty feedback, and the CG receives traditional human faculty feedback.

Research Procedures

A comparative study is conducted to display the differences between human faculty feedback and automated faculty feedback in terms of time used, workload, and the students' writing performance. We adopt the instrument TRP to generate automated faculty feedback and the statistical software SPSS 16.0 (Statistical Package for the Social Sciences) to analyze the results (see Table 1). The results of the pre-test give a general picture of the subjects' writing performance and confirm that the two groups have no significant difference in writing ability before the experiment. The results of the post-test, however, show that after TRP is adopted as the assessment tool in writing instruction and learning, the students' writing performance improves considerably: the significance test yields p ≤ 0.01, that is, a statistically significant difference between the two groups' writing performance.

As shown in Table 1, the improvement in writing scores was also evident over the course of the study. The mean writing score of students in the EG, after training with TRP, improved by 1.65 points compared with the pre-test, while the mean writing score of the CG showed only a slight change.

Table 1
The Comparison of Scores Between Pre-test and Post-test

            Mean value                  Standard deviation
            EG (n = 50)   CG (n = 50)   EG (n = 50)   CG (n = 50)   t      Sig.
Pre-test    9.63          9.82          1.15          1.17          0.03   0.66
Post-test   11.28         9.90          1.62          1.18          3.52   0.00
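
As a cross-check on this kind of analysis, the sketch below shows how an independent-samples t-test can be recomputed from nothing but Table 1's summary statistics. It is a minimal illustration using Python's scipy rather than the SPSS 16.0 actually employed in the study; the group size of 50 follows the Subjects section, and because the paper does not specify the exact test variant, the recomputed values need not reproduce the published t and Sig. figures exactly.

```python
# Minimal sketch: independent-samples t-tests from Table 1's summary
# statistics. Assumptions: n = 50 per group (per the Subjects section) and
# an equal-variance (Student's) t-test. This is not the study's SPSS 16.0
# procedure, so the results may differ from the published t/Sig. values.
from scipy.stats import ttest_ind_from_stats

n = 50  # assumed group size

# Pre-test: EG mean 9.63 (SD 1.15) vs. CG mean 9.82 (SD 1.17)
t_pre, p_pre = ttest_ind_from_stats(9.63, 1.15, n, 9.82, 1.17, n)
print(f"pre-test:  t = {t_pre:.2f}, p = {p_pre:.4f}")

# Post-test: EG mean 11.28 (SD 1.62) vs. CG mean 9.90 (SD 1.18)
t_post, p_post = ttest_ind_from_stats(11.28, 1.62, n, 9.90, 1.18, n)
print(f"post-test: t = {t_post:.2f}, p = {p_post:.4f}")
```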

Research Questions

In order to test the effectiveness of automated faculty feedback on writing pieces, this research proposes two questions: Firstly, can automated faculty feedback help to raise students' scores in English writing? Secondly, is automated faculty feedback superior to traditional human faculty feedback?

Discussion and Results

As can be seen from Table 1, the students' scores in the EG and the CG are almost the same in the pre-test. After training with the English Writing Teaching Resources Platform System, the EG's mean composition score improves by 1.65 points over the pre-test. The EG's scores in the pre-test and the post-test show a significant difference (p < 0.05). Thus, automated faculty feedback delivered through the English Writing Teaching Resources Platform System can improve students' English writing scores significantly.

Automated faculty feedback decreases operation time and work-handling time significantly, and it frees teachers to take on other teaching roles. It has the capacity to provide assessment at different levels in all kinds of standardized testing, and it can be regarded as a valuable addition to, and in some settings a replacement for, human raters (Phillips, 2007). With the development of instructional technology, automated faculty feedback can efficiently provide assistance such as timely feedback and faculty comments for teachers of writing instruction, and content feedback for students in writing classes.

Conclusions

Automated faculty feedback can, on behalf of teachers, perform the tasks of analyzing, grading, and commenting on e-writing pieces, tasks that otherwise involve heavy and monotonous workloads. As student enrollment grows, providing effective and timely evaluation of the ever-increasing number of e-writing pieces exceeds human capacity in scale, speed, and endurance. Automating the evaluation of e-writing pieces can improve the efficiency of schools, universities, research institutions, and test-service institutions (KUI, 2005; Mikulas & Kern, 2006). In a word, automated faculty feedback can provide a series of automated services for those involved in writing instruction and assessment by replicating human modes of judgment in the scoring of e-writing pieces, and it can help teachers or raters assign and grade large numbers of writing assignments.

References

Attali, Y., & Burstein, J. (2004). Automated essay scoring with e-rater V.2.0. Paper presented at the Conference of the International Association for Educational Assessment (IAEA), Philadelphia, PA.

Foltz, P. W., Laham, D., & Landauer, T. K. (1999). Automated essay scoring: Applications to educational technology. Proceedings of the ED-MEDIA'99 Conference, AACE, Charlottesville.

KUI, X. Y. (2005). How to use experience English writing corpus to conduct writing evaluation. Foreign Languages in China, (8), 65-68.

Mikulas, C., & Kern, K. (2006). A comparison of the accuracy of automated essay scoring: Using prompt-specific and prompt-independent training. The Annual Meeting of the American Educational Research Association, San Francisco, CA.

Phillips, S. M. (2007). Automated essay scoring: A literature review. Society for the Advancement of Excellence in Education (SAEE), 37.

WANG, J. H., & Brown, M. S. (2008). Automated essay scoring versus human scoring: A correlational study. Contemporary Issues in Technology and Teacher Education, 8(4).

Williams, R. (2001). Automated essay grading: An evaluation of four conceptual models. In A. Herrmann & M. M. Kulski (Eds.), Expanding horizons in teaching and learning: Proceedings of the 10th Annual Teaching Learning Forum. Curtin University of Technology, Perth.

YANG, Y. L., LUO, L. S., & ZHANG, W. X. (2005). A study into experiencing English writing. Foreign Languages in China, 8, 6.
