Have you ever had a dream where you arrived at class and realized you had not studied for an important test? Or have you thought you were doing poorly in a class, only to be pleasantly surprised to receive an A at the end of the semester? How did that make you feel? What if you always knew your score for every aspect of the class and didn’t have to take as many traditional tests?
It can be difficult for instructors to balance helping students learn with effectively evaluating that learning, while also keeping students informed of how they are doing in the class. CGScholar Analytics is a new tool that evaluates students and helps them learn.
CGScholar is a suite of e-learning apps developed by a team of researchers and software developers in the College of Education led by Bill Cope and Mary Kalantzis that is meant to change the way students and teachers share and evaluate their knowledge.
CGScholar is a peer-to-peer “social knowledge” technology for learning communities and is designed for fourth grade and above. The platform generates massive amounts of highly usable data for varied purposes, including student feedback. The software also records the progress of learners and offers course data that identifies areas of strength and weakness from a teaching point of view.
The Analytics app within CGScholar sets out to end the distinction between instruction and assessment. In this model, no learning happens without immediate machine and human feedback, and there is no specialized assessment because everything is assessed, which, the developers hope, eliminates the need for large standardized tests.
CGScholar offers approximately fifty different kinds of repeatable feedback, recorded as datapoints. Every datapoint can (indeed must) provide immediate, prospective, and constructive feedback to learners so they can improve their work. Examples of this feedback include:
- A computer suggestion, such as a writing suggestion that includes an explanation of the reason for that suggestion.
- A peer idea offered in response to a review criterion.
- A coded self-annotation.
- A comment in a virtual class discussion.
- A response to a question that has a right or wrong answer.
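To make the idea concrete, the kinds of datapoints listed above could be modeled as simple records and tallied per learner. This is a hypothetical sketch only; the field names and category labels are assumptions, not CGScholar’s actual data model.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical datapoint record; field names are assumptions,
# not CGScholar's actual schema.
@dataclass
class Datapoint:
    learner: str
    kind: str       # e.g. "machine_suggestion", "peer_review", "self_annotation"
    feedback: str   # the prospective, constructive feedback shown to the learner

def count_by_kind(datapoints):
    """Tally how many datapoints of each kind a learner has accumulated."""
    return Counter(dp.kind for dp in datapoints)

points = [
    Datapoint("ada", "machine_suggestion", "Consider splitting this sentence."),
    Datapoint("ada", "peer_review", "Your thesis could be stated earlier."),
    Datapoint("ada", "machine_suggestion", "Define 'analytics' on first use."),
]
print(count_by_kind(points))  # Counter({'machine_suggestion': 2, 'peer_review': 1})
```

Over a semester, a real course would accumulate thousands of such records, which is what makes the progress visualizations described next possible.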
For a student, there may be thousands of such datapoints in a course. The purpose of the CGScholar Analytics area is to provide learners and their teachers with clear progress visualizations based on these data. Learners can access personal data presentations; instructors can access both class visualizations and individual students’ performance presentations. Users can look at the big picture as well as drill down to any datapoint, with learners and instructors alike receiving meaningful and understandable feedback from each one.
Student progress is displayed in an “aster plot,” a colorful, flower-like graphic that opens outward as students complete their work. Each “petal” is a measure of learning, and the instructor’s expectations are spelled out when students hover over a petal with the mouse. There are three groups of petals:
- Knowledge, which shows demonstrated learning achievements
- Focus, which shows the amount of effort
- Help, which shows the quality of peer collaborations
Everyone starts a course at zero, and as students work through the class, they watch the colored petals and their overall score grow. The instructor might set 80 as an “A,” so students can track their progress toward it throughout the semester. This gives students more control over their outcomes than a typical class, in which they receive a grade only at the end of the semester.
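The aster-plot scoring described above can be sketched as a simple aggregation: each petal holds a 0–100 measure, petals are grouped into Knowledge, Focus, and Help, and the overall score is compared against the instructor’s threshold. This is a hypothetical sketch of the idea, not CGScholar’s actual algorithm; the plain-average weighting and the petal names are assumptions, and the 80-points-for-an-A cutoff is taken from the example above.

```python
# Hypothetical aster-plot aggregation; the plain-average weighting and the
# petal names are assumptions, not CGScholar's actual scoring algorithm.
PETALS = {
    "Knowledge": {"concepts": 90, "analysis": 85},   # demonstrated learning
    "Focus":     {"drafts": 70, "annotations": 60},  # amount of effort
    "Help":      {"peer_reviews": 95},               # quality of collaboration
}

def group_score(petals):
    """Average the petal values within one group (each petal is 0-100)."""
    return sum(petals.values()) / len(petals)

def overall_score(groups):
    """Average the three group scores into one overall score."""
    return sum(group_score(p) for p in groups.values()) / len(groups)

score = overall_score(PETALS)
print(round(score, 1))                    # 82.5 in this example
print("A" if score >= 80 else "below A")  # instructor set 80 as an "A"
```

Because every petal starts at zero and grows only as work is completed, a learner can see at any moment exactly which group of measures is holding the overall score down.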
Conventional tests are small samplings of individual memory of facts and correct application of procedures, followed by blanket, retrospective judgments. In contrast, CGScholar Analytics collects data on everything students do while they are learning, giving them large amounts of incremental feedback as they go.
The “big data” part of this is the sheer number of tiny datapoints, such as feedback in a peer interaction or an intelligent machine response. The data-collection process is also a feedback and learning process. Traditional tests, the researchers predict, will go away because they are less accurate and, more importantly, not very helpful to learning.
This work has been made possible with the support of grants from the National Science Foundation, the Institute of Educational Sciences in the US Department of Education, and the Bill and Melinda Gates Foundation.
Education Building, where the CGScholar app was developed.
Cope, B. (n.d.). Scholar’s New Analytics App: Towards Mastery Learning. Retrieved from https://cgscholar.com/community/community_profiles/new-learning/community_updates/54189
Cope, B., Kalantzis, M., McCarthey, S. J., Vojak, C., & Kline, S. (2011). Technology-Mediated Writing Assessments: Paradigms and Principles. Computers and Composition, 28(2), 79-96.
Cope, B., & Kalantzis, M. (Eds.). (2016a). E-Learning Ecologies: Principles for New Learning and Assessment. New York, NY: Routledge.
Cope, B., & Kalantzis, M. (2016b). Big Data Comes to School: Implications for Learning, Assessment and Research. AERA Open, 2(2), 1-19.
Kalantzis, M., & Cope, B. (2012). New Learning: Elements of a Science of Education (2nd ed.). Cambridge, UK: Cambridge University Press.
Nudo, S. (2018). The Test is Dead: Cope and Kalantzis Redefine ‘Mastery Learning’. Retrieved from https://cgscholar.com/community/community_profiles/new-learning/community_updates/71905