Monday, August 21, 2017

Research: "Chunking" Semester Projects


Schuessler, J. H. (2017). "Chunking" semester projects: Does it enhance student learning? Journal of Higher Education Theory and Practice, 17(6).

My university is a teaching institution. We value quality teaching and keep students at the center of our focus. You can see it in the way we write our learning objectives and in our various student programs and student organizations. While my college is pursuing AACSB accreditation, which places more emphasis on research, the university provides for faculty development through our Center for Innovation and Instruction (CII). They put on all kinds of programs, including one geared toward conducting research around and involving teaching, referred to as the Scholarship of Teaching and Learning (SoTL). I think it is a great program. I had never done pedagogical/andragogical research before. Sure, I've used students as a convenient sample, but the focus of my research had never been on my teaching methods, assignments, and the like. Well, thanks to our CII's efforts, that is no longer true.

I teach a couple of networking classes, undergraduate and graduate. I place a lot of emphasis on the semester project, specifically on several components of network design: logical design, physical design, wireless design, financial analysis, risk assessment, and management design. I traditionally assigned this as a semester project due at the end of the semester. In addition to covering each section in more detail throughout the semester, I also included topic-specific discussion boards in which teams of students could share their projects and see the pros and cons of each design, with the idea that they could, in turn, reflect on and improve their own team's design. As an aside, the projects are all case-based, so each group was working on the same project. This makes it easier to compare the submissions from each group.

In the graduate version of this class, we actually use some of this data for program assessment purposes. To my frustration, I had attempted a variety of things to improve student scores, particularly as they relate to logical and physical designs, but students routinely seemed to struggle not only with the diagramming aspects of such designs, but also with the underlying concepts of logical and physical design.

The crux of the issue was that semester project scores were only 'ok'. Some students clearly got it; others clearly did not. As part of the SoTL project, I began to think of ways to modify the project so that students could get more substantive feedback earlier in the semester and then work on and improve their final project submissions at the end of the semester.

To address these issues, I decided to break the sections up over the course of the semester. Before I get into the details: I make extensive use of Google in my classes. For example, I require students to use Google Docs for the papers they submit. In this particular class, since it is a group project, this works particularly well due to the collaborative features of Google Docs. So, for the first submission, I created a discussion board topic in Blackboard, our university's Learning Management System (LMS). It required each group to create a thread within that discussion board in which they included a read-only link to their team's Google Doc. With each team doing this, and since, as you recall, they were all working from the same case, each team could see and reflect on the similarities and differences among their peer teams' designs. I should note that within the initial discussion board were details to help students develop a more thorough understanding of the topic (i.e., logical diagrams, physical diagrams, etc.). Additionally, the rubrics for each discussion board were quite detailed, associating points with very specific items targeted at each particular topic.
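As a side note, if you ever wanted to generate those read-only links in bulk rather than clicking through the Docs sharing dialog for each team, the Google Drive API can do it. Here is a minimal sketch of what that could look like; it assumes the google-api-python-client and google-auth libraries, a service-account key file, and hypothetical placeholder file IDs, so treat it as an illustration rather than the workflow my students actually used:

```python
# Minimal sketch: set each team's Google Doc to "anyone with the link can
# view" and print the read-only links to paste into the Blackboard thread.
# The service-account file name and the file IDs are hypothetical placeholders.
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]
creds = Credentials.from_service_account_file("service-account.json", scopes=SCOPES)
service = build("drive", "v3", credentials=creds)

# Hypothetical Google Doc IDs, one per team.
team_docs = {
    "Team 1": "FILE_ID_1",
    "Team 2": "FILE_ID_2",
}

for team, file_id in team_docs.items():
    # Grant read-only access to anyone who has the link.
    service.permissions().create(
        fileId=file_id,
        body={"type": "anyone", "role": "reader"},
    ).execute()
    # Fetch the shareable link students would post in their thread.
    doc = service.files().get(fileId=file_id, fields="webViewLink").execute()
    print(f"{team}: {doc['webViewLink']}")
```

In practice, of course, students simply used the sharing dialog themselves; the point is only that the read-only link is a first-class feature of the platform.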

The goal was for each group to be able to look at their peers' work, make suggestions, and, perhaps more importantly, incorporate the very best ideas into their own design such that everyone's project should have improved. They did...sort of.

When I compared these students' scores to those of students from a prior semester who had been assigned and submitted the more traditional, monolithic semester project, the results were somewhat mixed.
Table 1: Results
Students performed better on the wireless design and the financial analysis, but the results were inconclusive regarding their performance in the other areas. Overall, their performance improved if you use the .10 level of statistical significance; given the low sample size, this may be appropriate. Regardless, I felt as though student learning did improve by chunking the semester project in this way. Positive signs included, for example, that with the exception of H1, the mean scores on each section improved in the treatment group. The variance in scores also, for the most part, decreased in the treatment group, indicating more consistency in student understanding of the topics in the treatment group. In a post-hoc analysis, I also examined student evaluations in each group to see if there were any differences. The control group was evenly split between agreeing and strongly agreeing that instructor feedback and communication were effective, but the treatment group overwhelmingly strongly agreed that instructor feedback and communication were effective. This pattern held for the remaining factors of the evaluation as well: appropriateness of readings and assignments, technological tools, instructor feedback and communication, course organization, clarity of outcomes and requirements, and content format (Rothman, Romeo, Brennan, & Mitchell, 2011).
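To make the comparison concrete, here is a minimal sketch of the kind of analysis behind Table 1: an independent-samples t-test on each project section at the .10 level, plus Levene's test for the variance differences noted above. The section names and scores below are hypothetical placeholders, not the study's actual data:

```python
# Minimal sketch of a control-vs-treatment comparison: Welch's t-test per
# project section at alpha = 0.10, plus Levene's test for unequal variances.
# All scores below are hypothetical placeholders, NOT the study's data.
from scipy import stats

ALPHA = 0.10  # the .10 significance level used in the write-up

control = {
    "Logical design": [78, 82, 74, 69, 85],
    "Wireless design": [70, 75, 68, 72, 66],
}
treatment = {
    "Logical design": [80, 84, 79, 83, 81],
    "Wireless design": [82, 85, 80, 84, 86],
}

for section in control:
    c, t = control[section], treatment[section]
    # Welch's t-test does not assume equal variances between the two groups.
    t_stat, p_mean = stats.ttest_ind(c, t, equal_var=False)
    # Levene's test asks whether the group variances themselves differ.
    _, p_var = stats.levene(c, t)
    print(f"{section}: mean diff p={p_mean:.3f} "
          f"({'significant' if p_mean < ALPHA else 'inconclusive'} at .10), "
          f"variance diff p={p_var:.3f}")
```

The same sort of two-sample test is essentially what the computer self-efficacy equivalence check mentioned below boils down to.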

In general, chunking semester projects seems to improve student learning, and there was a clear relationship between chunking the semester project and improved student evaluations. But there are clear limitations to this study, perhaps most significantly the sample size. Additionally, the quasi-experiment utilized different students, so differences could potentially be attributed to differences between the groups themselves rather than to the way in which the semester project was implemented. At least as it relates to computer self-efficacy, this does not appear to be an issue in these two groups, as there was no statistical difference in the scores submitted by each group.

Future research should center around replicating the study with a larger sample. The small sample used in this study could have influenced the statistical results and thus the interpretation of those results. Other avenues for future research could examine the relationship between chunked semester projects and student scores on final exams and overall course grades. Presumably, such holistic measures of student performance provide a more comprehensive measure of student learning, which could be used to determine the effect of chunked and non-chunked semester projects on such scores.

References

Bodie, G. D., Powers, W. G., & Fitch-Hauser, M. (2006). Chunking, priming and active learning: Toward an innovative and blended approach to teaching communication-related skills. Interactive Learning Environments, 14(2), 119-135. doi:10.1080/10494820600800182

Carstens, D. S., Malone, L. C., & McCauley-Bell, P. (2007). Applying chunking theory in organizational password guidelines. Journal of Information, Information Technology & Organizations, 2, 97-113.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87-114.

Eligibility procedures and accreditation standards for business accreditation (pp. 1-53). (2016). Tampa, FL: AACSB International. Retrieved from http://www.aacsb.edu/-/media/aacsb/docs/accreditation/standards/businessstds_2013_update-3oct_final.ashx?la=en

Hambley, A. R. (1994). Electronics: A Top-Down Approach to Computer-Aided Circuitry Design. Prentice Hall PTR.

Kurose, J. F., & Ross, K. W. (2012). Computer networking: A top-down approach featuring the Internet. Harlow: Pearson Education.

Mathy, F., & Feldman, J. (2012). What’s magic about magic numbers? Chunking and data compression in short-term memory. Cognition, 122(3), 346-362. doi:10.1016/j.cognition.2011.11.003

Miller, G. A. (1956). The magical number seven plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.

Nelson, T. (2011). Assessing Internal Group Processes in Collaborative Assignments. The English Journal, 100(6), 41-46. Retrieved from http://www.jstor.org/stable/23047879

Rothman, T., Romeo, L., Brennan, M., & Mitchell, D. (2011). Criteria for assessing student satisfaction with online courses. International Journal for e-Learning Security, 1(1-2), 27-32.

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295-312.

Syn, T., & Batra, D. (n.d.). Improving sequence diagram modeling performance: A technique based on chunking, ordering, and patterning. Journal of Database Management, 24(4), 1-25.

Vaughan-Nichols, S. J. (2008, October 22). 204.5-million lines of code equals one great Linux distribution. Computerworld. Retrieved March 23, 2017.

Xu, X., & Padilla, A. M. (2013). Using Meaningful Interpretation and Chunking to Enhance Memory: The Case of Chinese Character Learning. Foreign Language Annals, 46(3), 402-422. doi:10.1111/flan.12039
