computing students and exam questions

This commit is contained in:
Wouter Groeneveld 2023-07-01 12:07:29 +02:00
parent 73d0008c8b
commit f3fc9d007d
2 changed files with 72 additions and 1 deletions

@ -32,7 +32,7 @@ Usually, micro managing is done out of fear: fear that your employees aren't doi
That kind of control---out of fear---most definitely squanders creative potential. More on that in [The Creative Programmer](/works/the-creative-programmer). These types of managers are encountered in any company, and that's independent of the employed software development methodology, so perhaps the researchers asked the wrong questions (to the wrong employees)?
- Researchers Hodgson and Briand published related work, [Controlling the uncontrollable: 'Agile' teams and illusions of autonomy in creative work](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=c882822e09be6103267e32db161cf6203e29242c). THeir conclusion? Agile is not the answer to creative freedom:
+ Researchers Hodgson and Briand published related work, [Controlling the uncontrollable: 'Agile' teams and illusions of autonomy in creative work](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=c882822e09be6103267e32db161cf6203e29242c). Their conclusion? Agile is not the answer to creative freedom:
> While Agile/Scrum here resulted in a system where the members of Gameteam, collectively, had influence over the choice of tasks, work methods and, to a degree, quality standards, more substantial decisions related to work effort, targets, resource allocation and the selection of team members were imposed on them externally. The teams developers and leaders enjoyed autonomy, but only within a context wherein governing and surveillance mechanisms were defined and activated by actors or authorities outside the team.

@ -0,0 +1,71 @@
---
title: Computing Students and Exam Questions
date: 2023-07-01T09:58:00+02:00
categories:
- education
tags:
- students
---
The recurring pattern dutifully repeated itself this academic year: exam questions that we predicted would be answered badly were indeed answered badly, and questions that students performed well on in previous years were fine this year too. That means the system is consistent, but we'd rather see _all_ questions answered adequately instead of just some. If you take a closer look at the kinds of questions, a few things become clear. I'll go over a small selection of questions and let you figure out whether or not those were the difficult ones. These are part of our second-year _Software Engineering Skills_ and _Operating Systems and C_ courses.
**Question**: Explain the following concepts/statements:
- Cooperative scheduling
- Critical section
Answering trend: _good_. We're simply asking them to reproduce the _definition_ of the concepts. These are easy questions that can be learned a few days before the exam by what is called "cramming"---knowledge that is very likely to be promptly forgotten after a month or two. In my view, asking to explain concepts that have been explained during lectures and are clearly well-defined in the course material is useless: we're simply asking students to remember something. They don't need to be _computing_ students at all to remember this.
- The role of the artifact repository in a build system
- Disadvantages of semaphores
Answering trend: _bad_. Students are very likely to simply write down the definition of an artifact repository or a semaphore---but we didn't ask that. We asked them to _situate_ and _interpret_ the concept: what could be a possible disadvantage? How does this fit into that? "A semaphore is used when..." is not the right answer here. I'm not even strict when it comes to grading this one: if your answer shows that you're capable of reasoning about these concepts, you earn the points. Unfortunately, most don't. I think part of the problem is that we don't reason about and discuss concepts thoroughly during a lecture: we mostly provide definitions and exercise implementations, assuming students will figure out by themselves how to reason. I guess not.
**Question**: Here's what a `TreeSet` data structure, which stores unique elements in sorted order, does in Java:
```java
var set = new TreeSet<String>();
set.add("Dog");
set.add("Cat"); // "C" comes before "D": added before Dog
set.add("Dog"); // returns false: already added
set.add("Zombie"); // last element; 3 elements in total
```
- Emulate a `TreeSet` in C. Focus on correct pointer usage. Propose one or more data structures and give the correct _signature_ for `add()`. Describe in your own words what the function should do; implementing it is not needed.
Answering trend: _bad_. At this point, students don't know the principles behind a tree set---which is exactly the point. They've learned how linked lists work and should simply come up with a variant that could resemble a set. We even provide what makes a set a set: (1) unique elements and (2) a sorted structure. Some students answered: "I don't know Java" or "I don't know a TreeSet". All there is to know is right here in the question... But the question didn't say "implement a `LinkedList` in C", which would have had a better answering trend (we tried).
It gets worse: most students are not very good at thoroughly _reading the question_. We hinted that `add()` returns `false` if an element is already present, yet 90% of the answered signatures did not return anything. Why? Those were basically free points you just squandered!
- How would you conceptually implement a `set.remove("Cat")` function? Elaborate all steps, again no need to write the code.
Answering trend: _good_. `remove()` isn't anything special compared to a linked list remove they've been practicing, but we do expect pointer relinking and memory cleanup mentions---which most students dutifully do.
- Explain why a `contains()` function of a set works more efficiently than that of a conventional `ArrayList`, represented in C as a simple linked list.
Answering trend: _mixed_. I expected this one to be better, but it's not too bad. The answer was already given in the question: a set is sorted.
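The sorted property is what enables an early exit: in my hypothetical linked-list sketch, a miss can stop as soon as the current element compares greater than the target, whereas an unsorted list must always scan to the very end. (A real tree-backed set does better still, with logarithmic lookups.)

```c
#include <stdbool.h>
#include <string.h>

struct node { char *value; struct node *next; };

/* In a sorted list we can bail out early: once the current element
 * compares greater than the target, the target cannot appear later. */
bool sorted_contains(const struct node *head, const char *value) {
    for (const struct node *cur = head; cur != NULL; cur = cur->next) {
        int cmp = strcmp(cur->value, value);
        if (cmp == 0) return true;
        if (cmp > 0)  return false; /* passed the sorted position: stop */
    }
    return false;
}
```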
**Question**: Here are four tasks with different arrival times, priorities, and durations. For the three algorithms given (variants of round-robin), draw a timeline that shows how these can be scheduled and calculate AJCT and CPU efficiency.
Answering trend: _good_. This is an exercise we explained during the lectures, practiced together, and assigned as homework. Again, free points: I expected this to be a good one. The only difficulty is that the given algorithm is not an exact copy-paste but comes with slight variations that students have to take into account.
And then we asked: "which of these three algorithms has conceptually the _least_ chance of a deadlock? Why?", which was kind of a trap, as there's too little information in the question to give a watertight answer: it also depends on the implementation of the software. Some students were smart enough to figure that out. Most just chose one of the three and gave a vague (partially incorrect) reason.
**Question**: Related to design patterns:
- Give two examples in code of dependency injection (1) in combination with singleton and (2) without. What are the advantages and disadvantages of both approaches?
- Give two reasons why design patterns can be useful, and a reason or situation why a design pattern might not be suitable.
Answering trend: _good_. DI and Singleton have been well practiced, are easy to spot and implement, and have clear pros and cons. Consider me relieved. Why a pattern might _not_ be useful is something we let students ponder in their own time, and most answers were valid ones. But then...
- Here's some engine code, a (simplified) piece that's part of a bigger open source project \[Explain context + code\]. Suppose we want to extend this by introducing \[something related\] by leveraging a design pattern. Modify accordingly and explain your choice.
Answering trend: _bad_. The initial difficulty of making sense of someone else's code, paired with the complexity of altering it to implement a new feature, seems to be too much for most students, even though they did exactly that (in a very limited fashion) as part of an open source software contribution assignment. This is an open-ended question: it doesn't matter whether you pick a factory, facade, or strategy; as long as you correctly explain why and manage to make at least somewhat believable adjustments, the points are yours.
---
What can we learn from these answering trends? That reproducing knowledge on short notice---for instance, for an exam---is not a problem, even though this knowledge is pretty much useless to software professionals. What _is_ useful, however, is the application of that knowledge in an entirely new situation, which seems to be much more difficult to get right. I presumed that reasoning about the concepts you learn would come naturally once you've mastered the concepts. The bad answers tell me this is not the case. "Begging" for points by simply giving the definition instead only makes it worse.
The way to a good answer starts with the correct analysis and interpretation of the question. If the question states "give me the method signature", then a return value obviously is a critical part of that! If the question states "clarify why", then simply stating an answer without a single sentence of context just won't do.
When it comes to (creative) [critical thinking](/post/2021/11/creative-critical-thinking/), there's still a lot of work to do.