Objectives
The objectives of the January tests are three-fold:
- To test the ability of relative beginners to produce succinct and elegant solutions to well-defined problems using the core apparatus of the Haskell language. This includes recursion, list processing, list comprehensions, basic higher-order functions, elementary type classes and data types; a small illustrative sketch follows this list.
- To expose students to interesting algorithms and data structures that can be considered part of core computer science. The archive of past tests, all of which have been made available as revision exercises, plays an important role in this aspect of the students’ education.
- To test students of all abilities. You will notice that each test comprises a number of relatively simple problems that all students are expected to be able to solve, and that these build up to increasingly harder and typically less well-defined problems designed to stretch the most able students. Put simply, I want every student to pass, but I don’t want them to leave early because they’ve “aced” the test! To a large extent I think these tests have succeeded on both fronts.
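To give a flavour of what this core apparatus entails, here is a minimal sketch, invented purely for illustration and not taken from any actual test, showing a simple data type, recursion over lists, a list comprehension and a higher-order function working together:

```haskell
-- An elementary algebraic data type with derived type-class instances
data Shape = Circle Double | Rect Double Double
  deriving (Show, Eq)

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Explicit recursion over a list
totalArea :: [Shape] -> Double
totalArea []       = 0
totalArea (s : ss) = area s + totalArea ss

-- A list comprehension combined with a higher-order function (map)
largeAreas :: Double -> [Shape] -> [Double]
largeAreas t ss = [a | a <- map area ss, a > t]
```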
Administration
The Haskell tests are three hours long and are sat under examination conditions using an on-line administration system called LEXIS. This isolates each machine from the network, provides access to a limited set of resources defined by the administrator and takes frequent checkpoints of each student’s work. LEXIS is freely available and I thoroughly recommend it. Visit the LEXIS page to find out more.
The Archive
The archive includes all tests sat since 2009. I have taken the liberty of making minor edits to the text of the original specifications, for example, correcting typographical errors, improving the wording of the text and reformatting the documents to make them broadly consistent in look and feel. In one or two cases I have also modified and/or re-ordered some questions where I have felt that the problem description and/or solution could have been improved with the benefit of hindsight. I have included a short overview of each exercise which includes a summary of, and rationale for, any such changes. I have also included with each exercise a short digest of additional information. The intention is to provide a small number of pointers to mostly on-line material that is directly relevant to the test problem. It is not intended as a detailed bibliography.
Difficulty
I have had a go at rating the difficulty of each test using a star system with * being the easiest and *** the hardest. This is based on how difficult I think the test is, rather than on how well the students performed. I would not take it too seriously.
Marking Criteria
The first year programming courses at Imperial place a lot of emphasis on code clarity, elegance and structure, and the marking criteria used in the on-line tests are designed to assess these as much as they are to assess correctness. All submissions are auto-tested against a test suite, but the results are only used to help with the marking and feedback. A common misconception is that ‘correct’ programs, i.e. those that pass the tests in the test suite, will automatically receive high marks. This is not the case.
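As a hypothetical illustration (this example appears in no actual marking scheme), both functions below pass identical tests, yet only the second is written in the style the criteria reward:

```haskell
-- Correct, but needlessly convoluted: explicit head/tail dissection
-- and nested conditionals obscure a simple idea
count' :: Eq a => a -> [a] -> Int
count' x ys = if null ys
              then 0
              else if head ys == x
                   then 1 + count' x (tail ys)
                   else count' x (tail ys)

-- Correct and clear: the intent is immediate from the composition
count :: Eq a => a -> [a] -> Int
count x = length . filter (== x)
```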
Marking Scheme
For completeness I have included a mark allocation scheme which is pretty much that used in the original test, but adjusted where necessary to reflect any structural changes alluded to earlier. The scheme is designed to help the weaker students to pass whilst giving the stronger students the opportunity to shine. Generally, I have been very pleased with the mark distribution, which, it turns out, is emphatically devoid of ‘two humps’. Each test overview includes a brief summary of how my own students performed on the day. You may notice that the average mark is quite high – typically over 70%. I have no problem with this, as I think this reflects pretty well the students’ ability to write working Haskell code under very stressful conditions. They are a good bunch!
Optimality
The exercises are not designed to yield the ‘best’ or most efficient solution to the problem at hand and there are plenty of reminders in the narrative to this effect. Where appropriate I have included in the overviews a short discussion of key performance/complexity issues and, in some cases, how more efficient solutions can be developed.
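As a generic illustration of the kind of issue these discussions cover (the example is mine and tied to no particular test), both definitions below are correct, but the first is quadratic in the length of its input while the second is linear:

```haskell
-- O(n^2): each (++) re-traverses the reversed suffix built so far
slowReverse :: [a] -> [a]
slowReverse []       = []
slowReverse (x : xs) = slowReverse xs ++ [x]

-- O(n): an accumulating parameter avoids the repeated traversals
fastReverse :: [a] -> [a]
fastReverse = go []
  where
    go acc []       = acc
    go acc (x : xs) = go (x : acc) xs
```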
Solutions
I have made the decision not to publish model solutions to these problems, so please don’t ask me to provide them. I will, however, be happy to discuss specific implementation details with course instructors.
Errors/Feedback
Feel free to notify me (ajf@imperial.ac.uk) of any errors in the specifications or template files. All feedback will be gratefully received.
Acknowledgements
A very special thanks to the following without whom these tests could not have happened:
- Patrick Ah-Fat – Teaching Scholar
- Qianyi Shu – Undergraduate student
- Tristan Allwood – Teaching Fellow
- Peter Cutler – Teaching Assistant
- Will Jones – former PhD student and now External Lecturer
- Nicolai Stawinoga – former PhD student
- Iain Stewart – Teaching Assistant
- Duncan White – Computing Systems Support and LEXIS co-author
- Mike Wyer – LEXIS co-author
- Lloyd Kamara – LEXIS test administrator