At the end of last month I ran a short workshop for library staff, covering the main features and operation of Raptor and allowing time for some experimentation and questions. Raptor is installed as a test service at Kent, and once added to the user group these library staff were able to access the application from wherever they were working, using their usual network username and password (LDAP). I asked the test group to find time to experiment with the application over the following weeks, then sent out a link to an online survey with questions on the user interface, the customisation options and the usefulness of the available reports. The survey featured some straightforward questions but also asked the users to produce a graph showing authorisations to a specific resource over a specified period. They were then asked to change the parameters and sort order for this graph and finally to export a PDF of it. The survey asked for comments as well as ranked responses on ease of use and usefulness.
This was a very small test group, so we should be wary of reading too much into the results. In general, however, the testers gave positive feedback on Raptor. Suggestions were made for improvements to the interface, which will be fed back to the developers. At Kent, response times were not always great, though this may not be the fault of the software. More frustrating was the lack of feedback during the period between requesting an update and Raptor producing – or sometimes failing to produce – the requested graph. Users did not know whether ‘anything was happening or not’. When updates failed, it was not always apparent that this had happened, as the previous version of the graph remained visible and the Processing Status indicator is not particularly prominent. These are minor issues which I am sure can be eradicated in future releases – Raptor is still in the early stages of development. Overwhelmingly, the group considered Raptor a useful tool that would assist them in their work and planning.
The librarians at the University of Kent have suggested some reports, which I have now included in Raptor's list of pre-configured reports. We have installed the application and it is running well, available to the library staff who will help us evaluate this service. Last week we held a short workshop to introduce Raptor to library staff and gather some initial feedback. These staff will familiarise themselves with Raptor and will then be asked to complete a series of tasks and provide feedback.
The workshop produced requests for some data that Raptor does not currently collect and probably never will – for instance, a measure of how long a user stayed on a particular e-resource site. This would be difficult even if the log files did store such data: we would need to distinguish between users who merely had the resource open in a browser tab or another window but were not actually interacting with it, and those who were searching for data or reading. Not so easy.
I am sure a common observation from Raptor users is that the drop-down parameters are hard to decipher and there are often too many of them – some of no relevance. Again, this may be something we can tackle when we look at how to integrate with Microsoft Reporting Services.
Workshop participants also identified one or two reports that did not work as expected but I have yet to establish whether this is down to Raptor or to badly configured XML files – which will be down to me….
I am currently working on an evaluation survey to gather opinions on the usability of Raptor for library staff.
The guidelines do not state that we cannot keep the data for longer periods, but they do stipulate that if we keep it, it must be anonymised. Anonymised data is less rich: it may not be possible to extract the type of affiliation from anonymised data, for instance, if there is no longer a username to look up. The problem is not insurmountable, but it does bring into focus a more general issue with Raptor and its use by non-technical staff.
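To illustrate the trade-off, here is a minimal sketch of one common approach to anonymising log files before longer-term retention. This is not Raptor's actual log format or process – the pipe-delimited layout and the salt handling are assumptions for the example – but it shows how a keyed hash can replace usernames with stable pseudonyms, so per-user counts survive while identities do not:

```python
import hashlib
import hmac

# Secret salt: without one, common usernames could be recovered simply by
# hashing guesses. Assumption: this is managed locally and kept out of the
# anonymised output.
SALT = b"local-secret-salt"

def anonymise_user(username: str) -> str:
    """Replace a username with a stable pseudonym (same input -> same output)."""
    return hmac.new(SALT, username.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def anonymise_line(line: str) -> str:
    """Assumed log layout: 'timestamp|username|resource' (pipe-delimited)."""
    ts, user, resource = line.split("|", 2)
    return "|".join([ts, anonymise_user(user), resource])

print(anonymise_line("2012-03-01T10:15:00|jsmith|ejournal-x"))
```

Note the limitation described above: once the username is gone, a later lookup of affiliation is impossible, so any attributes needed for reporting (department, user type) would have to be attached to each record before anonymising.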
An ideal tool for the library liaison librarians and library management would be one that could deal with ad hoc reports. By its nature, an ad hoc report should not have to be delayed whilst someone in the Learning & Research Department finds time to create it. The seriousness of this may vary across the sector depending on the level of technical expertise among library staff. Some university libraries may well have staff who are comfortable tinkering with the XML to create new and modified reports, or who can anonymise data and import the modified logs for Raptor to parse. But many libraries will rely on their IT department to do this work for them. Whilst IT departments will be happy to provide this service, it will not always be possible to respond as quickly as the requesters would like.
I would be interested in hearing other Raptor users' views on data protection issues and how they intend to tackle them.
BTW, it feels a little mean criticising Raptor, but I guess that is part of the evaluation. So I would also like to say that we think Raptor is a really useful tool, and one that was much needed.
Other work to be done in these early days of the project includes setting up the Steering and Implementation groups and agreeing a schedule of meetings for the duration of the project.
So, in summary, not much to report yet, but the project team are very enthusiastic about what ARK can deliver. Prior to the implementation of Raptor, the University had been unable to monitor the use of e-resources at the level of detail needed to inform strategic planning. There is little doubt that the impact of the ARK project will be useful, not just to Kent but to the wider HE/FE community.