SuperJournal Plan for Year 3


This paper summarises planning for the user evaluation research in Year 3. It does not cover the various other supporting activities, e.g. the launch of the journal clusters, activities at the library test sites, and development of the application. Please see the Progress Report for an update on these areas.

Objectives for Year 3

Year 1 focused on planning and building the SuperJournal application. Year 2 focused on getting the journal clusters up and running, bringing the libraries online, and maximising use at the library test sites. Year 3 focuses on the results of the user evaluation research; the project objectives for Year 3 are outlined in the sections that follow.

Usage Data

Since January 1997, the project has been collecting detailed data on usage of the SuperJournal clusters. Logfiles record each registration, login, and session at the user sites, and each user's interaction with the application during a session. The logfiles are recorded by Manchester and sent on to Loughborough for analysis.
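The logfile analysis described above can be illustrated with a short sketch. The actual SuperJournal logfile format is not documented here, so a simple tab-separated "timestamp, user ID, event" layout is assumed purely for illustration:

```python
from collections import defaultdict

# Minimal sketch: the real SuperJournal logfile layout is not shown in this
# plan, so a "timestamp<TAB>user_id<TAB>event" format is assumed here.
def summarise_log(lines):
    """Count events (logins, article views, etc.) per user from raw log lines."""
    events = defaultdict(lambda: defaultdict(int))
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 3:
            continue  # skip malformed records
        _timestamp, user_id, event = parts
        events[user_id][event] += 1
    return {user: dict(counts) for user, counts in events.items()}

log = [
    "1997-01-06T09:14\tu001\tlogin",
    "1997-01-06T09:15\tu001\tview_article",
    "1997-01-07T11:02\tu002\tlogin",
]
print(summarise_log(log))
# → {'u001': {'login': 1, 'view_article': 1}, 'u002': {'login': 1}}
```

Per-user event counts of this kind are the raw material from which session profiles and usage models can be built.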

The journal clusters were launched in succession: CCS in December 1996, MGP in May 1997, Political Science in July 1997, and Materials Chemistry in February 1998. This gives a full year of usage for each cluster for comparative studies (slightly less for Materials Chemistry), and in some cases longer.

Hypotheses

In January, Loughborough made a list of 33 hypotheses about user behaviour that could be tested using the usage data. These were discussed with the libraries, publishers, and Steering Committee for feedback.

In March, Loughborough developed an Evaluation Plan for Testing the Hypotheses. This created a conceptual framework for developing models of user behaviour. The hypotheses were linked to independent variables, e.g. characteristics of users, the information environment, tasks to be performed, characteristics of the service, and local factors that influence delivery. These independent variables, separately and together, influence the decisions that users make about whether and how to use the electronic journals, and this is reflected in the user behaviour observed.

Usage Models

Loughborough is using the methodology outlined above to analyse the usage data and build a factual picture of how the journals are used: what is used, by whom, where, how frequently, when, etc. SPSS is being used to perform the analysis and identify trends. Various tables, charts, and graphs are being generated to identify repeat users and non-users, to profile their use, and to develop usage models. This is raw factual data that simply documents what has been used and by whom.
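The classification of repeat users and non-users might be sketched as follows. The threshold used here (five or more sessions counts as "frequent") is hypothetical, not the project's actual definition:

```python
# Illustrative sketch only: the threshold of five sessions for a "frequent"
# user is an assumption, not a definition taken from the project.
def classify_users(session_counts, registered_users, frequent_threshold=5):
    """Split registered users into frequent, infrequent, and non-users."""
    groups = {"frequent": [], "infrequent": [], "non-user": []}
    for user in sorted(registered_users):
        n = session_counts.get(user, 0)
        if n == 0:
            groups["non-user"].append(user)
        elif n >= frequent_threshold:
            groups["frequent"].append(user)
        else:
            groups["infrequent"].append(user)
    return groups

counts = {"u001": 12, "u002": 2}
print(classify_users(counts, {"u001", "u002", "u003"}))
# → {'frequent': ['u001'], 'infrequent': ['u002'], 'non-user': ['u003']}
```

Checking session counts against the full registration list, as above, is what makes non-users visible at all: they leave no trace in the usage logs themselves.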

Behavioural Models

In order to translate models of factual usage into models of user behaviour, Loughborough is conducting follow-up studies with individual users. These build on the user studies conducted earlier in the project: the baseline questionnaire (quantitative) and the pre-launch focus groups (qualitative), which documented what readers said they wanted from electronic journals before they became users. Follow-up questionnaires will be sent to users of different types (e.g. frequent users, infrequent users, and non-users), with samples selected in a systematic way. Follow-up interviews will be held with repeat users.
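Systematic sampling of questionnaire recipients could be sketched as below. The project's actual sampling frame and interval are not specified in this plan, so this is only one plausible scheme:

```python
# Sketch of systematic sampling: order the population, then take every k-th
# member, where k is the population size divided by the desired sample size.
# The sampling frame and interval here are assumptions for illustration.
def systematic_sample(population, sample_size):
    """Return a systematic sample of the given size from the population."""
    ordered = sorted(population)
    if sample_size <= 0 or sample_size > len(ordered):
        raise ValueError("sample size out of range")
    step = len(ordered) // sample_size
    return ordered[::step][:sample_size]

users = [f"u{i:03d}" for i in range(1, 21)]  # 20 hypothetical user IDs
print(systematic_sample(users, 5))
# → ['u001', 'u005', 'u009', 'u013', 'u017']
```

Drawing a separate systematic sample within each user group (frequent, infrequent, non-user) would keep the follow-up questionnaires representative of each type.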

These follow-up studies will enable Loughborough to understand why users have used the journals, in what ways, what features they value, what they would have liked, etc. This, in combination with the usage models, will allow Loughborough to "prove" or "disprove" the hypotheses, build behavioural models for electronic journal services, quantify benefits, and develop a vision of the electronic services that users would like.

Electronic Journal Scenarios

The factual data collected and the behavioural models derived from them provide a sound basis for developing scenarios for future services. As the usage and behavioural models take shape, the project will develop a range of scenarios describing electronic journals and delivery services. These will be tested with selected users of the different clusters, to ensure they reflect what users want and to compare user requirements across the different subject areas.

Stakeholder Workshops

Three workshops will be held to explore the implications of the electronic journal scenarios for key stakeholders: librarians, publishers, and perhaps technical or service providers. As requirements may vary per subject area, the workshops will be planned along disciplinary lines: Humanities, Sciences, and Social Sciences. Approximately 20 participants will be invited to each workshop, and the scenarios and supporting documentation will be circulated in advance.

At the workshops, the different stakeholder groups will explore the implications of each scenario from their own perspective: what is needed to make it happen, what they would change, and what the business benefits and disadvantages would be. The combined group will then assess inter-stakeholder implications, set priorities among the scenarios, agree on the mixtures worth pursuing, and suggest players to create the services.

Reports, Conclusions, Recommendations

The end result of the stakeholder workshops will be viable scenarios for electronic journal services, validated by users and assessed by stakeholders in terms of benefits and practicality. The scenarios will be based on the quantitative and qualitative data collected and analysed, and on the resulting models of user behaviour, and will therefore provide a firm basis for drawing conclusions and making recommendations about future electronic journal services.

The final eLib report to JISC will document the various models developed, the scenarios, and the views of users and stakeholders regarding their implementation, comparing the requirements for different disciplines as relevant. It will present a vision of future electronic journal services and make practical recommendations as to how they can be developed.

In addition to the eLib report to JISC, a separate Dissemination Plan indicates other reports that will be prepared on different topics during the year, and how publications and conferences will be used to disseminate project results to a wide audience. At the end of the project, a one-day conference will be held for project participants to share results and conclusions. If sufficient project funding is available, the conference may be opened up to the wider community.

Summary Schedule for Year 3

Timing / Activity
January
  • Develop hypotheses to be tested
February
  • Develop methodology
  • Relate hypotheses to usage data
March
  • Evaluation Plan for testing hypotheses
April
  • Analysis of usage data (April-Nov)
May
  • Identification of frequent, infrequent, and non-users (May-July)
June
  • Develop usage models (June-July)
  • Develop and pilot follow-up questionnaire
July
  • Follow-up questionnaire and interviews (July-August)
  • Develop behavioural models (July-August)
  • Scenario development (July-August)
August
  • Validation of scenarios with end users (July-Sept)
September
  • Stakeholder Workshop 1: Communication & Cultural Studies
October
  • Stakeholder Workshop 2: Molecular Genetics & Proteins, Materials Chemistry
November
  • Stakeholder Workshop 3: Political Science
December
  • Reports, Conclusions, Recommendations
  • SuperJournal conference to share results and recommendations

This web site is maintained by epub@manchester.ac.uk
Last modified: July 03, 1998