SuperJournal Evaluation Team, April 1998
The purpose of this paper is to draw the various hypotheses we have developed within SuperJournal into a common conceptual structure and to develop a plan to collect and analyse further data to test them. The paper has three sections. The first offers an integrating conceptual structure and develops concepts of user assessments and the resulting patterns of user behaviour. The second re-examines the hypotheses in the light of this structure and identifies the causal pathways associated with the different groups of hypotheses. The final section outlines a plan to gather the additional data required and to analyse it, both to offer overall models of reader behaviour with electronic journals and to examine the contributions of individual hypotheses.
The overall aim of the SuperJournal evaluation is to build a model that explains user behaviour with the SuperJournal application. A subsidiary aim is to assess whether there is evidence to support the 33 hypotheses we have advanced about specific kinds of user behaviour. The individual hypotheses are related to one another, and we have identified the following conceptual framework which we believe underpins them.
Figure 1. Basic Conceptual Framework
The primary causal route of interest is that properties of the SJ application (and the service elements that deliver it) facilitate or impede regular usage. Most of the hypotheses, however, identify other independent variables which might influence usage: the tasks and discipline of the user, characteristics of the user (the roles they play, familiarity with electronic services, etc.), the support they receive from site facilities, and the other services they can use to study journals (Column 1 in Figure 1). The hypotheses predict that various combinations of these factors will help or hinder use of the SuperJournal service. The outcomes may be that the user makes no use of the service at all, tries it but does not come back, or becomes to some degree a regular user (Column 3). However, there is no direct link between Column 1 and Column 3: the provision of a facility does not lead directly to user behaviour. The process is mediated by individual users who, by some means, make a judgement or a series of judgements about the service which leads to their behaviour with the application. The way users interpret the factors in Column 1 to generate the outcomes in Column 3 is of central concern to us: these are the user assessments in Column 2.
The hypotheses suggest there are a variety of assessments that users are making and that they produce different patterns of usage. We can develop the basic conceptual structure to illustrate this in Figure 2.
Figure 2. User Assessments and Behavioural Outcomes
Users have four basic judgements to make: on the evidence they have before using it, is the service worth examining? Does it have content relevant to the task? Is access via this service as good as or better than access to similar material through other services? And does the service have functions and features that add value in important ways? Some of the hypotheses are about the factors users take into account when making these judgements; others are about their sequence. Figure 2 implies a sequence that would need testing.
The judgements people make affect the basic decision to use or not to use; when the decision is made to use the service, they then affect the form of use. Figure 2 identifies four dimensions of use: what content users access (which journals, which articles), the frequency of use (both the number of sessions and their pattern over time), the type of usage (depth vs. breadth: are they looking across TOCs or reading articles in detail?), and the functions and features they use (the essential core, or the array of facilities provided to add value). Many of the hypotheses operate at this level, looking at the way specific properties of the system influence the form of usage. In order to test these hypotheses we need ways of characterising user sessions on these dimensions. Figure 3 is a first attempt to define the major categories of interest.
Figure 3. A Classification of Usage
The divisions between limited, moderate and high usage are fairly arbitrary and could be refined as the usage record is analysed. Similarly, other variables in the usage data may be of interest.
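To make the classification concrete, the bands in Figure 3 could be operationalised along the following lines. This is only a sketch: the thresholds and function names are our own placeholders (the paper itself notes the divisions are arbitrary and would be refined as the usage record is analysed), and the depth-vs-breadth rule is one possible heuristic, not a definition taken from Figure 3.

```python
# Placeholder thresholds, to be refined against the actual usage record.
def classify_frequency(session_count, limited_max=2, moderate_max=10):
    """Bucket a user's total session count into a usage band."""
    if session_count == 0:
        return "none"
    if session_count <= limited_max:
        return "limited"
    if session_count <= moderate_max:
        return "moderate"
    return "high"

def classify_session(toc_views, articles_read):
    """Crude depth-vs-breadth indicator for a single session."""
    if articles_read == 0:
        return "breadth"          # browsing TOCs only
    if toc_views <= articles_read:
        return "depth"            # mostly reading articles in detail
    return "mixed"

print(classify_frequency(7))      # moderate
print(classify_session(12, 1))    # mixed
```

A refined version would add the other two dimensions of Figure 2 (content used, and functions and features employed) as further fields on each session record.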
Each of the hypotheses makes a statement about the way the independent variables affect usage outcomes and we are adding user assessments as the intervening and integrating variable. None of the hypotheses proposes a single factor causal path i.e. x leads to y. Instead they propose multi-causality in varying degrees of complexity. In the figures that follow we have identified the major factors that are implicated in the different clusters of hypotheses. From these pathways we can identify the information we need to collect and process to test each group of hypotheses. We have retained the existing groups of hypotheses except that we have combined Section 2 and Section 6 which both deal with initial usage as it spreads through academic communities.
The concept of forming a cluster to facilitate usage is the foundation of SuperJournal and Figure 4 identifies the major variables involved in testing whether and how the cluster concept affects usage.
Figure 4. The Cluster Concept in Operation
For the cluster concept to work some of the journals relevant to the user must be present in the SJ application and they must be at least as accessible as they are from other services. Where these properties exist we would expect repeat usage to occur. The concept of the cluster would be working if the user goes on to explore journals not normally accessed. We also expect there to be differences between the clusters.
Initial usage depends upon the user learning enough about the application to consider it worth examining, and thereafter deciding that it passes the tests of relevant content and access to make repeat usage worthwhile. None of the hypotheses suggests that the functions and features offered by the application are of particular significance at this stage.
Figure 5. Initial Usage and Changing Patterns of Use
Initial use of the application will result from the awareness generated by local site promotion. New users continue to register and to become repeat users in both the initial sites and at new ones. The process by which news spreads and awareness grows may well involve other factors such as communications within the user community.
The information environment within which the user operates serves both to provide the infrastructure for the SuperJournal application and to offer alternative ways in which the user can access journals. The environment provides the conditions for the user to assess whether access makes it worthwhile using SuperJournal. Access has many dimensions and some of them may lead to differences in usage patterns, e.g. early warning through alerting services may lead to regular use, access to journals not held in the library might encourage preferential use of these journals.
Figure 6. The Information Environment
The application provides many functions and features which may offer value to the user or may cause problems in use. Where these features (e.g. search engines) are perceived to add value, users may employ them, and their use may change the pattern of usage, e.g. a wider array of journals examined or, in the case of alerting services, more regular use of the service.
Figure 7. Functions and Features in the System
The examination of the hypotheses suggests we need to develop two kinds of model: an initial use model, which demonstrates the factors leading to use/non-use decisions, and a regular use model, which demonstrates the factors that sustain usage and lead to different patterns of use. It seems likely we will have approximately 1,500 registered users, of whom about 30% (some 500) will become to some extent regular users. In the evaluation plan we will use the full range of registered users for the initial use model and the regular user sub-sample to explore the factors leading to different patterns of use. In both cases we are attempting to produce micro behavioural models, i.e. accounts of the factors that influence individual users to behave in particular ways with the SJ application.
To aid this analysis we have detailed records of user behaviour organised by user (Column 3 in Figure 1). We have considerable information from the baseline studies about the factors which might influence user assessments (Column 2) but little information about what actually influenced the users in the sample. Of the independent variables (Column 1) we have details of the system and service provided to users and some information about the users (e.g. types of user). From the baseline questionnaire we know the journals of interest but we only have these questionnaires for a sub-sample of registered users. We know the journal holdings in each library at each user site but little else about user access to competing services or the kind of infrastructure support available to them. To build models of user behaviour and to test the various hypotheses we need to organise a survey of the users and non-users of the service.
To proceed in the most efficient and effective manner with the analysis we will:
1. Create a database in which individual users' usage records are supplemented with additional fields of information. At present these may be the details from baseline questionnaires where they are known. Other information will be added as it becomes available.
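This step amounts to joining the usage records to the baseline-questionnaire data on user identity, adding fields only where a questionnaire exists. A minimal sketch follows; the user identifiers, field names and values are illustrative placeholders, not drawn from the actual SuperJournal database.

```python
# Illustrative per-user usage records (keys and fields are placeholders).
usage_records = {
    "u001": {"site": "A", "cluster": "C1", "sessions": 14},
    "u002": {"site": "B", "cluster": "C2", "sessions": 2},
}

# Baseline-questionnaire data, available only for a sub-sample of users.
baseline = {
    "u001": {"user_type": "researcher", "journals_of_interest": ["J1", "J3"]},
    # u002 returned no baseline questionnaire
}

# Supplement each usage record with whatever baseline fields are known.
for user_id, record in usage_records.items():
    record.update(baseline.get(user_id, {}))

print(usage_records["u001"]["user_type"])    # researcher
print("user_type" in usage_records["u002"])  # False
```

Further fields (e.g. from the follow-up surveys or librarian interviews) could be merged in the same way as they become available.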
2. Identify the database records of regular users and sort according to site, cluster and type of user. At present this will be possible for three clusters (not Materials Chemistry). Classify the form of usage (using a refined version of the measures in Figure 3) for each user. Examine forms of usage against types of user. We anticipate, for example, that usage by librarians and perhaps by course students may be different from that of other users and should be excluded from the subsequent analysis. Analyse particularly for the issues that relate to preferences and problems with the service, to guide the follow-up study in 3 below.
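The sorting and exclusion described in this step could be sketched as a grouping over (site, cluster, user type), holding out the user types whose behaviour is expected to differ. The records, cluster labels and excluded categories below are illustrative assumptions, not the project's actual coding scheme.

```python
from collections import defaultdict

# Illustrative regular-user records (placeholder values).
regular_users = [
    {"id": "u001", "site": "A", "cluster": "C1", "user_type": "researcher"},
    {"id": "u002", "site": "A", "cluster": "C1", "user_type": "librarian"},
    {"id": "u003", "site": "B", "cluster": "C2", "user_type": "student"},
]

# User types held out of the main analysis (inspected separately).
EXCLUDED = {"librarian", "course student"}

groups = defaultdict(list)
for u in regular_users:
    if u["user_type"] in EXCLUDED:
        continue
    groups[(u["site"], u["cluster"], u["user_type"])].append(u["id"])

print(sorted(groups))  # [('A', 'C1', 'researcher'), ('B', 'C2', 'student')]
```

Each group could then be scored with the Figure 3 usage measures to compare forms of usage across types of user.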
3. Create a follow-up interview/questionnaire for the regular users to explore (a) the factors that caused them to use the application, (b) the factors that led to the specific forms of usage, and (c) the specific independent variables in the user's setting (journals of interest, infrastructure support, etc.). Ideally we should try to reach all users with an on-line questionnaire, but some questions necessitate an interview, e.g. questions relating to an individual's usage history. We could create a sub-sample of users to explore examples of different categories of usage, the use of specific functions and features, and issues which are difficult to study remotely, e.g. the effect of different kinds of journal presentation and whether users print articles before studying them.
4. Identify a sample of non-users matched as closely as possible with the sample of regular users, and create a more limited questionnaire to explore reasons for non-use. The questionnaire can be shorter because there is no repeat usage to explore. By controlling such variables as site, cluster and type of user we can examine the influence of other variables on decisions about use/non-use.
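The matching described in this step is exact matching on the controlled variables: for each combination of site, cluster and user type present among the regular users, draw non-users with the same combination, so that remaining differences can be attributed to other variables. A sketch under that assumption (all identifiers and values are hypothetical):

```python
# Illustrative samples (placeholder values).
regular = [{"site": "A", "cluster": "C1", "user_type": "researcher"}]
non_users = [
    {"id": "n1", "site": "A", "cluster": "C1", "user_type": "researcher"},
    {"id": "n2", "site": "B", "cluster": "C1", "user_type": "researcher"},
]

def match_key(u):
    """Controlled variables: site, cluster and type of user."""
    return (u["site"], u["cluster"], u["user_type"])

wanted = {match_key(u) for u in regular}
matched_sample = [n for n in non_users if match_key(n) in wanted]

print([n["id"] for n in matched_sample])  # ['n1']
```

In practice one would also cap the number of matches drawn per combination so the non-user sample mirrors the size and composition of the regular-user sample.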
5. Interview each librarian to establish site-based independent variables, especially infrastructure and competing services, and to locate and gather further information about particular users in the samples for 3 and 4 above.
6. Identify a sample of new users (during the launch of Materials Chemistry and from new registrations in the other clusters) and design an on-line questionnaire to explore their initial reactions and the likelihood of their becoming repeat users. This is necessary because the other follow-up studies may reach users who registered up to a year ago, whose memories of the factors that influenced early decisions may be unreliable.
The next stage is to plan these activities against the time frame remaining for the project. Many factors influence this plan: the end-date of the project, the dates of terms and holidays, the length of time available to Materials Chemistry users and when follow-up interviews can be undertaken with them, and the cut-off dates after which we will no longer analyse usage data, so that comparable data (i.e. covering the length of time the service was available to a site in each cluster) can be analysed. The planning of the follow-up questionnaires and interviews can be undertaken now, and data collected from users of the early clusters. Priority will need to be given to the study of new users, because Materials Chemistry users will reach this stage in the near future.
Last modified: June 22, 1998