I finished reading a book this weekend about the Scopes Monkey Trial. It's one I'm reviewing for Library Journal, so when the review is published I'll try to remember to add a link to it. In the meantime, though, what really stuns me about it is the way individuals, serving their own purposes for the most part, just happen to spur events that impact a whole nation and whose impacts reverberate for years.
The Scopes Monkey Trial happened because the town fathers of a small Tennessee town sought to boost the town's economy. At the time, it was against state law to teach the Darwinian theory of evolution in public schools. They encouraged a young high school teacher named John Scopes to allow himself to be indicted for breaking this law by teaching evolution in the local high school. The ACLU leapt to defend him, and thus began one of the most widely followed trials of the early 20th century.
Monday, May 28, 2007
Thursday, May 17, 2007
IUG 2007 – Annual Serials Renewals Made Easy
Jane Theissen of Fontbonne University (yeah, THAT Fontbonne!) made this presentation.
Jane walked us through the steps she takes to involve faculty in the annual serials renewal process. Fontbonne uses Ebsco Subscription Services as its subscription agent. She creates a review file of current subscriptions in Millennium and then adds usage data and current price to it in a spreadsheet. Then she estimates price increases using Ebsco’s Historical Price Analysis Report. The spreadsheet is sorted by fund code (academic department) and distributed to each academic department for changes, approval, etc.
Fontbonne has about 300 subscriptions and this is a fairly simplistic formula, but I think that it could be a place to start a comprehensive review. Also, they do this during the spring semester (before receiving the official renewal list from Ebsco).
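Jane's spreadsheet step can be sketched roughly in code. To be clear, this is my own illustration, not her actual process: the column names and the flat 6% increase are placeholders (she derives real estimates from Ebsco's Historical Price Analysis Report).

```python
from collections import defaultdict

EST_INCREASE = 1.06  # placeholder; real figures come from Ebsco's report

def build_review_sheets(rows):
    """Group subscriptions by fund code (academic department) and add an
    estimated renewal price and a cost-per-use figure for faculty review."""
    by_fund = defaultdict(list)
    for row in rows:
        price = float(row["current_price"])
        uses = int(row["uses_last_year"])
        row["estimated_price"] = round(price * EST_INCREASE, 2)
        row["cost_per_use"] = round(price / uses, 2) if uses else None
        by_fund[row["fund_code"]].append(row)
    return by_fund

# Invented sample data standing in for the Millennium review file export
rows = [
    {"title": "Journal A", "fund_code": "BIO", "current_price": "450.00", "uses_last_year": "30"},
    {"title": "Journal B", "fund_code": "ENG", "current_price": "120.00", "uses_last_year": "0"},
]
sheets = build_review_sheets(rows)
```

Each per-fund list could then be written out as the sheet that goes to that department.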
She noted that one disadvantage of her current process is that although the changes they recommend are made in May, they don’t take effect until the following January (when most journals begin new volumes and thus new subscriptions).
In response to a question from the audience, she told us that they don’t have any individual subscriptions for e-journals only, they receive all of their individual subscriptions in print (unless they are included in an electronic aggregation).
Another audience member mentioned that he had learned from Ebsco that sometimes they actually end a library’s subscription and begin a new one and that results in the library receiving duplicate issues.
IUG 2007 – Holdings Conversion using Global Update
Steve Shadle and Sion Romaine of the University of Washington Libraries made this presentation.
Their project involved taking their free-text holdings statements and converting them to the MARC 21 Format for Holdings in the 853 and 863 fields in order to make them transmittable to OCLC. They anticipate that this will streamline ILL processing and make it easier for users to find journals. In the first phase of the project they converted approximately 169,000 records. They added 007 and 008 fields, converted call number tags, and converted holdings statements tags.
They presented planning and preparation steps. Preparations included normalizing and correcting existing data by identifying errors and typos, deleting extra spaces, etc. and identifying how holdings are expressed currently and whether they need to be standardized. Using regular expressions in the matches function in Create Lists would be useful for accomplishing this.
There are a number of things you can and cannot do with Global Update; most notably, you cannot make changes using regular expressions.
Using the example of converting the call number fields, Steve provided tips for practice. The key is to find the pattern in the existing data and then, using that pattern, create an algorithm that will achieve the desired change. In their local project, they converted call number fields (947s in the c group) and holdings statements (947s in the h group).
Oooh, adding a 947 field to the OCLC record before downloading will create a check-in record upon download, according to Steve. I’ll have to check that out!
Steve talked about how to use regular expressions in the matches function of Create Lists. In doing this he provided a screen shot from Millennium Create Lists search box in which the matches function is used with regular expressions. He also provided a screen shot from Global Update that displayed the five-step algorithm they used to convert a set of holdings statements.
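The pattern-then-algorithm idea translates naturally outside Millennium too. Here is a rough Python sketch of the same concept: match a free-text holdings pattern and emit paired 853 (caption) / 863 (enumeration) fields. The regex and the subfield layout are my own invention for illustration; the real conversion runs inside Create Lists and Global Update, and actual MARC Format for Holdings fields are richer than this.

```python
import re

# Illustration only: find the pattern in free-text holdings like
# "v.1 (1990)- v.10 (1999)" and generate simplified 853/863 pairs.
HOLDINGS_RE = re.compile(r"v\.(\d+)\s*\((\d{4})\)\s*-\s*v\.(\d+)\s*\((\d{4})\)")

def convert_holdings(free_text):
    m = HOLDINGS_RE.search(free_text)
    if m is None:
        return None  # no pattern match: leave this one for manual cleanup
    v1, y1, v2, y2 = m.groups()
    caption = "853 $$81 $$av. $$i(year)"
    enumeration = f"863 $$81.1 $$a{v1}-{v2} $$i{y1}-{y2}"
    return caption, enumeration

fields = convert_holdings("v.1 (1990)- v.10 (1999)")
```

The point is the same one Steve made: once the pattern is identified, the conversion is mechanical, and anything that doesn't match the pattern falls out for human review.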
I think this is the way we can, first, update our periodicals holdings to MFH and then consider creating holdings statements for monographs and other materials.
An audience member asked about the feasibility of exporting the data to other applications like OCLC. Steve replied that once the data is in MFH, it’s very easily exported. I got lost when he started talking about the gory details of exporting, but he did mention an ‘export table’ and I wondered whether that has anything to do with the import tables that my colleague Abel has been learning about this week.
IUG 2007 – Building license records in ERM
Diane Grover of the University of Washington made this presentation.
Until about five years ago, very few libraries did more than sign and file their license agreements with electronic resource vendors. At that point a number of libraries began to build systems to collect license information and to develop standardized language to describe the elements of a license. Some of these systems were homegrown and some were developed by companies like III. Diane’s presentation described a retrospective project at the University of Washington Libraries to convert their paper license agreements to electronic form and to include descriptive information from them in their III ERM.
They worked with a number of stakeholders in order to accomplish the project, including stakeholders within the library like the ILL team, access services for reserves, the library web committee, and the library digitization committee. Stakeholders outside the library included the printing department (which prints course packs) and the attorney general.
They planned to include much of what had originally been preserved in paper files in addition to the licenses themselves, for example correspondence with vendors. Diane reviewed an active license record from their system and discussed some of the fixed and variable fields that they were using and why. They decided to store their digitized licenses on D-Space and then included links to them in their ERM license records. She also shared screen shots of their license records as they appear in the OPAC.
Other libraries are using other methods to accomplish similar ends. Some are hosting on their own web servers, others are simply using spreadsheets and database applications (like MS Access). Others are using Docutek ERes or the Millennium Media module. Some are also using OCR scanning to make the digitized licenses searchable. Most, like UW, apply some form of access control to keep the digitized licenses secure.
Some issues were common to most libraries: decisions about what data to record and what data to display, for example. Most also struggle with the selection of language and terms that both users and library staff understand, and with interpreting complex licensing language and terms. Diane shared an outcome she had not anticipated: her librarians and staff became upset when they learned about some of the activities that licenses do not permit. And finally, all agreed that this kind of project is slow going.
Finally, she offered some advice about what to do and what to avoid as well as the outcomes within the library and the university that accrued from the project.
This was a really useful session that I REALLY hope that I remember to come back to when (if) we implement ERM. I do wonder though whether all this work is strictly necessary in light of the murmurs I’ve heard lately about the need for and possibly beginnings of development of a standard electronic resource license agreement.
Diane concluded the session with a discussion of some of the standards and de facto standards that are currently in development for ERM. She covered ERMI, a de facto standard that is being widely used in the U.S., including in III’s ERM. She also mentioned the NISO License Expression Working Group (LEWG), which is working on an XML-based license terms transmission standard called ONIX-PL that is not completely compatible with the ERMI data elements. NISO is also working on a “non-license” approach called SERU (Shared E-Resource Understanding).
IUG – My Millennium: How long since you’ve taken a look at this powerful and personalized patron information page?
Dinah Sanders, Product Manager, and Dan Mattson, Library Training Consultant, from III made this presentation.
My Millennium is a suite of tools that allows library users to obtain information without help from library staff. It contains basic patron information (name, address, etc.) as well as materials checked out, fines accruing, RSS feeds, etc. Dinah recommends turning it on and then making iterative changes and improvements since it is user ready “out of the box”.
There are a lot of options that can be varied by patron type; this allows you to do things like customize functions specifically for library staff.
Some of the useful features include:
- The preferred searches feature allows the user to save frequently done searches to easily repeat them. It is also useful as an alert (via email) of new materials that result from a saved search.
- A function that allows them to update their own personal information. The library can customize the fields that patrons are allowed to update.
I wonder if this would be a useful way to maintain staff information like the telephone list.
- In conjunction with ResearchPro, III’s federated search engine, users can save searches or groups of resources that they frequently search.
This would be similar to the function in Search-All-Databases (Metalib) that allows users to create their own groups of resources.
- Patron ratings, III’s first product that allows users to provide their opinion about library materials. Since it only displays stars, there’s little potential for abuse and therefore it would require very little library staff intervention and moderation. You can turn it on by patron type and thereby target a specific audience (e.g. faculty).
- Users must opt in (and can opt out) of a reading history function that allows them to track materials that they’ve checked out. There is no staff access to reading history, the only way to access it is to log in as the user.
- Oh this is COOL: when logged in, users can limit searches to only the items in their reading history AND, with release 2007, they can export the contents of their reading history to their “shopping cart” (and, presumably, from there to an external application like MS Word or Excel).
- Also coming with release 2007 is a “my list” function that will allow patrons to create and maintain a list of materials, whether or not they’ve checked them out.
- Also coming with release 2007 is a “forgot your password?” function, the use of which is obvious.
- A feature that can be turned on is a special, library staff display of a bib record on the patron side that includes some of the fixed field data like date of last update, creation date, record number, bib level, material type codes, and so on.
One disadvantage for our library is that it collects more information about our users than we typically keep, for instance their reading history. I wondered whether the My Millennium info disappears when students, for example, are purged and reloaded at the end and beginning of each semester. Dinah answered yes, it disappears when the patron record is purged. That brings up another question: a student’s reading history might not be sustained from semester to semester, which could be a problem especially for doctoral students, and ratings would not tend to develop over time (the natural emergence of a community opinion, which would normally develop according to systems theory, would be blocked, stunted).
IUG 2007 – Quick start to ERM
Ted Fons, III Senior Product Manager and Caitlin Spears, III Library Training Consultant
ERM is a central place to store information about electronic resources, and it exposes that information to staff and library users. Quick Start to ERM is a collection of materials to help libraries get started with ERM once they’ve installed it: a training guide, records to install, and updates to the fields on older implementations.
Quick start is a service that can be purchased from III for older installations. http://csdirect.iii.com/documentation/training/#qs is the address for the Quick Start Guide that goes with the Quick Start service. Note that this is a password protected link that will only be available to III customers with a username and password.
Caitlin’s part of the presentation included a very useful timeline for implementation and then she went back and reviewed the steps in detail. One of the most useful parts of this is the fact that they can take our existing data (in an Access database, an Excel spreadsheet, or existing bibliographic records for instance) and upload it into ERM.
There are really only two required fields for resource records: the resource name and the resource ID. There are no required fields for license records. There are two required fields for contacts records: contact code and contact name. Resource records also provide the capability of sending an email reminder of things like a renewal date or other important dates relevant to the maintenance of the resource.
Coverage data can be imported via CASE, vendor-provided files, external knowledge bases (e.g. SFX…see session M5 for more info about a script that allows importing coverage data from the SFX knowledge base), or library-maintained fields. The load is keyed on the resource ID in the coverage field. If an incoming record in the load data does not match a record in the library’s system, a brief bib with holdings data is created. Matching is based on the “i” index (ISSN field). There is also the capability of editing the coverage information that comes in with the load.
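The match-or-create-brief-bib logic can be sketched as follows. The record shapes and the ISSN lookup here are my own assumptions for illustration; in Millennium the match actually runs against the "i" (ISSN) index.

```python
# Sketch of the coverage load described above (invented data shapes).

def load_coverage(coverage_rows, issn_index):
    """Attach coverage to matching bibs; create brief bibs for the rest."""
    created = []
    for row in coverage_rows:
        bib = issn_index.get(row["issn"])
        if bib is None:
            # No match in the local system: create a brief bib record
            # that carries the holdings data.
            bib = {"title": row["title"], "issn": row["issn"], "brief": True}
            issn_index[row["issn"]] = bib
            created.append(bib)
        bib.setdefault("coverage", []).append(
            {"resource_id": row["resource_id"], "dates": row["dates"]}
        )
    return created

issn_index = {"0028-0836": {"title": "Nature", "issn": "0028-0836"}}
new_bibs = load_coverage(
    [
        {"issn": "0028-0836", "title": "Nature", "resource_id": "agg1", "dates": "1990-"},
        {"issn": "1234-5679", "title": "New Journal", "resource_id": "agg1", "dates": "2000-"},
    ],
    issn_index,
)
```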
Caitlin included some examples in her presentation, for instance a resource search in the Cornell University Libraries site, a journal search from the University of Arizona Library site and the Yale Law Library.
ERM can also provide an “A-to-Z” list of electronic resources. Some more examples are Bowling Green State University and Georgetown Law Library. The A-to-Z list of resources is searchable by title and by subject (some of which come out of the box and which are customizable).
Additional Quick Start immediate capabilities:
-user level resource “terms of use” and resource advisories
-reminder emails
-staff level license and resource information
-integration with WebBridge, III’s open URL link resolver
-the ability to import usage data using SUSHI
-title overlap reports
Finally, they provided a link to a document authored by Mark Strang at Bowling Green State University that includes a pictorial guide to ERM records, wwwoptions, and some html files which is available at
http://innovativeusers.org/cgi-bin/clearinghouse/view.pl?id=123.
IUG 2007 – Implementing e-claiming of serials issues via email
Shirley Lincicum of Western Oregon University Libraries made this presentation.
III’s e-claims product enables libraries to send claims directly to a subscription agent’s system via email using the Millennium Serials claim function. Shirley’s presentation was exceptionally well organized. She covered the pre-requisites required in order to implement e-claiming followed by the initial set up requirements with both III and your serials vendor (they use Ebsco almost exclusively), then pointed out limitations and considerations and finally walked us through the process they use at WOU.
The only really work-intensive part of setup is including the information required by the serials vendor in check-in records. Ebsco can help with this in two ways: (1) by sending the required data with the annual renewal invoice (if you receive it electronically) and (2) by sending a list, called Ebscan, of barcodes that the library can scan into each check-in record.
An interesting feature of the e-claims process in Millennium Serials is that it creates a “hidden” review file of claims to be sent. This file can be printed out or saved to a file and then used in some pre-claim processes: shelf-reading to be sure that an issue has not arrived and simply not been checked in, and verifying that the issue has actually been published (with Ebsco, the simple way to do this is to check the JETS service within Ebsconet to see if the issue has been received).
The obvious benefit to sending e-claims via email is the time savings that accrue from simply creating the claims list in Millennium and routing the data to the subscription agent via email, as opposed to entering it into the subscription agent’s system one title at a time. Other benefits include the one I mentioned above: the ability to output a list of potential claims for further processing before sending it on to the subscription agent. Shirley also pointed out that it cuts down on the mundane communication between the library and the subscription agent, which frees up both the librarian and the customer service rep to focus on issues that actually require human intervention.
Some interesting by-product information from this system was (1) that another option for communicating claims information between library and subscription agent is FTP (although this does not allow for two way communication, e.g. acknowledgment that the claims have been received by the subscription agent) and (2) that a potential use for fixed fields in the check-in record is coding claims restrictions which allows the library to select a subset of all subscriptions to send through the Millennium claims process.
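The pre-claim review step boils down to a simple filter over the candidate list. This sketch is mine, with invented names; Millennium builds the candidate list itself, and the shelf-read and JETS checks happen offline before anything is emailed to the agent.

```python
# Hypothetical sketch of the pre-claim review workflow described above.

def prepare_claims(candidates, on_shelf, confirmed_published):
    """Drop issues that turned up on the shelf (received but never
    checked in) or that the publisher has not actually released yet
    (e.g. per Ebsconet's JETS service)."""
    to_claim = []
    for issue in candidates:
        if issue in on_shelf:
            continue  # check it in instead of claiming it
        if issue not in confirmed_published:
            continue  # not yet published, so nothing to claim
        to_claim.append(issue)
    return to_claim

claims = prepare_claims(
    candidates=["JAMA v.297 no.18", "Nature v.447 no.7141", "Cell v.129 no.4"],
    on_shelf={"Cell v.129 no.4"},
    confirmed_published={"JAMA v.297 no.18", "Cell v.129 no.4"},
)
```

Only the issues that survive both checks go into the email to the subscription agent.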
III’s e-claims product enables libraries to send claims directly to a subscription agent’s system via email using the Millennium Serials claim function. Shirley’s presentation was exceptionally well organized. She covered the pre-requisites required in order to implement e-claiming followed by the initial set up requirements with both III and your serials vendor (they use Ebsco almost exclusively), then pointed out limitations and considerations and finally walked us through the process they use at WOU.
The only really labor-intensive part of setup is adding the information the serials vendor requires to check-in records. Ebsco can help with this in two ways: (1) by sending the required data with the annual renewal invoice (if you receive it electronically) and (2) by sending a list of barcodes, called Ebscan, that the library can scan into each check-in record.
An interesting feature of the e-claims process in Millennium Serials is that it creates a “hidden” review file of the claims to be sent. This file can be printed out or saved and then used in pre-claim checks: shelf-reading to be sure that an issue has not simply arrived without being checked in, and verifying that the issue has actually been published (with Ebsco, the simple way to do this is to check the JETS service within Ebsconet to see whether the issue has been received).
The obvious benefit of sending e-claims via email is the time saved by creating the claims list in Millennium and routing the data to the subscription agent in a single message, as opposed to entering it into the subscription agent’s system one title at a time. Other benefits include the one I mentioned above: the ability to output a list of potential claims for further processing before sending it on to the subscription agent. Shirley also pointed out that it cuts down on routine communication between the library and the subscription agent, which frees both the librarian and the customer service rep to focus on issues that actually require human intervention.
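To make the time savings concrete, here is a minimal sketch of batching a claims list into a single email rather than keying each title into the agent’s system individually. Every name here (the function, the field layout, the addresses) is hypothetical; Millennium’s actual claim output format and Ebsco’s intake format differ.

```python
import smtplib
from email.message import EmailMessage

def build_claims_email(claims, library_account, agent_address):
    """Bundle a list of (title, issn, missing_issue) tuples into one
    claims message. The tab-separated layout is illustrative only,
    not Millennium's actual export format."""
    lines = [f"{issn}\t{title}\tmissing: {issue}"
             for title, issn, issue in claims]
    msg = EmailMessage()
    msg["Subject"] = f"Serials claims for account {library_account}"
    msg["To"] = agent_address
    msg.set_content("\n".join(lines))
    return msg

claims = [("Journal of Testing", "1234-5678", "v.12 no.3"),
          ("Annals of Examples", "9876-5432", "v.8 no.1")]
msg = build_claims_email(claims, "ACCT123", "claims@agent.example")

# Sending the whole batch would then be a single call, e.g.:
# with smtplib.SMTP("mail.example.edu") as s:
#     s.send_message(msg)
```

However the message is formatted, the point is the same: one review file becomes one transmission, instead of one manual entry per title.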
Some interesting by-product information from this session: (1) another option for communicating claims information between library and subscription agent is FTP (although this does not allow for two-way communication, e.g. acknowledgment that the claims have been received by the agent), and (2) a potential use for fixed fields in the check-in record is coding claims restrictions, which lets the library select a subset of all subscriptions to send through the Millennium claims process.
IUG 2007 – WWWOptions, WebPub and new screens, oh my! OPAC redesign for 2006 and WebPAC Pro
Aimee Fifarek of the Scottsdale Public Library made this presentation.
This program began with some in-depth, HTML-rich slides describing the initial setup for WebPAC Pro. The first thing she showed us was how to use the style sheets to customize the appearance and content of the tabs, both main tabs and help-page tabs. She noted that the naming conventions for different tabs (active vs. inactive, appearance vs. content) differ, which can be confusing if you’re not aware of it. There are also some small but important changes that need to be made for the pages to display properly in IE7.
Some of this was not terribly useful for me since I don’t actually design or work on our library web site. But I am on the Information Architecture Working Group at my library, which is charged with creating the architecture (structure) of our new web site. The confusing thing is that the web site is different from the OPAC. The OPAC is usually embedded in the overall library web site, and up to now the OPAC was typically designed (colors, fonts, etc.) to match the web site. We’re finding that we’re doing the reverse: designing (customizing) our OPAC using WebPAC Pro first and then applying that design (and the style sheets underlying it) to the rest of the web site.
Most of Aimee’s PowerPoint slides included a citation to a page in the manual. Interestingly, she made a point of mentioning that the manual is actually pretty helpful for the functions she covered (apparently this is not always the case).
But the really cool things about WebPAC Pro are the options you can add once you’ve installed it. They include inbound and outbound RSS feeds, a spell checker, and the ability to let library users write their own reviews of items in the collection (some of the things that Mark Strang talked about in his session, Enhancing the Virtual Catalog Experience). Unfortunately, these were the things I was really interested in hearing more about, particularly whether and how they are using them, but she didn’t spend much time on them.
In response to a question from the audience, Aimee made the point that users are used to rapid change in web development. The library web page is not a reference book; it’s not static, and most users are going to expect to see changes (improvements) there from time to time. I would add that, in keeping with the user-centered design movement, the ‘gurus’ all agree that one benefit of including users in an iterative design process is that it gives them the impression that the library cares about their opinions and needs and responds to them.
IUG 2007 – Playing with matches: Using regular expressions in Create Lists
Richard V. Jackson of Huntington Libraries made this presentation.
Create Lists is a function of Millennium that allows library staff to automate the creation of groups of bibliographic records. It’s used for a huge variety of purposes, for instance listing newly acquired materials, listing journals that serve a specific discipline, or listing records in need of some kind of revision. ‘Matches’ is a fairly sophisticated and seldom-used means of including records that describe materials with particular characteristics while excluding others.
‘Matches’ uses what are called ‘regular expressions’: combinations of literal characters and meta-characters (characters that tell the system what to do). For example, directing Millennium to find a single letter in a particular field only when it is the last character in that field means telling Millennium both what you are looking for (the literal letter) and that results should be limited to records where that letter occurs at the very end of the field.
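To make the end-of-field example concrete, here is the same idea expressed as a standard regular expression in Python (Millennium’s ‘matches’ syntax is similar in spirit, though not necessarily identical; the field values are invented):

```python
import re

# Toy field values standing in for data in a bibliographic field.
fields = ["mathematics", "mathematic", "physics"]

# '$' is the end-of-string anchor meta-character: the pattern 's$'
# matches a literal 's' only when it is the last character.
pattern = re.compile(r"s$")

matches = [f for f in fields if pattern.search(f)]
print(matches)  # only the fields that end in 's'
```

Without the `$` anchor, the same literal `s` would match anywhere in the field; the meta-character is what restricts the match to the end.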
Jackson reviewed and defined the function of all of the meta-characters using a nice variety of examples.
This particular search function significantly broadens Millennium Create Lists’ capability for sophisticated searching, which I’ll find tremendously useful both in what I do now and in what my library can do in the future, not least building creative RSS feeds.
IUG 2007 – Enhancing the virtual catalog experience: WebPAC Pro and the WebPAC Pro bundle
Mark Strang from Bowling Green State University made this presentation.
The spell checker works the way you’d expect: if a user misspells one of their search terms, options for correct spellings are displayed along with any results for the initial search. If they misspell more than one search term, drop-down boxes display options for correct spellings of each potentially misspelled word! In our initial usability tests at the Bell Library, two of our four participants misspelled a search term at some point during the testing, and all of them were ultimately unsuccessful in completing the task as a result. That indicates to me that a spell checker would be tremendously useful to them. My dream is that we could implement the spell checker in the initial version of our new design so that I could compare successes against the data we collected in those initial tests.
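Conceptually, a spell checker like this compares each search term against a vocabulary of indexed terms and offers the nearest matches. Here is a toy sketch using Python’s standard difflib; the vocabulary is made up, and WebPAC Pro’s actual implementation is not public, so this shows only the general idea:

```python
import difflib

# Toy vocabulary standing in for the catalog's indexed terms.
vocabulary = ["psychology", "philosophy", "physiology", "photography"]

def suggest(term, vocab, n=3):
    """Return up to n close spellings for a possibly misspelled term,
    best match first."""
    return difflib.get_close_matches(term.lower(), vocab, n=n, cutoff=0.7)

print(suggest("psycology", vocabulary))
```

A real OPAC would run this against its own indexes and present the suggestions alongside whatever results the original search produced.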
If a library chooses to implement it, Encore also supports inbound and outbound RSS feeds. Strang didn’t spend any time discussing inbound feeds, focusing instead on configuring outbound feeds. Most libraries’ initial use of outbound feeds seems to be new materials lists generated by Create Lists queries. Strang discussed the advantages and disadvantages of basing a feed on the query itself vs. a review file. In essence, a query is the preferred method because it allows the feed to update automatically, whereas a review file is static and must be updated manually (each update producing a new feed item).
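An outbound new-materials feed is, at bottom, just an XML document regenerated whenever the underlying query runs. Here is a hand-rolled sketch of the minimal RSS 2.0 structure such a feed produces; the titles and URLs are invented, and this is not how Millennium itself builds the feed:

```python
from xml.etree import ElementTree as ET

def new_titles_feed(channel_title, link, items):
    """Build a minimal RSS 2.0 feed from (title, url) pairs, the way
    a feed regenerated from a saved query would be."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = link
    for title, url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
    return ET.tostring(rss, encoding="unicode")

feed = new_titles_feed(
    "New Books", "http://catalog.example.edu",
    [("Example Title", "http://catalog.example.edu/record=b1000001")])
```

The query-vs-review-file trade-off Strang described is visible here: with a query, the `items` list is recomputed on each run; with a review file, someone has to rebuild it by hand.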
The feature that generated the most questions and comments from the audience is called Community Reviews, which allows library users to write their own reviews of library materials. Strang’s library was using it to present student reviews written as a class assignment and posted by library staff; they were not currently allowing users to post their own comments. That, however, is possible and can be moderated by library staff.
Of these three functions, it seems to me that both building RSS feeds and monitoring reviews would require a permanent commitment of staff time. Not necessarily Systems staff time, but library staff time certainly. A couple of questions occurred to me during this session in that regard. First, I wonder how much (if any) access to MilAdmin is required to monitor reviews, since the Systems Department may be reluctant to give much access to a non-Systems staff member.
IUG 2007 – Encore: Introducing our new discovery services platform
Dinah Sanders from Innovative Interfaces made this presentation.
Encore is a new Innovative product still in development and beta testing, and let me tell you, it is really cool! Essentially, Encore is a metasearch product: the library customizes the sites to be searched, and results are presented in different sections of the screen depending on their source, in a relevance-ranked order that is also somewhat customizable by the library. It brings together the OPAC, proprietary databases, and the Web all at once. Yeah, it does sound a bit like Google, and to some extent it is. The one big difference is that it is controlled by the library in terms of results display, and it searches not only freely available web resources but proprietary databases, so it can point users to many more reliable results customized to a particular library’s users.
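To illustrate the kind of work a metasearch product does behind the scenes, here is a toy sketch of merging separately ranked result lists into one relevance-ordered display. The scores, titles, and normalization scheme are all invented; Encore’s actual ranking is proprietary and certainly more sophisticated:

```python
def merge_results(*sources):
    """Each source is a list of (score, title, origin) tuples already
    ranked by its own engine; interleave them by normalized score so
    no source's scoring scale dominates the others."""
    merged = []
    for results in sources:
        if not results:
            continue
        top = max(score for score, _, _ in results)
        # Normalize so each source's best hit scores 1.0.
        merged.extend((score / top, title, origin)
                      for score, title, origin in results)
    return sorted(merged, key=lambda r: r[0], reverse=True)

opac = [(8.0, "Origin of Species", "catalog")]
databases = [(0.9, "Evolution article", "database"),
             (0.3, "Older article", "database")]
ranked = merge_results(opac, databases)
```

The normalization step is one simple answer to the core metasearch problem: each source scores relevance on its own scale, so raw scores from different engines can’t be compared directly.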
The one major disadvantage of Encore that I see is that it does not yet address finding articles…at all. That is what our users seem to have the most difficulty with, and given the trend I see in the literature on academic library web site usability testing, in which users fail to differentiate between different kinds of materials (books, videos, journals, articles, etc.) and between different formats of the same item, I would be reluctant to implement Encore until it supported searching for all kinds of materials.
Innovative Interfaces Inc. Users Group Annual Conference 2007
I've been at the Innovative Interfaces Inc. (III) Users Group (IUG) conference all week, taking notes like mad at some very interesting sessions. Unfortunately, internet access has been almost nonexistent (unless I chose to pay my hotel $13.95 a day, which I didn't), so the next ten or fifteen posts will be about the sessions I've attended and what I've learned.