Wednesday, June 27, 2007

ALA 2007 – Eye to I: Visual Literacy Meets Information Literacy

I liked this session, in part because I left my computer at the hotel on Sunday (we were going to the National Gallery of Art later in the afternoon) and had to take notes on a very small piece of paper, which gave me time to think. Actually, I hadn’t planned to take notes at all but apparently I’m physically incapable of not writing down the interesting things that I hear. Anyway…

The presenters were Cindy Cunningham, Director of Media Metadata and Cataloging at Corbis Corp.; Danuta Nitecki, Associate University Librarian at Yale University; and Loanne Snavely, Head of Instructional Programs at Penn State University. Their presentation was very well organized. They began by explaining that visual literacy is important because images have become so prevalent in our lives and will in all likelihood continue to be. Thus, visual literacy will rapidly become as important as information literacy (by which they mean textual information literacy).

There are several characteristics of visual images that make them more complex to seek and find. First is ownership: copyright in a visual image is often held by multiple entities including (using a photograph of an artwork as an example) the artist, the owner of the artwork, and the photographer. The second complexity is in making images accessible (e.g. cataloging them) because of the subjective nature of interpreting the symbols represented in an image, which is much more subjective than the interpretation of text, for which we have much greater consensus on meaning.

How’s this for a great job? At Corbis there is a team of people who scan the media (newspapers, TV, etc.) for trends and then they search for images that have some meaning or representation for the trend AND as many words as they can come up with to describe them…including making up words!

They shared a couple of web site URLs with us as examples of trends in photographic images:

http://www.ted.com/index.php/talks/view/id/129

http://www.video.google.com/videoplay?docid=824643990976635143

Then they asked, so what is visual literacy and why do we need to describe it? The answer is

- to gauge student development

- to evaluate teaching effectiveness

- to measure people’s ability to use them (visual images)

The problem is that there is no standard describing what the “right” or “necessary” level of visual literacy is. Danuta Nitecki presented a rubric that they have developed as a potential standard (FMI see ). The disadvantage of their rubric is that it doesn’t address or measure the ability to find images.

Cindy Cunningham mentioned that there are studies about image seeking. That made me wonder whether any of the methods and frameworks that have been applied to (textual) information seeking have been applied to image seeking (e.g. Lynne Westbrook’s mental models or learning theory). It reminded me of my first year in library school when I was so interested in researching the availability of visual images on the web, in particular art images.

Monday, June 25, 2007

ALA 2007 - The Future of Information Retrieval

This session was composed of four speakers. The first was Marydee Ojala who edits ONLINE: The Leading Magazine for Information Professionals and blogs at http://onlineinsider.net/.

The questions she addressed were:
Are there philosophical differences between information professionals and end-users?
How does this affect searchability and findability?

Information professionals enjoy the search itself and sharing information among themselves, BUT this can lead us to forget to stop, to overlook things, and to take longer. On the other hand, users want to find things; they don’t care about sources. The web makes searching pervasive but also unstable and has produced a culture of ‘gaming the system’ among publishers and providers.

In the future, the worst case scenario would be a highly controlled information environment where price doesn’t guarantee quality: shopping trumps research. In the best case, interfaces become intuitive, there are no licensing wars, and high quality information is easily available.


The next presenter was Jay Datema, Technology Editor for Library Journal.

He talked about how “search[ing] has been commoditized,” where the cost is privacy and people are making money from it; however, search syndication is a benefit to those who know what they’re looking for.

Sites like del.icio.us make it possible to find out what people are reading quickly and easily (presumably that makes it better than searching). See my del.icio.us page at http://del.icio.us/sarahwsutton

Authentication (and its ability to preserve the privacy of the searcher) is the future of searching. There is a growing expectation of finding the past in the search (e.g. backfiles).

He mentioned Zotero as a means of creating a personal digital library. Note that Zotero is one of my del.icio.us bookmarks in my ToRead tag category.

Here’s an interesting idea: Grokker as a federated search mechanism (some of the SUNY libraries are using it for that purpose).

Next, Mike Buschman, Technology Evangelist, Live Search Selection (Microsoft), spoke on The Future of Information Retrieval: When All Books Are Online, which seemed to me to be a bit preachy and sales-pitch-ish.

http://get.live.com/

MS Live Search Academic. About 5% of the world’s information is online; he talked primarily about the Live Search and Book Search products.

In the future directions section of his presentation he mentioned “unlocking non-textual information”. He mentioned music instruction books as an example, but I immediately thought of the visual information seeking session (Eye to I) that I attended yesterday.

Questions to consider:
What is the atomic unit of the book?
What is a work?
What is the future of the physical library?
How is the movement to digital information going to affect what library professionals do?

Finally, R. David Lankes http://www.DavidLankes.org (LIS Faculty) presented on The future of information retrieval: Finding conversations

I realize that these notes have been getting progressively more disjointed, probably because it’s early, it’s the third day of the conference, and he’s the last of four speakers.


The failure of Reference EXTRACT: mining data from reference transactions (which had been cleaned of any personally identifying info) and then calculating the frequency of appearance of databases.

Miwa’s Question Paradox: people ask the same questions at the beginning and end of a search with completely different intentions

McClure’s Citation Strategy: in order to get cited, say it first, say it last, or say it stupid

Note: he made both of these up based on things people he knows have done and/or said.

http://iis.syr.edu/Projects/PNOpen What he means by ‘conversation’ has to do with this project to decompose reference transactions using complexity theory and network theory as frameworks…the conversation is the network created by the paths between utterances in these decomposed transactions.
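
Just to get my head around the “network of utterances” idea, here’s a tiny sketch of my own (not anything from his talk) that treats a made-up reference chat as a directed graph, so the conversation is literally the set of paths between utterances:

```python
# My own toy illustration (not from the talk): representing a decomposed
# reference transaction as a directed graph of utterances, so the
# "conversation" is the network of paths between them.
import networkx as nx

# Hypothetical utterances from one imagined reference chat transcript.
utterances = [
    ("u1", "Do you have anything on tween reading habits?"),
    ("u2", "Are you looking for research articles or statistics?"),
    ("u3", "Research articles, for a class paper."),
    ("u4", "Try the LISA database; here's the link."),
]

G = nx.DiGraph()
G.add_nodes_from((uid, {"text": text}) for uid, text in utterances)

# Each utterance responds to the one before it; a richer decomposition
# could also link an utterance back to an earlier one it references.
G.add_edges_from([("u1", "u2"), ("u2", "u3"), ("u3", "u4")])

# The paths through this graph are the "conversation".
print(list(nx.all_simple_paths(G, source="u1", target="u4")))
```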

Find the paper they wrote called, “Participatory Networks: The Library as Conversation”.

The web (etc.) makes a great deal of “conversational” data available.

Book as “conversation” (his definition) because the author had to organize the info it contains into chapters. This is an interesting idea to compare to the NASIG keynote speaker's discussion of conversations in books as defined by marginal notes: a conversation in the more traditional sense, people communicating ideas in writing over time.

Saturday, June 23, 2007

ALA 2007 - Utilizing learning theory in online environments.

I arrived at this session late so I don't have the structure of the talk in front of me (so to speak) and can't structure this post in the same way as I have the preceding ones. Here are some of the ideas she presented:

The goals of learning in any discipline are the same: learning to think like, and to process and interpret data like, a [enter discipline here].

There are lots of different types of learning (visual, auditory, kinesthetic, etc.), and you can tell what learners' preferred learning styles are by observing them. By 'learning' from them we can create richer online learning environments.

Most of this presentation was focused on library instruction and virtual reference.

Current educational theories (popular now):
(1) The idea that students learn in social groups; they learn from listening to each other and talking about things.
(2) Situated learning means that the learning takes place in the same place in which the knowledge will be later used; learning in context.
(3) Brain based learning or "ten minutes on, ten minutes off", in other words sharing some information and then giving the brain time "off" to absorb it.
(4) Behaviorism; negative feedback doesn't work.

A combination of face to face learning and online learning works best; helps to create an ongoing conversation which is (again) how Millennials learn.

She addressed generational differences in terms of how they inform the learning process.

It seems to me that a lot of this has to do with non-verbal communication online...which sounds like an oxymoron until you think about "places" like Second Life and the presenter's comments about the changes in online communication that occur once the communicators have met face to face. For instance, think about the way you IM with someone or read a message from them (be it email or a blog comment, etc.): how careful you are about what you say when you don't "know" them in person, compared to when you communicate with someone you do "know" in person and can almost hear the tone of their voice and can more easily tell when they're being sincere and when they're being sarcastic.


Interesting observation: I seem to engage in more reflection about the speaker's topic when I don't have an agenda for the presentation.

The Power Point for this presentation will be posted on the ALA website.

ALA 2007 - LRRT Research Forum: Information Seeking from Childhood through College

Note: LRRT is debuting a research mentor program; for the link and more info, see their website (after a few days). Also, to volunteer for an LRRT committee, contact the incoming president soon.

The four programs in this session were ordered by age of the participants.

First was Lynne McKechnie (the I School at UW) speaking on “Spiderman is not for babies”: the boys and reading problem from the perspective of the boys themselves.

Boys lag behind girls in standardized tests of reading skills. McKechnie conducted semi-structured interviews with boys aged 7 to 12 and made lists of all of their reading materials (including books and videos). They found that boys are reading; what boys are reading is simply different from what girls are reading.

This was a qualitative study and her results were presented in the voices of the boys who were interviewed. There were lots of quotes to illustrate the findings. Some of them were collected by (i.e. the interviews were conducted by) her students (presumably MLS or PhD students). I would have been interested in hearing a little bit more about the researchers’ perspectives in order to get a feel for their research paradigms.

Melissa Gross (Florida State) presented next. Her presentation was entitled “The Information Seeking Behaviors of School Children” and was part of a larger study that used both qualitative and quantitative methods and was published as a book by Scarecrow Press. She focused on the qualitative results in this presentation, in which she compares self-generated and imposed information seeking. Some of the children were excited and happy to be asked to find some piece of information by a teacher or classmate, but while this was looked on positively by the younger children it was perceived as not so positive by the older children.

She began by defining the terms in her research question and the roles the people in her study generally took. She used focused in-depth interviews with seven participants from one school, including teachers, students (between the ages of 4 and 12), and the school library media specialist. She also spent some time explaining the limitations placed on the study by the ages of the child participants. She presented her results in her PowerPoint slides and provided anecdotal evidence (the children’s stories about their reading) verbally.

Here’s a thought: I wonder how one creates trustworthiness in this kind of study. Can you still use member checking with young children? How? Maybe through triangulation. I’ll have to look at her book to find out I s’pose.

It will also be interesting to read all of the presenters’ published research reports. It seemed to me that they presented here in language and terms that would be accessible to this audience.

The third presentation was on tweens’ information seeking behavior. Tweens are ages 9 to 13; along with this information, he described some of their other characteristics and context. What he presented is part of a larger study by Karen Fisher called “Talking to You” that has to do with finding out why people prefer to turn to each other for information, particularly for what she calls “everyday life information seeking.” In order to gather data they planned a “Tween Day,” a sort of one-day camp, which they repeated three times at three different locations (one on the UW campus, one at an urban outreach ministry, and one at a suburban elementary school).

They were asking things like: What types of everyday information do tweens perceive a need for? How do they seek everyday information? What barriers do they encounter? (And four more that I missed because the slide passed too quickly.)

They used focus groups, creative interactive web-based exercises, and individual interviews, all of which were recorded, to collect data. He didn’t talk much (nor did the other presenters) about how they analyzed their data. He presented results and quotes from transcripts both in his PowerPoint slides and verbally. He gave a hint of their data analysis in describing their need to ‘decode’ some of the tweens’ terms (“stuff”); he talked about coding the transcript of the group interviews.

It’s interesting that, so far, none of the information sharing is happening in electronic environments. Whoops! Just as I write this, one of the quotes on one of his slides included a reference to chat rooms.

As a side note, he is a very engaging speaker and obviously passionate about tweens and his research…so much so, in fact, that he’s having trouble stopping.

Here’s an interesting finding (that they’re going to explore further): when asked what librarians can teach you to use to find information, newspapers and magazines and articles were the category that got the fewest votes.

Lynn Westbrook presented last on “Google the Random Stuff: Mental Models of Academic Information Seeking”. Her purpose for the study was to use mental models to examine information seeking: how students visualize and conceptualize information when they’re dealing with an imposed query. The sample for the study was purposive and self-selected and bounded by matriculation level and academic achievement. She did in-depth interviews and observations, transcribed it all, and used HyperResearch to code and analyze the data. Then she presented the components of some of her participants’ mental models.

She presented three different perspectives from the students in her study using quotes. Then she defined mental models and presented the advantages and disadvantages of their use as a framework for research as well as how they’re used and how they develop. She spent most of her time expanding on the models that emerged from her data analysis.

Hmm, it would be interesting to look for how and if mental models and competency theory are related in the literature.

ALA 2007 - Library Research Round Table Research Forum, part two

In the second segment of this session, Laurie Bonnici (Drexel) and Lynne Watson (Florida State) presented on "Other place as library". What they were interested in was whether "other places" compete with libraries, using Oldenburg’s “Third Place” (1989) as their theoretical framework.

They used unobtrusive observation between 4/2006 and 8/2006 and a web-based survey in February 2007 as their methods.

They presented some of the demographics from their survey, and one of those was generation: 1 silent, 12 boomers, 12 gen X'ers, 90 millennials.
Only 3% of the respondents to the survey given in the coffee house were using library resources, but 3% also said they would like assistance with using library resources.

Most of their respondents stayed in the library between one and five hours (roughly 50%).

Some of their reasons for not using the library:
- have a computer at home/work
- takes too long & pay to print
- internet service is poor
- no wifi
- have wireless laptop, don’t need to go to the library

In the library café most get coffee and move to the library (50%+).

They went through this presentation so fast that I only got about half of their points; I need to look to see if they publish this somewhere (proceedings?).


Finally, Marie Radford (Rutgers) and Lynn Silipigni Connaway (OCLC) presented the results of a multi-year (10/05 – 9/07) project looking at virtual reference. Four phases: focus groups, analysis of live chat reference interviews, online surveys, telephone interviews. I’m not sure what their research question was; from the discussion I think it has to do with the success of virtual reference transactions and, specifically, the success when the librarian clarifies the user’s query as compared to when the librarian doesn’t do so. [Query clarification = reference interview.]

Their results are available on the web at OCLC in a URL that ended with /synchronicity.

ALA 2007 - Library Research Round Table Research Forum

This session included three separate presentations by library school faculty (for the most part).

The first presentation was by a group of researchers from ProQuest (Joanna Marco, John Law, Serena Rosenham). I've seen John Law present these results before on a webinar earlier this spring so most of it was not new to me. But they have some really interesting results:

Ethnographic field study of how students seek information. Their research has gone through several phases since September 2006. Most participants so far have been undergraduates but they’re going to target graduate students in the next phase.

Students were actively engaged in a class research assignment and were studied onsite and remotely (with Userview, a usability testing software for use on the Internet). The remote observation worked better because it allowed them greater geographic coverage, because they obtained recordings of each session, and finally because it allowed the student to relax and act more naturally without an observer in the room with them.

They used Facebook to recruit participants. A flyer was placed on Facebook and a survey was used to give them further information about the study and to filter for the characteristics they needed. They included grads, undergrads, in a variety of disciplines, and with a variety of skill levels.

How do students decide which resources to use for their research?
- Students ARE using library resources when we teach them to do so in the context of the course and at the point of their need; story of the fourth year student who used library resources but had only been doing so since a librarian had visited their classroom to show them how to do so.
- Endorsement of instructor; story about the 3rd year Biology student who used JSTOR for her biology research.
- Brand awareness has an impact on what students use to start their research; story about the student who hesitated a long time over selecting a database and spoke about recognizing ProQuest.
- Google

How do students use library resources?
- 95% of participants engaged library resources for their research
- They often work with multiple resources at the same time; average number of tabs open at a time was between 5 and 12.
- Abstracts are essential in identifying relevant articles (even when the full text is available)
- They have no serious difficulties using databases once they find them

Chief inhibitors to success in using library resources:
- lack of awareness of resources; Law interprets from this a need for libraries to increase marketing efforts
- difficulty navigating library website to locate resources
- students search the library catalog for articles because the search box is front and center on the library web page
- authentication requirements and difficulties create a barrier to entry to library resources; also lack of awareness of the purpose or even existence of authentication

How students REALLY use Google:
- some 90% of researchers use internet search engines for their research according to Outsell and OCLC data; but in the case of this study it was 32%. What’s important is HOW they are using it
o for quick answers and definitions
o as sufficient when quality isn’t a concern
o because they’re insufficiently aware of library resources
o and because they’ve had a bad experience using library resources (like an authentication barrier)
- when they used Google they were less effective than when they used library resources (in terms of obtaining quality content)
- as a handy look-up tool
- to get specific answers

Their end user surveys support these findings. They had about 10,000 respondents who were invited from ProQuest websites and from Facebook to take the survey.
- they recognize that the library has higher quality content
- and that the library has more content
- but google’s interface is easier to use
- prefer to use library database for academic research
- prefer to use google for quick look ups

How do social networking sites factor into student research?
- for the most part, they don’t
- they use it for communication between group members when working together on a project

I wonder what’s going to happen to this project and ProQuest’s other research projects in light of the merger with CSA?

Informing the future of MARC: An empirical approach

Bill Moen (UNT) and Shawne Miksa (UNT) presented a research study in which they examined the use of MARC by catalogers in order to provide empirical evidence of that use and contribute to a discussion within the profession about future uses of MARC. Bill and Shawne presented their research and some results. Sally McCallum talked about “MARC Futures”.

Moen and Miksa's strictly empirical approach is interesting to me in light of the book I’ve been reading lately, Research Methods in Information, which emphasizes that qualitative research is more accessible to practitioners in the information professions than quantitative research.

Detailed information about the study is available at http://www.mcdu.unt.edu, which will include the program PowerPoint and handout. Bill noted that they will be making the record parser and MySQL database that they used for this project available in an open source environment so that other researchers can work with their own record sets and ask some of the interesting questions that the audience raised.
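
To picture what that kind of empirical analysis of MARC records looks like, here’s a minimal sketch of my own (not the MCDU parser itself) that tallies field usage across a file of records using the pymarc library; the file name is hypothetical:

```python
# A minimal sketch (my own illustration, not the project's parser) of
# tallying how often each MARC field tag actually appears in a record set.
from collections import Counter
from pymarc import MARCReader

tag_counts = Counter()
record_count = 0

# 'records.mrc' is a hypothetical file of binary MARC records.
with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:  # pymarc yields None for records it can't parse
            continue
        record_count += 1
        tag_counts.update(field.tag for field in record.fields)

# Which fields do catalogers actually use, and how often?
for tag, count in tag_counts.most_common(20):
    print(f"{tag}: {count} occurrences across {record_count} records")
```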

Some of the areas and characteristics of MARC in which Sally McCallum expects to see change are:

Its granularity; there is the potential for a reduction in the number of fields and subfields.

Its versatility; MARC has the potential for “community profiling” (by which she means models I think), in other words it could be used in subsets for specific purposes like FRBR, MODS, etc.

Extensibility; this seems pretty similar to versatility to me, but I think she means not just creating subsets of fields but using them for new purposes, e.g. extending their use. For example, it has the potential to link rights information to a bib record.

Hierarchy support: MARC has a little but not much ability to define hierarchies; she predicts the development of other means of doing this.

Crosswalks (data element mappings): they are expensive in terms of time required to create and maintain them.

Tools: the MARC toolkit provides tools for converting records between MARC and other formats, but not between those other formats themselves, and Sally envisions development of additional tools using MARCXML.

Cooperative management: there is already a lot of participation in MARC via lists and she expects that to continue

Pervasive: MARC is used globally and will probably continue to be so through XML.

The interesting thing about this presentation was the juxtaposition of what were basically two presentations, Moen and Miksa’s MARC research project and McCallum’s predictions about its future. It seems to me at first glance that the two were pretty much in agreement with each other in terms of the future of MARC as a standard for making bibliographic description available to users in a way that supports their needs. This basic purpose is unchanging even while MARC itself will continue to evolve in reaction to advances in technology and newly developing needs like the ones Sally mentioned: crosswalks, improved description of hierarchies, and the bringing together of disparate data.

‘Research Methods in Information’ chapters 9 and 10

These two chapters cover experimental research (chapter 9) and ethnographic research (chapter 10), which, of course, are at opposite ends of the research spectrum from one another. It’s an interesting contrast and, after having noticed this, I realized that there is a similar contrast between the first two chapters of this part of the book (case studies and surveys). The contrast is greater between experimental and ethnographic research, and it appears as if she is preparing us for this greater contrast by allowing us to compare and contrast case studies and survey research first; sort of easing the reader toward both ends of the spectrum.

True to her word (in the introduction), she discusses the unique aspects of conducting ethnographic research in a virtual community, not differentiating it from ethnographic research in other communities but providing insight into the particular issues unique to a virtual environment. What interested me most here was a set of qualities she uses (borrowed) to define a community in a virtual environment. First, because she doesn’t say (and I wondered) whether there are also accepted characteristics that define a ‘community’ in a non-virtual environment (other than the obvious physical ones). Are they so obvious that ‘anyone’ will recognize them? I think it would be interesting to go back and look at that in depth.

And second, in discussing another problem posed by virtual communities, that of observing the personal identities of community members, which are more easily hidden in a virtual community, she says, “Disembodied communication makes it very difficult for a researcher to engage in participant observation” (p.120). I have to disagree with that a bit. I think that the characteristics of an individual that a researcher can observe are different in a virtual community vs. a physical one, but they certainly still exist beyond the verbal/textual. For example, one might as easily observe the communication behaviors exhibited by members of a virtual community as one could in a physical community. The means of communication may differ (speech vs. text) but the act of communicating is happening.

‘Research Methods in Information’ chapters 7 & 8

Chapters seven and eight begin Part two of the book in which the author describes a variety of qualitative and quantitative research methods beginning with case studies (chapter seven) and surveys (chapter eight). In each chapter the author describes and defines the method (both what it is and what it is not) and its important features then she provides further description of the process.

The case in a case study must have clearly defined boundaries. The purpose of the study is also important in that it provides a means of keeping the researcher on track with the project rather than veering off in search of answers less relevant to the research question. She distinguishes intrinsic (to gain understanding), instrumental (to examine a phenomenon), and multi-case studies.

Continued emphasis on the researchers’ responsibility. Added emphasis on the need for structure (in the form of a well defined means of organizing data) that does not interfere with or constrain the emergent process of qualitative research. She also covers the accepted means of establishing trustworthiness (in qualitative methods) and reliability (in quantitative methods).

Her chapter on case studies reminded me of what I learned about Action Research in a class this past spring. In Action Research, the researcher is focusing on one particular case in context not with the intention of generalizing results outside of the case but in order to better understand the inner workings of the community of stake holders involved and, perhaps, a particular phenomenon within that community with, in the case of Action Research, the added purpose of allowing the community to solve a community problem.

Is Action Research a type of case study? I don’t think so. I think Action Research is similar to case study and that, perhaps, case study would be one way of approaching an AR project but certainly not the only way. I’m still struggling with where AR fits into my larger picture of research as a whole.

Survey research allows one to “study relationships between specific variables”. Descriptive surveys seek to describe a situation by revealing relationships between the variables while explanatory surveys seek to explain the relationship between variables in terms of cause and effect (although there is a lot of debate about how far one can go towards saying variable A caused variable B since survey research does not seek to isolate variables).

I also found a citation to a study in this chapter (she uses it as an example of an explanatory survey) that I think will be very pertinent to my research into how members of an academic community seek information in electronic environments. [Tabatabai, D. and Shore, B.M. (2005) How experts and novices search the Web. Library and Information Science Research 27(2): 222–48.]

Tuesday, June 19, 2007

"Research Methods in Information" chapters 5 & 6

I'm obviously going to have to step this up if I plan to be done with my "official" review by next Tuesday...sigh...I guess that's what long plane rides are for, right?

Chapter five is called 'Sampling' and details the differences between sampling for a qualitative project and sampling for a quantitative project. She presents and describes a couple of both probability and purposive sampling techniques. The only thing I missed in the first read is the difference between stratified random sampling and cluster sampling. I should know that already but didn't pick up on the differences in the text.


Chapter six is entitled 'Ethics in research' and is very appropriately placed at the end of the first section of the book, which provides an overview of basic research and places it in context. Here she covers the basic points of research ethics including informed consent, the difference between anonymity and confidentiality, and the importance of making promises that you can keep.

At the end of the first section of the book, I have to say that I'm impressed. Impressed particularly with the sensible organization of the book, its clear structure, and the exercises at the end of each chapter. If I ever get to teach a research methods course, this is the textbook I'll use.

Random observation: on p. 71-72 she says, "there is an argument that observing people in public places needs no permission or consent as their behavior, by definition, is public and therefore available for all to see, study, and analyze." In theory, I agree with this, but in practice I have to wonder what all those people talking on their cell phones in libraries, airports, grocery stores, and so on would have to say about a researcher who recorded the "public" portion of those conversations for analysis and publication.

Need a laugh?

Try this librarian humor....

Sunday, June 17, 2007

'Research Methods in Information' chapters 3 & 4

Ok, yeah, I'm a little behind in writing up my notes. But I have continued to read, so here are my notes on chapters three and four.

Chapter three is called 'Defining the research'. Here she gives the reader a 'pre-operational structure' of research with descriptions of each part of the structure as well as continuing to use a particular case as an example. Emphasis is given to the problems inherent in trying to 'prove' a hypothesis. There is a really good, concise, clearly written section on defining variables. Finally, she clarifies the distinction between the goals and the aims of the research project.

In chapter four she describes the usefulness of a written research proposal no matter what the context of the research project. I particularly enjoy (and, I confess, agree with) the emphasis that she places on putting the responsibility for the research project squarely on the researcher. In this chapter, she does so in the context of the care that the researcher should take in complying with all requirements applicable to writing the research proposal.

Some of my favorite quotes from this chapter are:

"Whatever choices you make you will need to demonstrate that you understand the nature of the choices you have made." (p.54) Further down the page, she alludes to this again in the context of qualitative data analysis.

"You are opening a can of worms [in undertaking a research project] as soon as you begin to ask questions, do not expect to find all of the answers." (p.56)

NASIG 2007: final thoughts

I attended two other sessions that I either arrived late for or was focused on other things while attending.

The first was Bob Schufreider's session on "Making sense of your usage statistics", which I'm sorry I didn't arrive on time for because I am very interested in making better use of our usage statistics. Bob works for MPS Technologies, which makes the ScholarlyStats product that we've just trialed.

The second was the final keynote speaker, Daniel Chudnov from the Library of Congress. His basic theme was the need for lowering barriers between libraries and everything else on the web. He pointed out that every major media outlet is using dynamic service links, which cries out for OpenURL; they’re doing it and we (libraries) aren’t.
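
For my own reference, here’s a rough sketch of what one of those dynamic service links looks like as an OpenURL 1.0 (KEV) query aimed at a link resolver; the resolver base URL is made up, and the article metadata is just an example (the Tabatabai and Shore paper I cite in my Research Methods notes).

```python
# A rough sketch of building an OpenURL 1.0 (KEV format) link for a journal
# article; the resolver base URL is a made-up example.
from urllib.parse import urlencode

resolver_base = "https://resolver.example.edu/openurl"  # hypothetical link resolver

citation = {
    "url_ver": "Z39.88-2004",
    "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "How experts and novices search the Web",
    "rft.jtitle": "Library and Information Science Research",
    "rft.volume": "27",
    "rft.issue": "2",
    "rft.spage": "222",
    "rft.date": "2005",
}

# The media outlet (or library) embeds a link like this; the user's resolver
# decides where the appropriate copy lives.
print(f"{resolver_base}?{urlencode(citation)}")
```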


I'm really disappointed not to be able to access all of the conference handouts. For the first time this year, NASIG put program handouts on the web using Moodle, which is very exciting for me since I tend to take notes on my laptop in sessions anyway and it's lovely to have a copy of the speaker's materials at the tips of my fingers. But this was obviously not meant to be since, try as I might, I can't get the site to either recognize me or send me the email that contains directions for resetting the password they gave me.

However, that's the ONLY negative note about this year's conference. The venue was lovely and convenient; the programs were timely and interesting and offered great variety. And the attendees were just as pleasant as always.

NASIG 2007: Education trifecta: win attention, place knowledge, show understanding

This session was presented by Virginia Taffurelli, Betsy Redmond, Steve Black. I was fortunate enough to be assigned to introduce them and thus had the opportunity to talk a bit with Steve Black, whom I hadn't met before, about the lack of attention to serials and electronic resources in library school curriculum. He teaches one of very few courses dedicated to this topic (among ALA accredited LIS programs in North America).

Virginia and Betsy presented some of the basics of developing and delivering course content. Virginia spent most of her time describing the use of course delivery software (WebCT, Blackboard, Moodle), and Betsy focused on practical tips for delivering a CE course. Their focus was a CE course in fundamentals of acquisitions for ALCTS.

Steve reviewed the syllabus for his course (which he made available to us in print). He talked about his reasons for writing his own textbook; he had contacted Nisonger to ask if he was going to revise his 1998 text and Nisonger had said no. It was published in November 2006. Prior to that he had used a copy of the manuscript in classes for two years and solicited feedback from the students which he found very useful.

He covered the objectives for the course which include a small module on cataloging a serial (they catalog on paper in class then as homework they compare what they’ve done to a MARC record online).

I really enjoyed this presentation and hope that someone follows up next year to answer some of my remaining questions: why is LIS education seemingly ignoring serials and e-resources management? what is covered in other serials courses (or modules within courses)?

NASIG 2007: How does digitization affect scholarship?

This was probably the best session I attended.

Ithaka, http://ithaka.org/research, is an organization that studies the advance of technology and how it can/should be managed. Their mission is to help academic institutions to adapt to and use technology.

The presenter, Roger Schonfeld, started by asking the audience what characteristics a scholarly journal should have (format, aggregated?, open access?, indexed where?, commercial or non-profit?, sustainability) in order to develop a framework for analyzing the effects of digitization.

Two-sided markets = a system comprised of at least two user groups who need each other, characterized by a platform (or intermediary) that balances the interests of both groups (sides of the market). He used the credit card network as an example, where the merchants and the card holders are the two groups and the card companies are the platform or intermediary. The concept of two-sided markets is the framework that Ithaka used to examine their question about the effect of digitization on scholarship.

The two sides of the scholarly journal are readers and authors. One of the motivations that operates between the two groups is quality (high quality authors attract high quality readers and high quality readers attract high quality authors). This characteristic is static in relation to the format in which the journal is published (the exchange mechanism = format = platform that joins the two groups).

In the traditional pricing model, the reader side involves subscription fees and on the author side are page charges and advertising fees. The question is how are/should they be distributed?

Demand side
What are the sources of value of a journal on the (librarian) side? (audience participation)
- research/curricular support
- impact factor
- use
- ARL ranking
- Preservation of the record of scholarship
- Accreditation
- Platform stability
- Areas of collection emphasis
- Peer review
What are the sources of value of a journal on the reader side?
- findable
- usefulness and credibility of content
- currency
- author quality
- accessibility
- relative importance to field
- do they publish in it?
- Peer review
- Indexing
- Impact factor (as a proxy for quality)
Supply side
What are the sources of value of a journal from the advertiser’s perspective?
- number of subscriptions
- quality of reader
- reader’s interest in products
- cost
- findability
What are the sources of value of a journal from the author’s perspective?
- reputation with colleagues
- how widely read / cited
- circulation
- speed of publication
- peer review
- impact factor
- cost to submit
- marketing and promotion

Findings from a survey of 4100 faculty members about the characteristics important to authors:
The most important characteristic was circulation (80% of participants said that this characteristic was very important), followed by no cost to publish (65%), preservation is assured (60%), highly selective (50%), accessible in the developing world (45%), and available for free (35%).
- authors submit to journals that can maximize the impact of their work on their field
- some disciplinary differences in the above data
- how has the impact of a journal changed in recent years? (digitization, more widely accessible)

Their research question is how does digitization affect the system of scholarly communication?

They’ve collected data (cited by and citing characteristics of 100 journals in each of three disciplines) and are in the process of data analysis which should be published/available in the late summer or early fall. They used regression analysis (Poisson process).
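
I’m not privy to their actual model, but to remind myself what a Poisson regression of this general shape looks like, here is a minimal sketch with statsmodels and entirely made-up data; the variable names are mine, not Ithaka’s.

```python
# A minimal sketch (my own illustration, not Ithaka's model): inbound
# citations modeled as a function of how long a journal-title year has been
# online and how many channels make it available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
years_online = rng.integers(0, 11, n)   # hypothetical: years since digitization
n_channels = rng.integers(1, 6, n)      # hypothetical: number of online channels
X = sm.add_constant(np.column_stack([years_online, n_channels]))

# Fabricated outcome just to make the example run; real data would be
# observed citation counts for each journal-title year.
citations = rng.poisson(np.exp(0.5 + 0.05 * years_online + 0.03 * n_channels))

model = sm.GLM(citations, X, family=sm.families.Poisson())
result = model.fit()
print(result.summary())  # coefficients are on the log scale;
                         # np.exp(coef) gives the multiplicative effect on citations
```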

Results:
- the higher the frequency of citation, the lower the number of citations in that article (graph).
- digitizing the journal-title years has increased inbound citations by between 7 and 17% (confidence interval)
- the effect grows steadily as the materials are available online longer
- different sources of online availability (channels) offer different effects; e.g. 3-15% increase occurs when there is one channel and 8-18% increase occurs when there are a large number of channels through which a journal is available
- questions raised: Are some channels more effective than others? Do some channels yield more impact? Is wide availability the key?
Results when the data are restricted to 1995-2005 in order to look at effects of/on born digital journals
- there is a strong and significant effect from digitization (but more analysis is needed)
- the publisher web site is not always the optimal distribution mechanism to increase citations
- longer embargos decrease the ability of a given channel to increase citations
- more questions: disciplinary variation? Effects of source item year of publication?


Their preliminary conclusion is that digitization does have a strong and significant effect on scholars’ ability to find and cite relevant references, and gives an advantage to…

He’s obviously passionate about his topic and a very natural speaker, which makes him very engaging. This is a fairly sophisticated research project and he did a very good job of explaining it in terms that were pertinent and understandable to librarians, partly because of the really good questions that the audience asked. This will probably be my favorite session. It would be interesting to see what else Roger and his colleagues have done.

NASIG 2007: Hurry up please. It’s time – State of Emergency … aka The Paranoia Presentation

A library pundit is the best way I can describe Karen Schneider. She is one of those people who are blessed with a quick, sarcastic wit and a well developed intellect to support it. I enjoyed her presentation very much although I’m not entirely sure that I agree with all of her ideas. Please also bear in mind that this was the first session of the second day of the conference and, in addition to not being quite awake yet, I was fretting about the three meetings I had to chair during the rest of the day.

• From the perspective of a writer/essayist, what she calls the “production process of the serials ecology” includes: reflection, research, revision, workshopping, submission, revision, layout and printing.
• Relevant features of the ecology include: a nominal income to the editor; the author’s compensation is a year’s subscription to the publication, but publishing there also provides the chance for her to write about a topic that is important to her.
• Memory work: history is built from artifacts as opposed to the memories of the people who lived it. She proposes that librarians work is memory work which gives it a curatorial aspect.
• She quoted Andrew Abbott from his book The System of Professions (which I heartily recommend if you haven’t read it) who says that a profession has (or should have) “complete, legally established control” over its domain. This, she maintains, is the basis of what she calls the ‘state of emergency’ in libraries since our control of collections and collection building (if, indeed, we ever had it) is being eroded or encroached upon by entities outside the profession.
• She maintains that we’re particularly susceptible to this in the area of serials, for example the publishers with whom we’ve made “big deals.”
• Some of the concerns that she’s currently mulling are
o Why are we (libraries) allowing Google to create a proprietary collection of the world’s books? (Apparently Google’s contracts with both the University of California and the University of Michigan include a clause that keeps the institutions from delivering the content that they’ve allowed Google to digitize to anyone other than Google, something I didn’t know.) Same with Microsoft’s book project. AND Google search doesn’t reach the Microsoft book “silo” and vice versa; you can’t access content in Google Books using any other search engine. I find this incredibly worrisome. The Open Content Alliance is a non-proprietary version of the Google book search.
o Why do we (libraries) need to pay an organization an annual fee to give us temporary access on a remote server to the content that we already own? I’d say because our users are requiring us to.
o Why does Time-Warner have to be so greedy? For example, the recent postal rate increase impacts small presses to a much larger extent than it does publishers like Time-Warner, which has negotiated a lower postal rate. This is damaging to the serials ecology.
• Removing information from the public record is a concern of hers that she illustrated with the closing of the EPA libraries, which she sees as a part of a larger movement of information being lost from the public/historical record. LOCKSS/CLOCKSS is a library-grown innovation designed to protect the interests of librarianship. There is no license to create a “LOCKSS box”; it’s free, open-source software.
• Lessons:
o The right path is not always instinctive, obvious, or well marked
o ignore the dazzle and read the fine print
o bring our values (as librarians) to the table
o possession IS the law

• Interesting thought: people slam Disney over the 2003 copyright ruling but don’t blink an eye at Apple, which distributes a proprietary sound format for iPods. What makes Apple different from Disney?

Saturday, June 16, 2007

ALA 2007 schedule

After a lot of time examining maps and program details, I think I've finally nailed down my ALA schedule. This is not as easy as it sounds since often there are three or four interesting sessions going on at once, and location and distance between venues must be factored in, as must time to visit with vendors in the exhibit hall AND at least a little sightseeing. Anyhow, here it is...

Saturday, June 23
8 - 10 am -- Informing the future of MARC: an empirical approach
(This one's being given by a library school prof of mine, Bill Moen)
10:30 - 12 noon -- Research: A user experience
12 - 1 pm -- Ebsco Academic Library Luncheon
3:30 - 3 pm -- Information seeking behavior from childhood through college
4 - 6 pm -- either The ALCTS electronic resources pricing discussion group or Utilizing learning theory in online environments depending on where the latter one takes place
7 - ? -- dinner

Sunday, June 24
7:30 - 8:30 am -- Alexander Street Press breakfast
9 - 10:30 -- Exhibits
10:30 - 12 noon -- New minds, new approaches: Juried papers by LIS students
11:30 - 1 pm -- CSA / RefWorks Lunch 'n learn
1:30 - 3:30 -- Eye to I: Visual literacy meets information literacy
3:30 - 5:30 -- National Gallery
6:30 - 8:30 -- Ex Libris customer reception

Monday, June 25
8 - 10 am -- The future of information retrieval
10:30 - 12 noon -- Four star research
11:30 - 1 pm -- ProQuest luncheon
1:30 - 3 pm -- Fresh approaches in service delivery: linking users and services in creative ways
3 - 5 pm -- Exhibits
7 - ? -- dinner

As usual, I'll try to post my thoughts on the sessions I attend, but also as usual, the timing will depend on the availability of internet access and electricity!

Monday, June 11, 2007

"Research Methods in Information", chapter 2

Chapter 2 is all about reviewing the literature and contains a wealth of useful tips for strategically conducting a literature review no matter what level of review one needs to accomplish. The structure of this chapter (and perhaps the whole book, we'll see) is marvelously clear. She sets out the steps/skills/stages (information seeking and retrieval, evaluation, critical analysis, synthesis) and explains the process(es) for each one including some really practical ideas for organizing them.

One of the things I'm finding most exciting and at the same time frustrating about the book so far is the suggested further reading lists at the end of each chapter. Exciting because they contain more information about topics I'm interested in and frustrating because I'll never have time to read them all.

I've been thinking about this last point a bit recently because I've been feeling as if I need to find a workable (for me) way of organizing what I read (as well as what I need/want to read) and have even begun working on creating an Access database as a way to accomplish it. One of the things I'd like to be able to do is trace the network of relationships between documents (this one cited that one, etc.), partly because I think it would be interesting to see and partly because I think it might help me to organize the ideas (which already are too many to keep in my head).
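
For what it's worth, here's a minimal sketch of the kind of structure I have in mind, using SQLite instead of Access purely for illustration; the table and column names are just my working guesses.

```python
# A minimal sketch of a reading tracker that can trace citation links
# between documents; schema and names are hypothetical working assumptions.
import sqlite3

conn = sqlite3.connect("readings.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS documents (
    id      INTEGER PRIMARY KEY,
    author  TEXT,
    title   TEXT,
    year    INTEGER,
    status  TEXT              -- e.g. 'to read', 'reading', 'read'
);
CREATE TABLE IF NOT EXISTS citations (
    citing_id INTEGER REFERENCES documents(id),
    cited_id  INTEGER REFERENCES documents(id),
    PRIMARY KEY (citing_id, cited_id)
);
""")

# Example query: which documents cite a given one (tracing the network)?
rows = conn.execute("""
    SELECT d.author, d.title
    FROM citations c JOIN documents d ON d.id = c.citing_id
    WHERE c.cited_id = ?
""", (1,)).fetchall()
print(rows)
conn.close()
```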

"Research Methods in Information" chapter 1

This chapter introduces the reader to three major research paradigms: positivism, postpositivism, and interpretivism. It contains a brief history of each as well as an overview of qualitative and quantitative research methodologies that compares and contrasts the characteristics of each, particularly the criteria upon which judgments of quality are made.

Thoughts on this chapter:
It is thick with terminology with which inexperienced researchers and students may not be familiar, but that is somewhat offset by the inclusion of those terms in the glossary.

I find myself thinking of it as a textbook for a research methods class. From that perspective, it seems useful.

I like the way she qualifies her brief overviews with repeated suggestions that the interested researcher read further on each topic...and provides recommendations on where to start such reading.

Here's my favorite quote from chapter 1: "Whichever paradigm you associate your research with, whichever methodological approach you take, demonstrating the value of your investigation is essential. This applies to practitioner research and student research: we all want our findings to be believed and are responsible for ensuring that they can be believed." (page 18)

However, I also like this one (on establishing objectivity in quantitative research): "Findings are a result of the research investigation, not a result of the researcher's interpretation of those findings." (page 22)

Thursday, June 07, 2007

"Research Methods in Information"

My latest book to review for LJ is called "Research Methods in Information" by Dr. Alison Jane Pickard and I'm very excited about it. It's a handbook/textbook for those of us working in the information professions which, of course, is right up my alley. So I'm going to try something new here. I'm going to post my notes as I'm reading, more to keep myself organized than for any other reason but also on the off chance that there's anyone out there who shares my interest in research methods who might have a comment or insight that I don't have. Of course, I'll also post a link to the review when it's published.

So, in her introduction, Dr. Pickard lays out the importance of research in the fields of information studies, communications, records management, knowledge management and the related disciplines: (1) increasing the body of knowledge that makes up those professional fields, (2) the need for research skills in professionals in those fields, "Knowledge and experience of research is a fundamental part of what makes the 'information professional' ", (3) to allow practitioners to continue to grow in their professions as well as to better accomplish their tasks (e.g. benchmarking, assessment, strategic planning, and so on).

Next she describes the framework on which the organization of the book rests, the research hierarchy: the research paradigm, on which the methodology is based; the methodology, on which the selection of a research method is based; and the method, on which selection of the research technique and instrument are based.

The research paradigm is the world view or underlying assumptions about the world that the researcher starts with. The methodology is either qualitative or quantitative and is distinguished from the research method, which is the strategy or approach to the problem taken by the researcher. The technique is an approach to data collection that is dictated by the research question. And the research instrument is the unique operationalization of the selected technique.

Now, rereading what I've written, I can already make two statements. First, I'm going to try NOT to simply summarize the book here. Rather I'm going to try to limit myself to comments about ideas that jump out at me as being noteworthy in some way. And second, I'm already engaged by and in total agreement with the idea that research is not just the realm of scholars who wish to contribute to a body of knowledge but rather research is accessible and achievable and useful, perhaps even necessary, for professionals in the information professions.

Wednesday, June 06, 2007

Too good not to share

She gave a keynote speech at NASIG last week and, after reading her blog for a couple of days, I am rapidly becoming a fan: http://freerangelibrarian.com/.

Tuesday, June 05, 2007

NASIG 2007: Betting a strong hand in the game of electronic resources management

Paoshan Yue and Liz Burnette
Paoshan and Liz presented two versions of electronic resources workflow in their libraries. Paoshan described the evolution of e-resources workflow at the University of Nevada Reno Libraries and Liz presented a general model for building an e-resources workflow. This presentation was a little weak; the content was a bit too general, and I would have liked more specifics about the actual workflows in their two libraries. However, it did get me to thinking that one of the ideas that I’ve been applying to web site design would apply equally in this situation. Many of us are trying to fit e-resources into our existing print serials workflows, and that’s not something that we have to (or maybe even should) be doing. The session got me to thinking about what other ways we might organize our e-resources work and other angles from which to approach that question.

See http://www2.library.unr.edu/serials/ERMworkflow.pdf for an example of
UNR's current workflow.

NASIG 2007: Alternatives to licensing of e-resources

Selden Lamoureaux & Zach Rolnik

This session was everything I expected it to be. I KNEW someone was working on this, I just didn’t know who. Now I know. It’s a NISO working group called SERU (Shared E-Resources Understanding) and they seek to find ways for libraries and publishers to come to agreement about the purchase of e-resources without the need for a contract or license.

Their argument goes like this: contracts are a barrier to access. They force both libraries and publishers to expend staff time and effort to negotiate licenses for e-journal and e-resource subscriptions. End-users suffer from the delays in access to information that result from the need to negotiate licenses, and libraries, especially smaller libraries, are put at a financial disadvantage.

Efforts are being made to reduce these costs, including the idea of a global license; SERU is one such effort. The SERU Working Group has found a fair amount of consensus on many of the issues to be included and has a number of good reasons to believe that it might be a viable alternative.

It’s not a standard license, click-through license, or a replacement for ALL licenses. Instead, it calls for libraries and publishers to agree to accept copyright as the governing law over the provision and use of information services and uses the purchase order to describe the terms of the sale.

ALPSP and SSP both support SERU as do ARL and SPARC.

For more info and to register as a user, see www.NISO.org/committees/SERU (note that registration is not open yet but will be soon).

Friday, June 01, 2007

NASIG 2007: Electronic resources workflow management

Paoshan Yue from the University of Nevada, Reno and Liz Burnette from North Carolina State University Libraries presented two models of managing electronic resources workflow and integrating it into existing library workflow. Paoshan focused on technical integration of e-resources into serials workflow by presenting the evolution of UNR's procedures for making e-resources available, from acquisitions to cataloging to accessibility. Their final (well, at least in use at present) workflow is presented at http://www2.library.unr.edu/serials/ERMworkflow.pdf.

Liz presented the staffing side of integrating e-resources into serials workflow. She emphasized the need to begin by examining existing procedures and the procedures required for e-resources processing before trying to integrate the two. She also explained an unexpected advantage of the process that they discovered at NCSU: cross-training several staff members to complete each task or step in the workflow reduced the inevitable slowdown in production that occurs when a staff member is away from the library or leaves altogether.

This got me thinking about how we have really just squished e-resources and e-journals into our existing processes at TAMUCC and sparked a desire in me to go back and take a look at what we're doing and why. I was reminded of a point that was made about library web sites: that we tend to structure them in a manner similar to the organization of the physical library, and that really doesn't need to be the case. Similarly, I think e-resources workflow does not necessarily need to be patterned on print serials workflow.

NASIG 2007: "What's the different about the social sciences?"

Leo Walford from Sage Publications presented this session in which he compared the characteristics of social science journals (and social science and scientists) and science, technology, and medical (STM) journals. Some of the points he made were:
- Social science journals are seen as smaller, less technologically demanding, and not published by large STM publishers. They are, therefore, less expensive.
- How relevant is pricing in the world of big deals? While subscription prices increased between 1988 and 2005, the average price per page actually dropped by about 25% as a result of 'big deals'.
- Social scientists are less aware of the opportunities afforded by open access than are STM scholars, but share with them a trend toward fewer visits to the physical library.
- (this is the point I found most interesting) Since the social sciences receive dramatically less grant funding compared to STM, when they apply the standard 1 to 2% of grant funds to paying for open access to their research publications they don't end up with enough to support the author pays model of open access that is becoming prevalent in STM publishing.
- In addition, social science journals have a longer shelf-life (meaning they are useful/cited for a longer period of time in general than STM journals), which leads publishers to impose longer embargos on their content, which makes the failure of the author pays model of open access that much more of a problem.

NASIG 2007 conference: opening session

I'm attending the North American Serials Interest Group (NASIG) meeting this week in Louisville, KY. In addition to fulfilling a number of organizational duties (committee work, etc.), I'll be attending a number of workshops and presentations which I'll be reporting on here.

This morning the first session was an all-conference session at which Bob Stein spoke about "The Evolution of Reading and Writing in the Networked Era". He has some very interesting (and, I think, controversial, at least among librarians) ideas. His main point was that what have existed as marginal notes in paper books for hundreds of years are actually conversations between the author and the reader (as well as between readers) that are very much like comments on a blog or the open peer review that some pre-publications go through.
