Wednesday, December 12, 2007

Online courses for everyone!

This is so cool I thought it had to be shared on both my blogs. I've long been a fan of open online courseware and have always wanted time to try taking a whole course. I read an article at Inside Higher Ed this morning about Yale's open courseware offerings. They include videos of lectures in addition to syllabi, tests, notes, and PowerPoints. The courses at Yale's site include introductory courses in physics, psychology, and poetry. I've been collecting links to open courseware sites, including MIT's and Rice University's, on my del.icio.us site if you want to have a look at some of the other offerings.

Sunday, November 25, 2007

New words

I was reading an article this afternoon and came across several words with which I was unfamiliar, so (of course) I looked them up. They're pretty fun words, so I thought I'd share them. I must admit I'm not sure when one would ever use them; the author whose article I was reading could easily have used more familiar words, and I confess that I mentally accused him of using $10 words just to make himself sound more intelligent (or something). Here they are (with help from Webster's):

perspicuous

Main Entry: per·spic·u·ous
Pronunciation: \pər-ˈspi-kyə-wəs\
Function: adjective
Etymology: Latin perspicuus transparent, perspicuous, from perspicere
Date: 1586

: plain to the understanding especially because of clarity and precision of presentation

opprobrium

Main Entry: op·pro·bri·um
Pronunciation: \-brē-əm\
Function: noun
Etymology: Latin, from opprobrare to reproach, from ob in the way of + probrum reproach; akin to Latin pro forward and to Latin ferre to carry, bring — more at ob-, for, bear
Date: 1656

1 : something that brings disgrace
2 a : public disgrace or ill fame that follows from conduct considered grossly wrong or vicious
  b : contempt, reproach


defeasance

Main Entry: de·fea·sance
Pronunciation: \di-ˈfē-zən(t)s\
Function: noun
Etymology: Middle English defesance, from Anglo-French, from defesaunt, present participle of defaire
Date: 15th century

1 a (1) : the termination of a property interest in accordance with stipulated conditions (as in a deed)
    (2) : an instrument stating such conditions of limitation
  b : a rendering null or void
2 : defeat, overthrow


NVivo7 Test Drive Report #5

This is a short one. It just occurred to me that one of the big assumptions that NVivo makes is that all of your documents will be digital. But that might not be the case, especially with historical research (diaries, photos, etc.). I wonder how they account for non-digital data? Must explore.

Friday, November 23, 2007

NVivo7 Test Drive Report #4

Preliminary conclusions.

I've been writing these first entries as a class assignment, and while I have 26 more days in my trial of the software and plan to continue using it to analyze my usability data and report my experiences here (after I finish my other class assignments), I did have to complete and turn in the assignment I was originally writing for. This is the conclusion that I wrote for that assignment.

Several very useful points became clear as I worked with the NVivo7 trial. I began to find a number of things in my data that I did not expect to find. For instance, there are a number of behaviors evident among our users that, now that I am aware of them, will affect the way I teach library instruction. That is useful in itself, but in terms of conducting qualitative research, I also see that there is a fine line that every researcher has to draw for him or herself between relevant findings and irrelevant findings. Where to draw that line would depend on the research question, the type of research design, the theoretical framework of the project, and the theoretical orientation of the researcher.

My frame of mind is important to the quality of the results. It does not do much good to rush through data analysis. When I’m in a hurry (e.g. under a deadline to finish an analysis), I am at my least analytical, am least liable to go test an idea, and am most liable to miss something in the data. Also, I tend toward linearity by nature. In this analysis of my usability data (which I started at the beginning of this semester and after all of my data collection was complete), I wanted to conduct the analysis participant by participant and then task by task rather than having to go back to participants whose transcripts I had already coded to look at them again. Both of these habits are things that I understand I will need to guard against as I continue to conduct qualitative research.

As far as the product, NVivo7, goes, it seems to me to be the most flexible and therefore most useful of the QDA programs I have tested this semester. My “likes” lists outweighed my “dislikes” lists, and it seems to combine most of the features that I found useful in all of the QDA programs I have tested. It seems to be intuitive enough for a novice researcher to grasp and begin using without too steep a learning curve, but it also has some more advanced linking, memoing, querying, and modeling features that will satisfy a more sophisticated researcher.

The biggest drawback is the inability to code data in any format other than text (although I understand that this has been addressed in the next version). Another is that NVivo7, like most of the other QDA programs I have worked with this semester, seems oriented toward grounded theory research. I can think of two reasons why this might be so. First, grounded theory is more systematic (at least Strauss & Corbin’s version of it) than other qualitative approaches, and it is easier to develop computer programs that support this kind of thinking. But second, it may be the context of my perception that makes it seem so: given my inexperience with qualitative research, the more systematic approaches like grounded theory would be easier for me to grasp and to find evidence of and uses for in the software.

NVivo7 Test Drive Report #3

Coding.

One thing that is different about how I’m coding my usability transcripts is that when I was watching the video recording as I was listening to and reading the transcript, I tended to group segments by task. I was thinking that, after I had coded them all, I would go back and compare each task. That doesn’t seem to be uppermost in my mind as I analyze and code in NVivo7, even though I’m using codes I created while I was watching the video recording and listening to and reading the transcript.

I downloaded a trial copy of Camtasia, with which I created an “external” that I played back as I analyzed the second transcript/video with NVivo. That helped a lot to continue my analysis of how our participants moved around the screen and from page to page. However, the whole thing would be more useful for analyzing this kind of data if the video were built in to NVivo itself. I’ll be interested to see how that works in NVivo8.

Queries.

The Getting Started Guide wasn’t much help to me for learning about constructing and conducting queries, so I watched the online tutorial. It gave me a lot of ideas about queries I could make of my data, as little of it as there is. I tried queries by node, word frequency queries, and matrix queries. Matrix queries were my favorites because I could really begin to see how patterns might be emerging from my data. (I was looking for a correspondence between participants who tended to browse the links, clicking on those with which they were unfamiliar or whose terms or labels they did not recognize, and participants who opened up a particular page that we have labeled “Remote Access,” because a high correspondence might indicate that we need to re-name that page with a more meaningful term.)
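Conceptually, a matrix query is just a cross-tabulation of two sets of codes against each other. Here is a minimal sketch of that idea outside of NVivo, using pandas and made-up participant data; the column names and values are hypothetical, not my actual coding.

```python
# A minimal sketch (not NVivo itself) of the kind of cross-tabulation a
# matrix query produces, using hypothetical coding results in pandas.
import pandas as pd

# One row per participant; both coded behaviors are invented for illustration.
data = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4", "P5"],
    "browsed_unfamiliar_links": [True, True, False, True, False],
    "opened_remote_access_page": [True, True, False, False, False],
})

# Cross-tabulate the two coded behaviors; counts piling up in the True/True
# cell would suggest the kind of correspondence described above.
matrix = pd.crosstab(
    data["browsed_unfamiliar_links"],
    data["opened_remote_access_page"],
)
print(matrix)
```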

Likes:
- being able to create a query and then re-run it as you continue to add data
- being able to save the whole project and move with it from one computer to another

Tuesday, November 20, 2007

NVivo7 Test Drive Report #2

November 20, 2007 – Creating my ‘project’

I followed the step-by-step instructions in the Getting Started guide for setting up a new project. As I worked through the steps, I noted some things I liked and disliked about NVivo7:

Likes:
- You can use regular Word files (.doc) or even text files (.txt) instead of having to convert everything to Rich Text files (.rtf), although .rtf files are acceptable. This is not really a very big deal, I suppose, but it always felt cumbersome to me when working with other QDA software (e.g. MaxQDA and Transana).
- The ability to import multiple documents at once rather than having to select and import them one at a time.

Dislikes:
- You can’t use punctuation in naming nodes, e.g. I wanted to name a node “everything ought to be together” including the quotation marks because I was quoting a participant.
- (this one started out as a like) There doesn’t appear to be a limit to the length of description that one can give to nodes like there is in Transana…whoops! I was wrong
- No spell check
- While coding I can only look at a list of EITHER my free nodes or my tree nodes but not both at once, which is awkward
- The little window that opens up every 15 minutes to remind you to save your work is annoying; I wonder if there’s a way to change that setting to auto-save without asking me…there is! I found it by using Help, but it was exactly where I would have expected it to be had I been thinking more clearly about the similarities between NVivo7 and Outlook.


I used the online help screens to find out how to add attributes to cases after reading that this was possible in the Quick Start Guide. I decided to identify each of my participants as a case since we had collected data describing characteristics like age, gender, perceived computer skills, internet use, library web site use, academic status, major, etc. I created attributes for each piece of data we collected.

Next, I added some tree nodes and free nodes based on codes I had already created and been using with this data in another piece of QDA software. One of the things I’m hoping to find out is whether one can create a tree node at a higher level of the hierarchy and then move existing, lower-level nodes under that new higher-level node.

I read through one entire transcript and coded it. During the process I created some new codes. These transcripts accompany a video recording of the screen of the workstation on which the participant is completing tasks. This is the first time I’ve analyzed a transcript without simultaneously watching the video recording of the sequence of pages visited. It’s a different experience focusing completely on the text. I can’t tell yet whether I’ll get more or less or just different results doing it this way.

NVivo7 Test Drive Report #1

It HAS been a while since I posted anything here. Between hiring a new employee at work and catching some kind of flu bug in October that I've just now gotten rid of, I haven't had time to post. However, I'm working on an assignment for my Advanced Qualitative Research Methods class that I thought might be interesting and/or useful. The assignment is to try out a piece of qualitative data analysis software called NVivo7. We were required to either try it out in the qualitative computer laboratory on campus (at TWU) or download a trial version of the software to our personal computers.

Being roughly 400 miles from campus and the Qual Lab, I chose to download a trial version of QSR’s NVivo7 qualitative data analysis software. I was excited and looking forward to testing it after Mary Helen Thompson’s presentation in our October class meeting, so it is probably easy to imagine my disappointment when the software did not work properly when I downloaded it. Frankly, had the assignment not specifically called for the use of this particular software, I probably would have given up at this stage and not pursued a fix, both because of the limited amount of time I have for this assignment and because of previous, unsuccessful, frustrating encounters with technical support personnel.

However, after two unsuccessful attempts to download, install, and use the trial version of the software, I reluctantly emailed their technical support staff. I was pleasantly surprised (and not a little astonished) to receive a timely, helpful response. Unfortunately, the fix they suggested did not work. On my own initiative, I uninstalled and reinstalled the software, this time noting the error messages that appeared as the installation progressed (which I evidently ignored during the first install). I replied to the technical support staff’s email explaining what I’d done and why and what the result had been. In return, I received another pleasant, clearly articulated, helpful email that walked me step-by-step through uninstalling the component software (required for running NVivo but not created or published by QSR), including things to watch out and test for. I followed the directions exactly and am now ready to give NVivo7 a test on several of the transcripts taken from recordings of the usability test sessions we conducted on our newly designed library web site this summer.

Sunday, September 23, 2007

Ah Ha moments

Don't ya just love those "ah ha" moments? You know, when a couple of things in your mind just come together and gel into a completely new idea? Especially when it's an idea that is useful? I've had a couple this afternoon and wanted to share them.

Both ah ha's go back to a conversation I had with a classmate recently about the difference between theoretical orientation and theoretical framework in the context of qualitative research reports (e.g. articles, dissertations, etc.). We decided that a theoretical framework was a deliberately selected basis or grounding for a research project, usually based on someone else's previous research, and something that may inform one research study but doesn't necessarily inform all of one's research. A theoretical orientation, on the other hand, was more like a world view, possibly constructed from one's social and cultural background rather than consciously selected. My classmate called it a lens through which research was viewed. It's something that a researcher can become aware of and acknowledge but not necessarily something that can be entirely set aside, and definitely something that will inform all of one's research.

The ah ha's for me came while I was reading a dissertation that I'm reviewing for class. I'd been trying to apply our definitions of theoretical orientation and framework to this dissertation, and I realized first that the author was using the type of research he did (grounded theory) as a theoretical framework, not something I'd ever considered before. That was the first ah ha.

The second one was that he was using multiple theoretical frameworks. Not only was he using grounded theory as a theoretical framework, he was also using a number of other theories to frame his study. I had also not considered that before; everything I've read about doing qualitative research has talked about a single theoretical foundation for a research project, never more than one!

Saturday, September 15, 2007

Research methods

I'm taking a class in advanced qualitative research methods this fall which I'm thoroughly enjoying. It seems to be dovetailing with my work at the library a bit, which is both fun (or else I wouldn't be doing it) and informative because it's forcing me to look at a kind of qualitative research that I don't think is covered directly in class.

One of the English professors at TAMUCC asked me to speak to her master's level Bibliography and Research Methods class about the different kinds of research. When I asked her whether she was looking for a particular focus, she said she wanted a broad overview with an emphasis on research methods in the humanities. Well, what an exciting challenge! I know very little about research methods in the humanities, having focused mainly on the social sciences with some peeks into the sciences for contrast.

So I began collecting books about humanities research and discovered some very cool things. For instance, research in the humanities places much more emphasis on the hunt for literature on a topic. There are loads of how-to books about finding resources for humanities research (my personal favorite being Thomas Mann's The Oxford Guide to Library Research) but very few on how to "do" the discovery process and the criteria by which that process is judged by peers. Whereas in the social sciences, "research methods" books focus on a larger number of different aspects of conducting research, including developing a research problem/question, selecting data collection and analysis methodologies, theoretical orientations, theoretical frameworks, and guidelines for reporting results and judging quality.

At the moment, I think this is because in the social sciences (and the hard sciences), researchers create their own data (e.g. they measure something or transcribe interactions or write notes) whereas in the humanities, researchers don't create their data, they have to go out and find it (e.g. in primary and secondary documents). So the emphasis on "how to do" research in a discipline is focused on where the data comes from.

Anyhow, I managed to create a fairly general matrix comparing qualitative research and quantitative research for the class and spent about 45 minutes talking with them about the differences and similarities. Then I asked them to break into groups of three or four, gave each group a research "type" from the matrix, and asked them to develop a research problem/question and tell the rest of us how they would collect data to answer it. They all worked with the same issue: assessing learning outcomes for students in a first-year learning experience.

I expected pretty superficial responses given that they'd had all of 45 minutes of introduction but they accepted the challenge enthusiastically and came up with some really wonderful answers. What was really cool (for me) was how their answers provided me with material on which to expand my earlier explanations about qualitative and quantitative methods. For instance, one group described how they would use three different sources of data for a qualitative analysis of the issue which allowed me to expand on how qualitative projects are judged on trustworthiness and on some other ways to achieve trustworthiness (since they'd already described triangulation). I was inspired to create this exercise by John W. Creswell's book Qualitative Inquiry and Research Design: Choosing among five approaches (Sage).

I had such a good time teaching this class that I had a hard time stopping and leaving. When I did I met one of the students in the hall. She told me about how much she disliked this class because she wanted to do creative writing, not research. So I told her about narrative research where the researcher's job is to tell the story of an event or phenomenon in an individual's life. I gave her an example from Creswell's book. When I was done there was this amazing spark in her eye. She thanked me enthusiastically and asked if I thought that she could use narrative research for her final project in class.

I CANNOT TELL YOU HOW COOL THAT WAS! There is no way I can describe how that made me feel. It is so thrilling to be able to spark a student's interest, to watch the "ah ha" in their faces. It reminded me of the feeling I had the first day of library school...this is awesome, I love doing this, why didn't I know this was possible?!?! And I can't wait to do it again!

Sunday, September 09, 2007

Just when you think you know something

I've got an assignment to review an article that reports on qualitative research. The one I chose is a grounded theory approach to scholars' use of and decisions to publish in open access journals. After I read through the article once, I was taken aback at how little of what I know about qualitative research was included. There was more explanation of the method than description of the research.

Since I had been thinking of using grounded theory for my mock dissertation proposal later in the semester, I checked out the grounded theory "bible" (Strauss and Corbin, 1998) from the library. What they really say about it (and the way they say it, which, I'm learning, is just as important to qualitative research) isn't what I interpreted from what others (Patton and Creswell in particular) were saying.

Just goes to show that you really need to consult the primary source.

Thursday, September 06, 2007

Lessons learned

When they talk about theory in math, look out! They don't mean theory the way I've always thought about it, or at least the way I've thought about it lately, in terms of theories of education or communication expressed in words with specific epistemological, ontological, and axiological perspectives. They mean formulas and proofs of formulas that prove not only the practical, realistically possible situations but all situations no matter how highly improbable.

Now I suppose you could argue that these two things are fairly similar. Theory in the social sciences is described with words, sometimes highly specialized terms, and this is not all that different from theory expressed with numbers and symbols, which are also a highly specialized language of sorts.

You could argue that, and you'd be right, but there is a world of difference in the preparation one needs in order to understand mathematical theory versus theory in the social sciences, so perhaps I won't appear too naive when I tell you that I didn't REALLY understand the difference until this week.

My degree at TWU requires me to complete not only the core library science courses but also several courses in a cognate area. I chose statistics as my cognate and included in my degree plan a course called Theory of Statistics, in which I enrolled this semester. Sadly, and much to my disappointment, I have learned quickly that I am nowhere near prepared to succeed in this class.

And it hasn't been pretty. I started tripping over calculus and trigonometry after about the second class, and it went downhill from there, which is disappointing because I was really enjoying learning about statistics from another perspective. However, I've come to my senses, realized that I don't have the mathematical background to understand the theory that underlies the statistical manipulations that programs like SPSS and SAS do for us social scientists, and dropped the class.

Maybe I can be a mathematician or a rocket scientist in my next life!

Monday, September 03, 2007

Podcast for FS6793

Today I learned how to podcast courtesy of an assignment for my Advanced Qualitative Research Methods class. Here's what I created for them...



Thursday, August 16, 2007

Research methods and job ads

I'm writing up a job ad for a vacant position in my department today. I've been thinking about the skills and qualifications that I'm looking for and it occurs to me that one of the things that makes hiring difficult is that we (I) typically don't think much about how I'll measure the qualities that I'm looking for in a new employee. For instance, if I say that I'm looking for someone who is self-motivated and has a positive attitude, how will I determine whether my applicants have those qualities? That's not a question I would have asked myself before I started learning about research methods.

Before, I would have probably tried to come up with some way to quantify things like supervisory experience and familiarity with Windows Office software which are fairly easily quantified into measurements that have an ordinal relationship. But I would also have tried to quantify things like self-motivation and positive attitude which I think is much more difficult, not to mention subjective. This time I'd like to take a more qualitative approach to analyzing resumes, applications, reference letters and interviews (and helping the members of my search committee to do the same).

Sunday, August 12, 2007

Usability and the new library web site

This is it! The weekend that we take down the old library web site and publish the new one. It's a project I've worked hard on all summer.

In July, along with some colleagues, I conducted usability tests on the new (unpublished) site. Then I reviewed the recordings of the usability sessions, as well as the comments that my colleagues and I made after each usability test session and comments from the library staff, and compiled a list of changes to the new site. The changes were mostly based on the things that our test participants had trouble with during the usability testing, although some were based on staff comments and research that the Information Architecture Working Group did as they planned the new site. Finally, a number of us in the library have worked very hard to implement those changes in time to complete preparations for the beginning of the fall semester.

As of Monday, August 13, 2007 you can see our new web site at http://rattler.tamucc.edu. The parts I worked on most are the Search All Databases pages (you can see them by clicking on the "Find Articles" link on the library homepage) and the Find Journals list (you can see this by clicking on the "Find" drop down menu at the top of the page and selecting Journals). Unfortunately, because our contracts with online information resource vendors require us to limit access to them to students, staff, and faculty of our university, you probably won't be able to see much.

Anyway, I'm fairly proud of what we've accomplished this summer and happy to have some data with which to work this fall. I'd welcome any comments that anyone cares to make about the site.

Thursday, July 26, 2007

Getting carried away

All of a sudden I'm racking up a list of fun things to do this fall. I responded to a call for articles on serials and electronic resources management and social sciences data archives in a new library science encyclopedia. Don't know that I'll get invited to actually write them but it sounded like a good opportunity (especially since the articles would be refereed).

I also found out that I could apply (and would probably be accepted) to work as a Graduate Teaching or Research Assistant in the School of Library and Information Studies at TWU. That sounds like GREAT fun and would fill in a gap in my vitae (I need some teaching experience).

I'm taking two classes this fall, Theory of Statistics and Advanced Qualitative Research Methods.

And I still have tons of data from the usability study I just completed at work to analyse. I'd like to write an article (or two) based on those results.

I'm also chair of the Library School Outreach Committee for NASIG this year. I had to step down from chairing their Awards & Recognition Committee because the Executive Board felt (correctly) that I wouldn't be able to give both committees my full attention.

And I'm still reviewing reference books for Library Journal and the occasional serials related book for The Serials Librarian.

I guess it sounds a little like I'm bragging about all of this (and maybe I am a bit) but my intention when I started writing this was to try to "think aloud" about which of these opportunities I should accept (if and when they're offered). 'Course some of them I've already committed to. I have a bad habit of taking on more than I can do which I thought I had under control (partly by taping a sign that said "NO" on my computer monitor) but which I've obviously let run rampant again this summer.

Friday, July 06, 2007

Book reviews

I found a couple of the book reviews I've written recently and thought you might be interested in reading them.

One is for the Billboard Illustrated Encyclopedia of Country Music.

And the other is for the Encyclopedia of Politics and Religion (you have to scroll down a bit to find this one).

Wednesday, June 27, 2007

ALA 2007 – Eye to I: Visual Literacy Meets Information Literacy

I liked this session because I left my computer at the hotel on Sunday (because we were going to the National Gallery of Art later in the afternoon) and had to take notes on a very small piece of paper, and that gave me time to think. Actually, I hadn’t planned to take notes at all, but apparently I’m physically incapable of not writing down the interesting things that I hear. Anyway…

The presenters were Cindy Cunningham, Director of Media Metadata and Cataloging at Corbis Corp., Danuta Nitecki, Associate University Librarian at Yale University, and Loanne Snavely, Head of Instructional Programs at Penn State University, and their presentation was very well organized. They began by explaining that visual literacy is important because images have become so prevalent in our lives and will in all likelihood continue to be. Thus, visual literacy will rapidly become as important as information literacy (by which they mean textual information literacy).

There are several characteristics of visual images that make them more complex to seek and find. First is ownership; copyright of a visual image is often held by multiple entities including (using a photograph of an artwork as an example) the artist, the owner of the artwork, and the photographer. The second complexity is in making images accessible (e.g. cataloging them) because of the subjective nature of interpreting the symbols represented in an image; much more subjective than the interpretation of text for which we have much greater consensus on meaning.

How’s this for a great job? At Corbis there is a team of people who scan the media (newspapers, TV, etc.) for trends and then they search for images that have some meaning or representation for the trend AND as many words as they can come up with to describe them…including making up words!

They shared a couple of web site URLs with us as examples of trends in photographic images:

http://www.ted.com/index.php/talks/view/id/129

http://www.video.google.com/videoplay?docid=824643990976635143

Then they asked, so what is visual literacy and why do we need to describe it? The answer is

- to gauge student development

- to evaluate teaching effectiveness

- to measure people’s ability to use them (visual images)

The problem is that there are no standards to describe what the “right”, “necessary” level of visual literacy is. Danuta Nitecki presented a rubric that they have developed as a potential standard (FMI see ). The disadvantage to their rubric is that it doesn’t address or measure the ability to find images.

Cindy Cunningham mentioned that there are studies about image seeking. That made me wonder whether any of the methods and frameworks that have been applied to (textual) information seeking have been applied to image seeking (e.g. Lynne Westbrook’s mental models or learning theory). It reminded me of my first year in library school when I was so interested in researching the availability of visual images on the web, in particular art images.

Monday, June 25, 2007

ALA 2007 - The Future of Information Retrieval

This session was composed of four speakers. The first was Marydee Ojala who edits ONLINE: The Leading Magazine for Information Professionals and blogs at http://onlineinsider.net/.

The questions she addressed were:
Are there philosophical differences between information professionals and end-users?
How does this affect searchability and findability?

IPs enjoy the search and sharing info among themselves, BUT this leads us to forget to stop, to overlook things, and searches begin to take longer. On the other hand, users want to find things; they don’t care about sources. The web makes searching pervasive but also unstable, and it has produced a culture of ‘gaming the system’ among publishers and providers.

In the future, the worst case scenario would be a highly controlled information environment where price doesn’t guarantee quality: shopping trumps research. In the best case, interfaces become intuitive, there are no licensing wars, and high quality information is easily available.


The next presenter was Jay Datema, Technology Editor for Library Journal.

He talked about how “search[ing] has been commoditized,” where the cost is privacy and people are making money from it; however, search syndication is a benefit to those who know what they’re looking for.

Sites like del.icio.us make it possible to find out what people are reading quickly and easily (presumably that makes it better than searching). See my del.icio.us page at http://del.icio.us/sarahwsutton

Authentication (and its ability to preserve the privacy of the searcher) is the future of searching. There is a growing expectation of finding the past in the search (e.g. backfiles).

He mentioned Zotero as a means of creating a personal digital library. Note that Zotero is one of my del.icio.us bookmarks in my ToRead tag category.

Here’s an interesting idea: Grokker as a federated search mechanism (some of the SUNY libraries are using it for that purpose).

Next, Mike Buschman, Technology Evangelist, Live Search Selection (Microsoft), spoke on The Future of Information Retrieval: When all books are online, which seemed to me to be a bit preachy and sales-pitch-ish.

http://get.live.com/

MS Live Search Academic. About 5% of the world’s information is online. He talked primarily about the Live Search and the Book Search products.

In the future directions section of his presentation he mentioned “unlocking non-textual information.” He mentioned music instruction books as an example, but I immediately thought of the visual information seeking session (Eye to I) that I attended yesterday.

Questions to consider:
What is the atomic unit of the book?
What is a work?
What is the future of the physical library?
How is the movement to digital information going to affect what library professionals do?

Finally, R. David Lankes http://www.DavidLankes.org (LIS Faculty) presented on The future of information retrieval: Finding conversations

I realize that these notes have been getting progressively more disjointed, probably because it’s early, it’s the third day of the conference, and he’s the last of four speakers.


The failure of Reference EXTRACT: mining data from reference transactions (which had been cleaned of any personally identifying info) and then calculating the frequency of appearance of databases.
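As a rough illustration of that kind of mining (not the actual Reference Extract code, which I haven't seen), here is a minimal sketch that counts how often database names appear in a set of already-anonymized transcripts; the transcripts and database names are made up.

```python
# A hypothetical sketch of tallying database mentions in anonymized
# reference transcripts; not the actual Reference Extract implementation.
from collections import Counter

transcripts = [
    "Have you tried JSTOR for that article?",
    "ERIC and JSTOR both index education journals.",
    "Start with Academic Search and then try ERIC.",
]
databases = ["JSTOR", "ERIC", "Academic Search"]

counts = Counter()
for text in transcripts:
    for db in databases:
        if db.lower() in text.lower():
            counts[db] += 1

# Frequency of appearance of each database across the transcripts.
print(counts.most_common())
```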

Miwa’s Question Paradox: people ask the same questions at the beginning and end of a search with completely different intentions

McClure’s Citation Strategy: in order to get cited, say it first, say it last, or say it stupid

Note: he made both of these up based on things people he knows have done and/or said.

http://iis.syr.edu/Projects/PNOpen
What he means by ‘conversation’ has to do with this project to decompose reference transactions using complexity theory and network theory as frameworks…the conversation is the network created by the paths between utterances in these decomposed transactions.

Find the paper they wrote called, “Participatory Networks: The Library as Conversation”.

The web (etc.) makes a great deal of “conversational” data available.

Book as “conversation” (his definition) because the author had to organize the info it contains into chapters. That is an interesting idea to compare to the NASIG keynote speaker’s take on conversations in books as defined by marginal notes…a conversation in the more traditional sense: people communicating ideas in writing over time.

Saturday, June 23, 2007

ALA 2007 - Utilizing learning theory in online environments.

I arrived at this session late so I don't have the structure of the talk in front of me (so to speak) and can't structure this post in the same way as I have the preceding ones. Here are some of the ideas she presented:

Goals of learning in any discipline are the same: learning to think like, and to process and interpret data like, a [enter discipline here].

There are lots of different types of learning (visual, auditory, kinesthetic, etc.), and you can tell by observing learners what their preferred learning styles are. By 'learning' from them we can create richer online learning environments.

Most of this presentation is focused on library instruction and virtual reference.

Current educational theories (popular now):
(1) The idea that students learn in social groups; they learn from listening to each other and talking about things.
(2) Situated learning means that the learning takes place in the same place in which the knowledge will be later used; learning in context.
(3) Brain based learning or "ten minutes on, ten minutes off", in other words sharing some information and then giving the brain time "off" to absorb it.
(4) Behaviorism; negative feedback doesn't work.

A combination of face to face learning and online learning works best; helps to create an ongoing conversation which is (again) how Millennials learn.

She addressed generational differences in terms of how they inform the learning process.

It seems to me that a lot of this has to do with non-verbal communication online...which sounds like an oxymoron until you think about "places" like Second Life and the presenter's comments about the changes in online communication that occur once the communicators have met face to face. For instance, think about the way you IM with someone or read a message from them (be it email or a blog comment, etc.) and how careful you are about what you say when you don't "know" them in person, compared to how you communicate with someone you do "know" in person, when you can almost hear the tone of their voice and can more easily tell when they're being sincere and when they're being sarcastic.


Interesting observation: I seem to engage in more reflection about the speaker's topic when I don't have an agenda for the presentation.

The Power Point for this presentation will be posted on the ALA website.

ALA 2007 - LRRT Research Forum: Information Seeking from Childhood through College

Note: LRRT is debuting a research mentor program; for the link and more info see their website (after a few days). Also, to volunteer for an LRRT committee, contact the incoming president soon.

The four programs in this session were ordered by age of the participants.

First was Lynne McKechnie (the I School at UW) speaking on “Spiderman is not for babies”: the boys-and-reading problem from the perspective of the boys themselves.

Boys lag behind girls in standardized tests of reading skills. McKechnie conducted semi-structured interviews with boys aged 7 to 12 and made lists of all of their reading materials (including books and videos). They found that boys are reading; what they are reading differs between boys and girls.

This was a qualitative study, and her results were presented in the voices of the boys who were interviewed. There were lots of quotes to illustrate the findings. Some of them were collected by (i.e. interviews were conducted by) her students (presumably MLS or PhD students). I would have been interested in hearing a little bit more about the researchers’ perspectives in order to get a feel for their research paradigms.

Melissa Gross (Florida State) presented next. Her presentation was entitled “The Information Seeking Behaviors of School Children” and was part of a larger study that used both qualitative and quantitative methods and was published in the form of a book by Scarecrow Press. She focused on the qualitative results in this presentation. In it she compares self-generated and imposed information seeking. Some of the children were excited and happy to be asked by a teacher or classmate to find some piece of information, but while the younger children looked on this positively, the older children perceived it as not so positive.

She began by defining the terms in her research question and the roles the people in her study generally took. She used focused in-depth interviews with seven participants from one school, including teachers, students (between the ages of 4 and 12), and the school library media specialist. She also spent some time explaining the limitations placed on the study by the ages of the child participants. She presented her results in her PowerPoint slides and provided anecdotal evidence (the children’s stories about their reading) verbally.

Here’s a thought: I wonder how one creates trustworthiness in this kind of study. Can you still use member checking with young children? How? Maybe through triangulation. I’ll have to look at her book to find out I s’pose.

It will also be interesting to read all of the presenters’ published research reports. It seemed to me that they presented here in language and terms that would be accessible to this audience.

The third presentation was on tweens’ information seeking behavior. Tweens are ages 9 to 13; along with this information, he described some of their other characteristics and context. What he presented is part of a larger study by Karen Fisher called “Talking to You” and has to do with finding out why people prefer to turn to each other for information, particularly for what she calls “everyday life information seeking.” In order to gather data they planned a “Tween Day,” a sort of one-day camp, which they repeated three times at three different locations (one on campus at UW, an urban outreach ministry, and a suburban elementary school).

They were asking things like: What types of everyday information do tweens perceive a need for? How do they seek everyday information? What barriers do they encounter? (And four more that I missed because the slide passed too quickly.)

They used focus groups, creative interactive web-based exercises, and individual interviews, all of which were recorded, to collect data. He didn’t talk much (nor did the other presenters) about how they analyzed their data. He presented results and quotes from transcripts both in his PowerPoint slides and verbally. He gave a hint of their data analysis in describing their need to ‘decode’ some of the tweens’ terms (“stuff”); he talked about coding the transcript of the group interviews.

It’s interesting that, so far, none of the information sharing is happening in electronic environments. Whoops! Just as I write this, one of the quotes on one of his slides included a reference to chat rooms.

As a side note, he is a very engaging speaker and obviously passionate about tweens and his research…so much so, in fact, that he’s having trouble stopping.

Here’s an interesting finding (that they’re going to explore further): when asked what librarians can teach you to use to find information, newspapers and magazines and articles were the category that got the fewest votes.

Lynn Westbrook presented last on “Google the Random Stuff: Mental Models of Academic Information Seeking.” Her purpose for the study was to use mental models to examine information seeking: how students visualize and conceptualize information when they’re dealing with an imposed query. The sample for the study was purposive and self-selected and bounded by matriculation level and academic achievement. She did in-depth interviews and observations, transcribed it all, and used HyperResearch to code and analyze the data. Then she presented the components of some of her participants’ mental models.

She presented three different perspectives from the students in her study using quotes. Then she defined mental models and presented the advantages and disadvantages of their use as a framework for research, as well as how they’re used and how they develop. She spent most of her time expanding on the models that emerged from her data analysis.

Hmm, it would be interesting to look for how and if mental models and competency theory are related in the literature.

ALA 2007 - Library Research Round Table Research Forum, part two

In the second segment of this session, Laurie Bonnici (Drexel) and Lynne Watson (Florida State) presented on "Other place as library." What they were interested in was whether "other places" compete with libraries (using Oldenburg’s “Third Place” (1989) as their theoretical framework).

They used unobtrusive observation between 4/2006 and 8/2006 and a web based survey in February 2007 as their methods.

They presented some of the demographics from their survey and one of those was generation: 1 silent, 12 boomers, 12 x’s, 90 millennials.
Only 3% of the respondents to the survey given in the coffee house were using library resources but 3% also said they would like assistance with using library resources.

Most of their respondents stayed in the library between one and five hours (50% roughly).

Some of their reasons for not using the library:
- have a computer at home/work
- takes too long & pay to print
- internet service is poor
- no wifi
- have wireless laptop, don’t need to go to the library

In the library café most get coffee and move to the library (50%+).

They went through this presentation so fast that I only got about half of their points; I need to look to see if they publish this somewhere (proceedings?).


Finally, Marie Radford (Rutgers) and Lynn Silipigni Connaway (OCLC) presented the results of a multi-year (10/05 – 9/07) project looking at virtual reference. Four phases: focus groups, analysis of live chat reference interviews, online surveys, and telephone interviews. I’m not sure what their research question was; from the discussion I think that it has to do with the success of virtual reference transactions and, specifically, the success when the librarian clarifies the user’s query as compared to when the librarian doesn’t do so. [Query clarification = reference interview.]

Their results are available on the web at OCLC in a URL that ended with /synchronicity.

ALA 2007 - Library Research Round Table Research Forum

This session included three separate presentations by library school faculty (for the most part).

The first presentation was by a group of researchers from ProQuest (Joanna Marco, John Law, Serena Rosenham). I've seen John Law present these results before on a webinar earlier this spring so most of it was not new to me. But they have some really interesting results:

Ethnographic field study of how students seek information. Their research has gone through several phases since September 2006. Most participants so far have been undergraduates but they’re going to target graduate students in the next phase.

Students were actively engaged in a class research assignment and were studied onsite and remotely (via Userview, a usability testing software for use on the Internet). The remote observation worked better because it allowed them greater geographic coverage, because they obtained recordings of each session, and finally because it allowed the student to relax and act more naturally without an observer in the room with them.

They used Facebook to recruit participants. A flyer was placed on Facebook, and a survey was used to give prospective participants further information about the study and to filter for the characteristics the researchers needed. Participants included grads and undergrads in a variety of disciplines and with a variety of skill levels.

How do students decide which resources to use for their research?
- Students ARE using library resources when we teach them to do so in the context of the course and at the point of their need; story of the fourth year student who used library resources but had only been doing so since a librarian had visited their classroom to show them how.
- Endorsement of instructor; story about the 3rd year Biology student who used JSTOR for her biology research.
- Brand awareness has an impact on what students use to start their research; story about the student who hesitated a long time over selecting a database and spoke about recognizing ProQuest.
- Google

How do students use library research?
- 95% of participants engaged library resources for their research
- They often work with multiple resources at the same time; average number of tabs open at a time was between 5 and 12.
- Abstracts are essential in identifying relevant articles (even when the full text is available)
- They have no serious difficulties using databases once they find them

Chief inhibitors to success in using library resources:
- lack of awareness of resources; Law interprets from this a need for libraries to increase marketing efforts
- difficulty navigating library website to locate resources
- students search the library catalog for articles because the search box is front and center on the library web page
- authentication requirements and difficulties create a barrier and an obstacle to entering library resources; also a lack of awareness of the purpose or even existence of authentication

How students REALLY use Google:
- some 90% of researchers use internet search engines for their research according to Outsell and OCLC data; but in the case of this study it was 32%. What’s important is HOW they are using it
o for quick answers and definitions
o uses it as sufficient when quality isn’t a concern
o because they’re insufficiently aware of library resources
o and because they’ve had a bad experience using library resources (like an authentication barrier)
- when they used google they were less effective than they were when they used library resources (in terms of obtaining quality content)
- as a handy look-up tool
- to get specific answers

Their end user surveys support these findings. They had about 10,000 respondents who were invited from ProQuest websites and from Facebook to take the survey.
- they recognize that the library has higher quality content
- and that the library has more content
- but google’s interface is easier to use
- prefer to use library database for academic research
- prefer to use google for quick look ups

How do social networking sites factor into student research?
- for the most part, they don’t
- they use it for communication between group members when working together on a project

I wonder what’s going to happen to this project and ProQuest’s other research projects in light of the merger with CSA?

Informing the future of MARC: An empirical approach

Bill Moen (UNT) and Shawne Miksa (UNT) presented a research study in which they examined the use of MARC by catalogers in order to provide empirical evidence of that use and contribute to a discussion within the profession about future uses of MARC. Bill and Shawne presented their research and some results; Sally McCallum talked about “MARC Futures.”

Moen and Miksa's strictly empirical approach is interesting to me in light of the book I’ve been reading lately, Research Methods in Information, which emphasizes qualitative research as being more accessible to practitioners in the information professions than quantitative research.

Detailed information about the study is available at http://www.mcdu.unt.edu, which will include the program ppt and handout. Bill noted that they will be making the record parser and MySQL database that they used for this project available in an open source environment so that other researchers can work with their own record sets and ask some of the interesting questions that the audience raised.
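Their parser isn't released yet, but the basic idea of empirically tallying which MARC fields actually get used is easy to sketch. Here is a minimal, hypothetical example using the pymarc library and a made-up file of records; it only illustrates the general approach, not the MCDU tooling.

```python
# A minimal sketch (not the MCDU parser) of tallying MARC field usage
# across a file of records, using the pymarc library.
from collections import Counter
from pymarc import MARCReader

tag_counts = Counter()

# 'records.mrc' is a hypothetical file of binary MARC records.
with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:  # defensively skip records that could not be parsed
            continue
        for field in record.get_fields():
            tag_counts[field.tag] += 1

# The ten most frequently used tags across the record set.
for tag, count in tag_counts.most_common(10):
    print(tag, count)
```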

Some of the areas and characteristics of MARC in which Sally McCallum expects to see change are:

Its granularity; there is the potential for a reduction in the number of fields and subfields.

Its versatility; MARC has the potential for “community profiling” (by which she means models I think), in other words it could be used in subsets for specific purposes like FRBR, MODS, etc.

Extensibility; this seems pretty similar to versatility to me, but I think she means not just creating subsets of fields but using them for new purposes, e.g. extending their use. For example, it has the potential to link rights information to a bib record.

Hierarchy support: MARC has a little but not much ability to define hierarchies; she predicts the development of other means of doing this.

Crosswalks (data element mappings): they are expensive in terms of time required to create and maintain them.
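To make "crosswalk" concrete: at its simplest, a crosswalk is a mapping table from one scheme's elements to another's, plus rules for everything the table doesn't cover. Here is a toy sketch loosely modeled on the familiar MARC-to-Dublin-Core mapping; it is deliberately simplified and incomplete, which is exactly where the maintenance cost comes from.

```python
# A toy, simplified illustration of a crosswalk: a handful of MARC tag/subfield
# pairs mapped to Dublin Core elements. Real crosswalks cover far more fields
# and many special cases.
marc_to_dc = {
    ("245", "a"): "dc:title",
    ("100", "a"): "dc:creator",
    ("650", "a"): "dc:subject",
    ("520", "a"): "dc:description",
    ("260", "b"): "dc:publisher",
    ("020", "a"): "dc:identifier",
}

def crosswalk(record_fields):
    """Translate a list of (tag, subfield, value) triples into DC pairs."""
    out = []
    for tag, code, value in record_fields:
        element = marc_to_dc.get((tag, code))
        if element:  # silently drop anything the crosswalk does not cover
            out.append((element, value))
    return out

print(crosswalk([("245", "a", "A sample title"),
                 ("100", "a", "Doe, Jane")]))
```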

Tools: the MARC toolkit provides tools for transferring records between other formats and MARC, but not between those other formats themselves; Sally envisions development of additional tools using MARCXML.

Cooperative management: there is already a lot of participation in MARC via lists and she expects that to continue

Pervasive: MARC is used globally and will probably continue to be so through XML.

The interesting thing about this presentation was the juxtaposition of what were basically two presentations, Moen and Miksa’s MARC research project and McCallum’s predictions about its future. It seems to me at first glance that the two were pretty much in agreement with each other in terms of the future of MARC as a standard for making bibliographic description available to users in a way that supports their needs. This basic purpose is unchanging even while MARC itself will continue to evolve in reaction to advances in technology and newly developing needs like the ones that Sally mentioned: crosswalks, improved description of hierarchies, and the bringing together of disparate data.

‘Research Methods in Information’ chapters 9 and 10

These two chapters cover experimental research (chapter 9) and ethnographic research (chapter 10), which, of course, are at opposite ends of the research spectrum from one another. It’s an interesting contrast and, after having noticed this, I realized that there is a similar contrast between the first two chapters of this part of the book (case studies and surveys). The contrast is greater between experimental and ethnographic research, and it appears as if she is preparing us for this greater contrast by allowing us to compare and contrast case studies and survey research first; sort of easing the reader toward both ends of the spectrum.

True to her word (in the introduction), she discusses the unique aspects of conducting ethnographic research in a virtual community, not differentiating it from ethnographic research in other communities but providing insight into the particular issues unique to a virtual environment. What interested me most here was a set of qualities she uses (borrowed) to define a community in a virtual environment. First, because she doesn’t say (and I wondered) whether there are also accepted characteristics that define a ‘community’ in a non-virtual environment (other than the obvious physical ones). Are they so obvious that ‘anyone’ will recognize them? I think it would be interesting to go back and look at that in depth.

And second, she discusses another problem posed by virtual communities, that of observing the personal identities of community members, because they are more easily hidden in a virtual community. She says, “Dissembodied communication makes it very difficult for a researcher to engage in participant observation” (p.120). I have to disagree with that a bit. I think that the characteristics of an individual that a researcher can observe are different in a virtual community vs. a physical one, but they certainly still exist beyond the verbal/textual. For example, one might as easily observe the communication behaviors exhibited by members of a virtual community as one could in a physical community. The means of communication may differ (speech vs. text) but the act of communicating is happening.

‘Research Methods in Information’ chapters 7 & 8

Chapters seven and eight begin Part Two of the book, in which the author describes a variety of qualitative and quantitative research methods, beginning with case studies (chapter seven) and surveys (chapter eight). In each chapter the author describes and defines the method (both what it is and what it is not) and its important features, and then she provides further description of the process.

The case in a case study must have clearly defined boundaries. The purpose of the study is also important in that it provides a means of keeping the researcher on track with the project rather than veering off in search of answers less relevant to the research question. Case studies can be intrinsic (to gain understanding), instrumental (to examine a phenomenon), or multi-case.

Continued emphasis on the researchers’ responsibility. Added emphasis on the need for structure (in the form of a well defined means of organizing data) that does not interfere with or constrain the emergent process of qualitative research. She also covers the accepted means of establishing trustworthiness (in qualitative methods) and reliability (in quantitative methods).

Her chapter on case studies reminded me of what I learned about Action Research in a class this past spring. In Action Research, the researcher is focusing on one particular case in context, not with the intention of generalizing results outside of the case but in order to better understand the inner workings of the community of stakeholders involved and, perhaps, a particular phenomenon within that community, with, in the case of Action Research, the added purpose of allowing the community to solve a community problem.

Is Action Research a type of case study? I don’t think so. I think Action Research is similar to case study and that, perhaps, case study would be one way of approaching an AR project but certainly not the only way. I’m still struggling with where AR fits into my larger picture of research as a whole.

Survey research allows one to “study relationships between specific variables”. Descriptive surveys seek to describe a situation by revealing relationships between the variables while explanatory surveys seek to explain the relationship between variables in terms of cause and effect (although there is a lot of debate about how far one can go towards saying variable A caused variable B since survey research does not seek to isolate variables).

I also found a citation to a study in this chapter (she uses it as an example of an explanatory survey) that I think will be very pertinent to my research into how members of an academic community seek information in electronic environments. [Tabatabai, D. and Shore, B.M. (2005) How experts and novices search the Web. Library and Information Science Research 27(2): 222-48.]

Tuesday, June 19, 2007

"Research Methods in Information" chapters 5 & 6

I'm obviously going to have to step this up if I plan to be done with my "official" review by next Tuesday...sigh...I guess that's what long plane rides are for, right?

Chapter five is called 'Sampling' and details the differences between sampling for a qualitative project and sampling for a quantitative project. She presents and describes a couple of both probability and purposive sampling techniques. The only thing I missed in the first read is the difference between stratified random sampling and cluster sampling. I should know that already but didn't pick up on the differences in the text.


Chapter six is entitled 'Ethics in research' and is very appropriately placed at the end of the first section of the book, which provides an overview of basic research and places it in context. Here she covers the basic points of research ethics including informed consent, the difference between anonymity and confidentiality, and the importance of making promises that you can keep.

At the end of the first section of the book, I have to say that I'm impressed. Impressed particularly with the sensible organization of the book, its clear structure, and the exercises at the end of each chapter. If I ever get to teach a research methods course, this is the textbook I'll use.

Random observation: on p. 71-72 she says, "there is an argument that observing people in public places needs no permission or consent as their behavior, by definition, is public and therefore available for all to see, study, and analyze." In theory, I agree with this, but in practice I have to wonder what all those people talking on their cell phones in libraries, airports, grocery stores, and so on would have to say about a researcher who recorded the "public" portion of those conversations for analysis and publication.

Need a laugh?

Try this librarian humor....

Sunday, June 17, 2007

'Research Methods in Information' chapters 3 & 4

Ok, yeah, I'm a little behind in writing up my notes. But I have continued to read so here are my notes on chapters three through six.

Chapter three is called 'Defining the research'. Here she gives the reader a 'pre-operational structure' of research with descriptions of each part of the structure and continues to use a particular case as an example. Emphasis is given to the problems inherent in trying to 'prove' a hypothesis. There is a really good, concise, clearly written section on defining variables. Finally, she clarifies the distinction between the goals and the aims of the research project.

In chapter four she describes the usefulness of a written research proposal no matter what the context of the research project. I particularly enjoy (and, I confess, agree with) the emphasis that she places on putting the responsibility for the research project squarely on the researcher. In this chapter, she does so in the context of the care that the researcher should take in complying with all requirements applicable to writing the research proposal.

Some of my favorite quotes from this chapter are:

"Whatever choices you make you will need to demonstrate that you understand the nature of the choices you have made." (p.54) Further down the page, she alludes to this again in the context of qualitative data analysis.

"You are opening a can of worms [in undertaking a research project] as soon as you begin to ask questions, do not expect to find all of the answers." (p.56)

NASIG 2007: final thoughts

I attended two other sessions that I either arrived late to or attended while focused on other things.

The first was Bob Schufreider's session on "Making sense of your usage statistics," which I'm sorry I didn't arrive on time for because I am very interested in making better use of our usage statistics. Bob works for MPS Technologies, which makes the ScholarlyStats product that we've just trialed.

The second was the final keynote speaker, Daniel Chudnov from the Library of Congress. His basic theme was the need for lowering barriers between libraries and everything else on the web. He pointed out that every major media outlet is using dynamic service links, which cries out for OpenURL: they're doing it and we (libraries) are not.
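
For anyone who hasn't seen one, here's a rough sketch of the kind of OpenURL link a dynamic service link would carry (in Python, just to build the query string; the resolver hostname is made up, and I'm using the Tabatabai and Shore article I mention elsewhere on this blog as the example metadata):

from urllib.parse import urlencode

# Hypothetical resolver address; every institution has its own link resolver.
resolver = "https://resolver.example.edu/openurl"

# Article-level metadata in OpenURL 1.0 key/encoded-value (KEV) form.
params = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.aulast": "Tabatabai",
    "rft.jtitle": "Library and Information Science Research",
    "rft.atitle": "How experts and novices search the Web",
    "rft.volume": "27",
    "rft.issue": "2",
    "rft.spage": "222",
}

print(resolver + "?" + urlencode(params))

The whole point is that the link describes the article rather than pointing at any one copy, so the resolver can send the user wherever their own library actually has access.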


I'm really disappointed not to be able to access all of the conference handouts. For the first time this year, NASIG put program handouts on the web using Moodle, which is very exciting for me since I tend to take notes on my laptop in sessions anyway and it's lovely to have a copy of the speaker's materials at the tips of my fingers. But this was obviously not meant to be since, try as I might, I can't get the site to either recognize me or send me the email that contains directions for resetting the password they gave me.

However, that's the ONLY negative note about this year's conference. The venue was lovely and convenient; the programs were timely and interesting and offered great variety. And the attendees were just as pleasant as always.

NASIG 2007: Education trifecta: win attention, place knowledge, show understanding

This session was presented by Virginia Taffurelli, Betsy Redmond, and Steve Black. I was fortunate enough to be assigned to introduce them and thus had the opportunity to talk a bit with Steve Black, whom I hadn't met before, about the lack of attention to serials and electronic resources in library school curricula. He teaches one of very few courses dedicated to this topic (among ALA-accredited LIS programs in North America).

Virginia and Betsy presented some of the basics of developing and delivering course content. Virginia spent most of her time describing the use of course delivery software (WebCT, Blackboard, Moodle), and Betsy focused on practical tips for delivering a CE course. Their focus was a CE course in fundamentals of acquisitions for ALCTS.

Steve reviewed the syllabus for his course (which he made available to us in print). He talked about his reasons for writing his own textbook: he had contacted Nisonger to ask if he was going to revise his 1998 text and Nisonger had said no. Steve's textbook was published in November 2006. Prior to that, he had used a copy of the manuscript in classes for two years and solicited feedback from the students, which he found very useful.

He covered the objectives for the course which include a small module on cataloging a serial (they catalog on paper in class then as homework they compare what they’ve done to a MARC record online).

I really enjoyed this presentation and hope that someone follows up next year to answer some of my remaining questions: Why is LIS education seemingly ignoring serials and e-resources management? What is covered in other serials courses (or modules within courses)?

NASIG 2007: How does digitization affect scholarship?

This was probably the best session I attended.

Ithaka, http://ithaka.org/research, is an organization that studies the advance of technology and how it can/should be managed. Their mission is to help academic institutions to adapt to and use technology.

The presenter, Roger Schonfeld, started by asking the audience what characteristics a scholarly journal should have (format, aggregated?, open access?, indexed where?, commercial or non-profit?, sustainability) in order to develop a framework for analyzing the effects of digitization.

Two-sided markets = a system comprising at least two user groups who need each other, characterized by a platform (or intermediary) that balances the interests of both groups (sides of the market). He used the credit card network as an example, where the merchants and the card holders are the two groups and the card companies are the platform or intermediary. The concept of two-sided markets is the framework that Ithaka used to examine their question about the effect of digitization on scholarship.

The two sides of the scholarly journal are readers and authors. One of the motivations that operates between the two groups is quality (high-quality authors attract high-quality readers and high-quality readers attract high-quality authors). This characteristic is static in relation to the format in which the journal is published (the exchange mechanism = format = platform that joins the two groups).

In the traditional pricing model, the reader side involves subscription fees and the author side involves page charges and advertising fees. The question is how these charges are, or should be, distributed.

Demand side
What are the sources of value of a journal on the (librarian) side? (audience participation)
- research/curricular support
- impact factor
- use
- ARL ranking
- Preservation of the record of scholarship
- Accreditation
- Platform stability
- Areas of collection emphasis
- Peer review
What are the sources of value of a journal on the reader side?
- findable
- usefulness and credibility of content
- currency
- author quality
- accessibility
- relative importance to field
- do they publish in it?
- Peer review
- Indexing
- Impact factor (as a proxy for quality)
Supply side
What are the sources of value of a journal from the advertiser’s perspective?
- number of subscriptions
- quality of reader
- reader’s interest in products
- cost
- findability
What are the sources of value of a journal from the author’s perspective?
- reputation with colleagues
- how widely read / cited
- circulation
- speed of publication
- peer review
- impact factor
- cost to submit
- marketing and promotion

Findings from a survey of 4100 faculty members about the characteristics important to authors:
The most important characteristic was circulation (80% of participants said that this characteristic was very important), followed by no cost to publish (65%), assured preservation (60%), high selectivity (50%), accessibility in the developing world (45%), and free availability (35%).
- authors submit to journals that can maximize the impact of their work on their field
- some disciplinary differences in the above data
- how has the impact of a journal changed in recent years? (digitization, more widely accessible)

Their research question is: how does digitization affect the system of scholarly communication?

They’ve collected data (cited by and citing characteristics of 100 journals in each of three disciplines) and are in the process of data analysis which should be published/available in the late summer or early fall. They used regression analysis (Poisson process).
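
Not their data or their model, obviously, but for anyone curious what a Poisson regression of citation counts looks like in practice, here's a toy sketch in Python (everything here is simulated and the variable names are my own; statsmodels assumed to be installed):

import numpy as np
import statsmodels.api as sm

# Toy data: inbound citation counts per journal-title year, with an indicator
# for online availability and a count of online channels (all simulated).
rng = np.random.default_rng(0)
n = 200
online = rng.integers(0, 2, size=n)      # 1 = that year's content is online
channels = rng.integers(1, 6, size=n)    # number of channels it's available through
quality = rng.normal(2.0, 0.3, size=n)   # stand-in for unobserved journal quality

# Simulate counts with a modest boost for being online.
mu = np.exp(quality + 0.10 * online + 0.02 * channels)
citations = rng.poisson(mu)

# Fit a Poisson regression (a generalized linear model with a log link).
X = sm.add_constant(np.column_stack([online, channels]))
result = sm.GLM(citations, X, family=sm.families.Poisson()).fit()
print(result.summary())

# exp(coefficient) is the multiplicative effect on expected citations;
# exp(0.10) is about 1.11, i.e. roughly an 11% increase.

Reading coefficients that way is what turns model output into a statement like "digitization increased inbound citations by between 7 and 17%."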

Results:
- the higher the frequency of citation, the lower the number of citations in that article (graph).
- digitizing the journal-title years has increased inbound citations by between 7 and 17% (confidence interval)
- the effect grows steadily as the materials are available online longer
- different sources of online availability (channels) offer different effects; e.g. 3-15% increase occurs when there is one channel and 8-18% increase occurs when there are a large number of channels through which a journal is available
- questions raised: Are some channels more effective than others? Do some channels yield more impact? Is wide availability the key?
Results when the data are restricted to 1995-2005 in order to look at effects of/on born digital journals
- there is a strong and significant effect from digitization (but more analysis is needed)
- the publisher web site is not always the optimal distribution mechanism to increase citations
- longer embargos decrease the ability of a given channel to increase citations
- more questions: disciplinary variation? Effects of source item year of publication?


Their preliminary conclusion is that digitization does have a strong and significant effect on scholars’ ability to find and cite relevant references.

He’s obviously passionate about his topic and a very natural speaker, which makes him very engaging. This is a fairly sophisticated research project and he did a very good job of explaining it in terms that were pertinent and understandable to librarians, partly because of the really good questions that the audience asked. This will probably be my favorite session. It would be interesting to see what else Roger and his colleagues have done.

NASIG 2007: Hurry up please. It’s time – State of Emergency … aka The Paranoia Presentation

A library pundit is the best way I can describe Karen Schneider. She is one of those people who are blessed with a quick, sarcastic wit and a well developed intellect to support it. I enjoyed her presentation very much although I’m not entirely sure that I agree with all of her ideas. Please also bear in mind that this was the first session of the second day of the conference and, in addition to not being quite awake yet, I was fretting about the three meetings I had to chair during the rest of the day.

• From the perspective of a writer/essayist, what she calls the “production process of the serials ecology” includes: reflection, research, revision, workshopping, submission, revision, layout, and printing.
• Relevant features of the ecology include: a nominal income for the editor; the author’s compensation is a year’s subscription to the publication, plus the chance to write about a topic that is important to her.
• Memory work: history is built from artifacts as opposed to the memories of the people who lived it. She proposes that librarians’ work is memory work, which gives it a curatorial aspect.
• She quoted Andrew Abbott from his book The System of Professions (which I heartily recommend if you haven’t read it) who says that a profession has (or should have) “complete, legally established control” over its domain. This, she maintains, is the basis of what she calls the ‘state of emergency’ in libraries since our control of collections and collection building (if, indeed, we ever had it) is being eroded or encroached upon by entities outside the profession.
• She maintains that we are particularly susceptible to this in the area of serials, for example with the publishers with whom we’ve made “big deals.”
• Some of the concerns that she’s currently mulling are
o Why are we (libraries) allowing Google to create a proprietary collection of the world’s books? (Apparently Google’s contracts with both the University of California and the University of Michigan include a clause that keeps the institutions from delivering the content that they’ve allowed Google to digitize to anyone other than Google, something I didn’t know.) Same with Microsoft’s book project. AND Google search doesn’t reach the Microsoft book “silo” and vice versa: you can’t access content in Google Books using any other search engine. I find this incredibly worrisome. The Open Content Alliance is a non-proprietary version of the Google book search.
o Why do we (libraries) need to pay an organization an annual fee to give us temporary access on a remote server to the content that we already own? I’d say because our users are requiring us to.
o Why does Time-Warner have to be so greedy? For example, the recent postal rate increase impacts small presses to a much larger extent than it does publishers like Time-Warner, which has negotiated a lower postal rate. This is damaging to the serials ecology.
• Removing information from the public record is a concern of hers that she illustrated with the closing of the EPA libraries, which she sees as part of a larger movement of information being lost from the public/historical record. LOCKSS/CLOCKSS is a library-grown innovation designed to protect the interests of librarianship. There is no license to create a “LOCKSS box”; it’s free, open-source software.
• Lessons:
o The right path is not always instinctive, obvious, or well marked
o ignore the dazzle and read the fine print
o bring our values (as librarians) to the table
o possession IS the law

• Interesting thought: people slam Disney over the 2003 copyright ruling but don’t blink an eye at Apple, which distributes a proprietary sound format for iPods. What makes Apple different from Disney?

Saturday, June 16, 2007

ALA 2007 schedule

After a lot of time examining maps and program details, I think I've finally nailed down my ALA schedule. This is not as easy as it sounds: often there are three or four interesting sessions going on at once, and location and distance between venues must be factored in, as must time to visit with vendors in the exhibit hall AND at least a little sightseeing. Anyhow, here it is...

Saturday, June 23
8 - 10 am -- Informing the future of MARC: an empirical approach
(This one's being given by a library school prof of mine, Bill Moen)
10:30 - 12 noon -- Research: A user experience
12 - 1 pm -- Ebsco Academic Library Luncheon
3:30 - 3 pm -- Information seeking behavior from childhood through college
4 - 6 pm -- either The ALCTS electronic resources pricing discussion group or Utilizing learning theory in online environments depending on where the latter one takes place
7 - ? -- dinner

Sunday, June 24
7:30 - 8:30 am -- Alexander Street Press breakfast
9 - 10:30 -- Exhibits
10:30 - 12 noon -- New minds, new approaches: Juried papers by LIS students
11:30 - 1 pm -- CSA / RefWorks Lunch 'n learn
1:30 - 3:30 -- Eye to I: Visual literacy meets information literacy
3:30 - 5:30 -- National Gallery
6:30 - 8:30 -- Ex Libris customer reception

Monday, June 25
8 - 10 am -- The future of information retrieval
10:30 - 12 noon -- Four star research
11:30 - 1 pm -- ProQuest luncheon
1:30 - 3 pm -- Fresh approaches in service delivery: linking users and services in creative ways
3 - 5 pm -- Exhibits
7 - ? -- dinner

As usual, I'll try to post my thoughts on the sessions I attend, but also as usual, the timing will depend on the availability of internet access and electricity!

Monday, June 11, 2007

"Research Methods in Information", chapter 2

Chapter 2 is all about reviewing the literature and contains a wealth of useful tips for strategically conducting a literature review no matter what level of review one needs to accomplish. The structure of this chapter (and perhaps the whole book, we'll see) is marvelously clear. She sets out the steps/skills/stages (information seeking and retrieval, evaluation, critical analysis, synthesis) and explains the process(es) for each one including some really practical ideas for organizing them.

One of the things I'm finding most exciting and at the same time frustrating about the book so far is the suggested further reading lists at the end of each chapter. Exciting because they contain more information about topics I'm interested in and frustrating because I'll never have time to read them all.

I've been thinking about this last point a bit recently because I've been feeling as if I need to find a workable (for me) way of organizing what I read (as well as what I need/want to read) and have even begun working on creating an Access database as a way to accomplish it. One of the things I'd like to be able to do is trace the network of relationships between documents (this one cited that one, etc.), partly because I think it would be interesting to see and partly because I think it might help me to organize the ideas (which already are too many to keep in my head).
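
To make that concrete, here's the two-table shape I have in mind, as a minimal sketch only (in SQLite rather than Access just because it fits in a few lines of Python; the table and column names are my own working guesses):

import sqlite3

conn = sqlite3.connect("readings.db")
cur = conn.cursor()

# One table for the documents themselves...
cur.execute("""
CREATE TABLE IF NOT EXISTS documents (
    id     INTEGER PRIMARY KEY,
    author TEXT,
    title  TEXT,
    year   INTEGER,
    notes  TEXT
)""")

# ...and one for the 'cites' relationships between them.
cur.execute("""
CREATE TABLE IF NOT EXISTS citations (
    citing_id INTEGER REFERENCES documents(id),
    cited_id  INTEGER REFERENCES documents(id),
    PRIMARY KEY (citing_id, cited_id)
)""")

# Example: Pickard's book cites the Tabatabai and Shore study.
cur.execute("INSERT OR IGNORE INTO documents VALUES (1, 'Pickard, A.J.', 'Research Methods in Information', 2007, '')")
cur.execute("INSERT OR IGNORE INTO documents VALUES (2, 'Tabatabai, D. and Shore, B.M.', 'How experts and novices search the Web', 2005, '')")
cur.execute("INSERT OR IGNORE INTO citations VALUES (1, 2)")
conn.commit()

# 'What does document 1 cite?' -- the kind of tracing I want to be able to do.
for author, title in cur.execute("""
    SELECT d.author, d.title
    FROM citations c JOIN documents d ON d.id = c.cited_id
    WHERE c.citing_id = 1"""):
    print(author, "-", title)

The nice thing about keeping the citation links in their own table is that the same query, run in the other direction, answers "who cites this document?" as well.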

"Research Methods in Information" chapter 1

This chapter introduces the reader to three major research paradigms: positivism, postpositivism, and interpretivism. It contains a brief history of each as well as an overview of qualitative and quantitative research methodologies that compares and contrasts the characteristics of each, particularly the criteria upon which judgments of quality are made.

Thoughts on this chapter:
It is thick with terminology with which inexperienced researchers and students may not be familiar, but that is somewhat offset by the inclusion of those terms in the glossary.

I find myself thinking of it as a textbook for a research methods class. From that perspective, it seems useful.

I like the way she qualifies her brief overviews with repeated suggestions that the interested researcher read further on each topic...and provides recommendations on where to start such reading.

Here's my favorite quote from chapter 1: "Whichever paradigm you associate your research with, whichever methodological approach you take, demonstrating the value of your investigation is essential. This applies to practitioner research and student research: we all want our findings to be believed and are responsible for ensuring that they can be believed." (page 18)

However, I also like this one (on establishing objectivity in quantitative research): "Findings are a result of the research investigation, not a result of the researcher's interpretation of those findings." (page 22)

Thursday, June 07, 2007

"Research Methods in Information"

My latest book to review for LJ is called "Research Methods in Information" by Dr. Alison Jane Pickard and I'm very excited about it. It's a handbook/textbook for those of us working in the information professions which, of course, is right up my alley. So I'm going to try something new here. I'm going to post my notes as I'm reading, more to keep myself organized than for any other reason but also on the off chance that there's anyone out there who shares my interest in research methods who might have a comment or insight that I don't have. Of course, I'll also post a link to the review when it's published.

So, in her introduction, Dr. Pickard lays out the importance of research in the fields of information studies, communications, records management, knowledge management and the related disciplines: (1) research increases the body of knowledge that makes up those professional fields; (2) professionals in those fields need research skills ("Knowledge and experience of research is a fundamental part of what makes the 'information professional' "); and (3) research allows practitioners to continue to grow in their professions as well as to better accomplish their tasks (e.g. benchmarking, assessment, strategic planning, and so on).

Next she describes the framework on which the organization of the book rests, which she calls the research hierarchy: the research paradigm underlies the methodology, the methodology underlies the selection of a research method, and the method underlies the selection of the research technique and instrument.

The research paradigm is the world view or set of underlying assumptions about the world that the researcher starts with. The methodology is either qualitative or quantitative and is distinguished from the research method, which is the strategy or approach to the problem taken by the researcher. The technique is an approach to data collection that is dictated by the research question. And the research instrument is the unique operationalization of the selected technique.

Now, rereading what I've written, I can already make two statements. First, I'm going to try NOT to simply summarize the book here. Rather I'm going to try to limit myself to comments about ideas that jump out at me as being noteworthy in some way. And second, I'm already engaged by and in total agreement with the idea that research is not just the realm of scholars who wish to contribute to a body of knowledge but rather research is accessible and achievable and useful, perhaps even necessary, for professionals in the information professions.

Wednesday, June 06, 2007

Too good not to share

Karen Schneider gave a keynote speech at NASIG last week and, after reading her blog for a couple of days, I am rapidly becoming a fan: http://freerangelibrarian.com/.

Tuesday, June 05, 2007

NASIG 2007: Betting a strong hand in the game of electronic resources management

Paoshan Yue and Liz Burnette
Paoshan and Liz presented two versions of electronic resources workflow in their libraries. Paoshan described the evolution of e-resources workflow at the University of Nevada Reno Libraries and Liz presented a general model for building an e-resources workflow. This presentation was a little weak; the content was a bit too general, and I would have liked more specifics about the actual workflows in their two libraries. However, it did get me thinking that one of the ideas I’ve been applying to web site design would apply equally in this situation. Many of us are trying to fit e-resources into our existing print serials workflows, and that’s not something that we have to be doing (or maybe even should be doing). The session got me thinking about other ways we might organize our e-resources work and other angles from which to approach that question.

See http://www2.library.unr.edu/serials/ERMworkflow.pdf for an example of
UNR's current workflow.

NASIG 2007: Alternatives to licensing of e-resources

Selden Lamoureaux & Zach Rolnik

This session was everything I expected it to be. I KNEW someone was working on this, I just didn’t know who. Now I know. It’s a NISO working group called SERU (Shared E-Resources Understanding) and they seek to find ways for libraries and publishers to come to agreement about the purchase of e-resources without the need for a contract or license.

Their argument goes like this: contracts are a barrier to access. They force both libraries and publishers to expend staff time and effort to negotiate licenses for e-journals and e-resources subscriptions. End-users suffer from the delays in access to information that result from the need to negotiate licenses, and libraries, especially smaller libraries, are put at a financial disadvantage.

Efforts, including SERU, are being made to reduce these costs (for example, by creating a global license). The SERU Working Group has found a fair amount of consensus on many of the issues to be included and has a number of good reasons to believe that it might be a viable alternative.

It’s not a standard license, click-through license, or a replacement for ALL licenses. Instead, it calls for libraries and publishers to agree to accept copyright as the governing law over the provision and use of information services and uses the purchase order to describe the terms of the sale.

ALPSP and SSP both support SERU as do ARL and SPARC.

For more info, and to register as a user, go to www.NISO.org/committees/SERU (note that registration is not open yet but will be soon).

Friday, June 01, 2007

NASIG 2007: Electronic resources workflow management

Paoshan Yue from the University of Nevada, Reno and Liz Burnette from North Carolina State University Libraries presented two models of managing electronic resources workflow and integrating it into existing library workflow. Paoshan focused on technical integration of e-resources into serials workflow by presenting the evolution of UNR's procedures for making e-resources available, from acquisitions to cataloging to accessibility. Their final workflow (well, at least the one in use at present) is presented at http://www2.library.unr.edu/serials/ERMworkflow.pdf.

Liz presented the staffing side of integrating e-resources into serials workflow. She emphasized the need to begin by examining existing procedures and the procedures required for e-resources processing before trying to integrate the two. She also explained an unexpected advantage that they discovered at NCSU: cross-training several staff members to complete each task or step in the workflow reduced the inevitable slowdown in production that occurs when a staff member is away from the library or leaves altogether.

This got me thinking about how we have really just squished e-resources and e-journals into our existing processes at TAMUCC and sparked a desire in me to go back and take a look at what we're doing and why. I was reminded of a point that was made about library web sites: we tend to structure them in a manner similar to the organization of the physical library, and that really doesn't need to be the case. Similarly, I think e-resources workflow does not necessarily need to be patterned on print serials workflow.

NASIG 2007: "What's the different about the social sciences?"

Leo Walford from Sage Publications presented this session, in which he compared the characteristics of social science journals (and of the social sciences and social scientists) with those of science, technology, and medical (STM) journals. Some of the points he made were:
- Social science journals are seen as smaller, less technologically demanding, and not published by large STM publishers. They are, therefore, less expensive.
- How relevant is pricing in the world of big deals? While subscription prices increased between 1988 and 2005, the average price per page actually dropped by about 25% as a result of 'big deals'.
- Social scientists are less aware of the opportunities afforded by open access than are STM scholars, but they share with them a trend toward fewer visits to the physical library.
- (this is the point I found most interesting) Since the social sciences receive dramatically less grant funding compared to STM, when they apply the standard 1 to 2% of grant funds to paying for open access to their research publications, they don't end up with enough to support the author-pays model of open access that is becoming prevalent in STM publishing.
- In addition, social science journals have a longer shelf-life (meaning they are useful/cited for a longer period of time, in general, than STM journals), which leads publishers to impose longer embargos on their content, which makes the failure of the author-pays model of open access that much more of a problem.

NASIG 2007 conference: opening session

I'm attending the North American Serials Interest Group (NASIG) meeting this week in Louisville, KY. In addition to fulfilling a number of organizational duties (committee work, etc.), I'll be attending a number of workshops and presentations, which I'll be reporting on here.

This morning the first session was an all-conference session at which Bob Stein spoke about "The Evolution of Reading and Writing in the Networked Era". He has some very interesting (and, I think, controversial, at least among librarians) ideas. His main point was that what have existed as marginal notes in paper books for hundreds of years are actually conversations between the author and the reader (as well as between readers) that are very much like comments on a blog or the open peer review that some pre-publications go through.

Monday, May 28, 2007

An odd convergence of events

I finished reading a book this weekend about the Scopes Monkey Trial. It's one I'm reviewing for Library Journal, so when the review is published I'll try to remember to add the link to it. In the meantime, though, what really stuns me about it is the way individuals, serving their own purposes for the most part, just happen to spur events that impact a whole nation, with repercussions that reverberate for years.

The Scopes Monkey Trial happened because the town fathers in a small Tennessee town sought to boost the town's economy. At the time, it was against state law to teach the Darwinian theory of evolution in public schools, so they encouraged a young high school teacher named John Scopes to allow himself to be indicted for breaking this law by teaching evolution in the local high school. The ACLU leapt to defend him, and thus began one of the most widely followed trials of the early 20th century.
