Guest Post from Diana Almader-Douglas: Raising Librarians’ Awareness about the Importance of Culture in Health Literacy

This isn’t something I’ve done before, but one of my colleagues, Diana Almader-Douglas, has spent the last six-plus months updating some excellent resources on culture and health literacy at the National Library of Medicine. Diana is incredibly knowledgeable about these issues, and she asked if I would be willing to let her write a short post for my blog. You can read the post in its entirety below; it is full of useful information on this issue, especially for health sciences librarians. One disclaimer: the post focuses on issues in the US, but I think the points about culture and health literacy presented here apply to Canada as well. Enjoy!

Diana Almader-Douglas:

Through a National Library of Medicine Associate Fellowship Project, I evaluated and enhanced the National Network of Libraries of Medicine’s (NN/LM) Health Literacy resource by adding content and resources related to culture in the context of health literacy.

By providing information about the relationship between culture and health literacy, this widely used resource can reach a broader audience and encourage librarians and information professionals to disseminate culturally relevant health information.

Through this project, I aimed to raise awareness about vulnerable and special populations while highlighting the connection to health disparities and health literacy.

Culture is only one component of health literacy, but it is a critical one: culture shapes communication, beliefs, and the comprehension of health information. By enhancing the NN/LM Health Literacy Web page with content about health literacy in a cultural context, users of the page will be better equipped to meet the health information needs of the vulnerable and diverse populations they serve.

For more information about culture and health literacy, visit:

Benjamin RM. Improving Health by Improving Health Literacy. Public Health Rep. 2010 Nov-Dec;125(6):784-785. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2966655/pdf/phr125000784.pdf

United States Department of Health & Human Services. Health Resources and Services Administration (HRSA). Culture, Language and Health Literacy. Available from: http://www.hrsa.gov/culturalcompetence/index.html

United States Department of Health & Human Services. National Library of Medicine Specialized Information Services Outreach Activities & Resources. Multi-cultural Resources for Health Information. Available from: http://sis.nlm.nih.gov/outreach/multicultural.html

Thanks for reading. I hope health sciences librarians will find this information useful. To add a bit of Canadian content, I have included some Canadian health literacy resources below – many of which could use the cultural focus that Diana has implemented for the NN/LM:

Canadian Public Health Association Health Literacy Portal: http://www.cpha.ca/en/portals/h-l.aspx

Canadian Council on Learning. Health Literacy in Canada: A Healthy Understanding: http://www.ccl-cca.ca/ccl/Reports/HealthLiteracy.html

Health Literacy Council of Canada: http://healthliteracy.ca/

Public Health Agency of Canada: http://www.phac-aspc.gc.ca/cd-mc/hl-ls/index-eng.php

Podcast on Health Literacy and Cultural Competence. Centre for Literacy: http://www.centreforliteracy.qc.ca/news/podcast-health-literacy-and-cultural-competence 


Concerning the deal between LAC and Canadiana: We ask for transparency

I thought I would take this opportunity to weigh in on the deal between Library and Archives Canada and Canadiana, which calls for the transfer and digitization of the largest collection of Canadian archival records in history. I want to make it clear that, in the grand scheme of things, I think this project is a very good thing for archives in Canada, and it is long overdue. What worries me is that the details surrounding the deal remain largely unclear, and I think it is important for us, as Canadian archivists and librarians, to ask specific questions to ensure that this heritage collection is safe and will ultimately be freely available to all Canadians who want to view it.

Canadiana has already tried to quell some of the hysteria surrounding the deal with their recently published FAQ, but, if I’m honest, many of my questions remain unanswered. I even asked Canadiana on Twitter the other day to clarify the ‘Premium’ payment that would be required for access to the search and discovery features they will be developing, but I have yet to hear a reply. I think this line from the FAQ deserves a more detailed explanation:

Until the completion of the project, this searchable, full-text data will be one of the premium services.

Does this mean that once the project is completed everyone will have free access to these features? If this is only one of the premium features, what else will we miss out on if we don’t pay? These are just some of my questions about the deal. More importantly, I think it is crucial that we start asking those involved (CRKN, CARL, LAC, Canadiana) how they plan to manage, describe and preserve this enormous amount of information, and how they will make sure it remains available to Canadians for years to come. Many of these questions have been discussed in Bibliocracy’s blog posts on the issue, but I would like to reiterate them, and I ask that the library and archives community put their own questions to Canadiana and LAC in the hope of drawing out more details about the project. To start things off, I have outlined below the questions I would like to have answered:

How will this information be stored, and consequently transferred back to LAC once the full digitization process is complete?

Information architecture is obviously a crucial component of this project, as the collection will need to be stored someplace where it can be accessed by all. Even more important, though, is how all of this content will be transferred back to LAC. There are many methods and avenues this project could take in terms of placing the material in a repository or content management system, and I think both parties owe us an explanation of how this work will be completed. Will Canadiana use something like CLOCKSS to ensure that this material is preserved and made freely available forever? Or will that be LAC’s responsibility once the project is done? I would like some assurance that the digital documents can be migrated smoothly back to LAC when this is over. Which brings me to my next question:

What measures will be taken regarding the digital preservation of the finalized, newly described content?

I’m hoping that the responsibility of managing Canada’s largest archival collection will spur Canadiana to take measures to ensure the preservation not only of the physical content but of the newly digitized content as well. I would like to know where they plan to store all of this information – will copies be held in a dark archive to ensure long-term preservation? Will they follow the Open Archival Information System (OAIS) reference model? Will they use the Trusted Digital Repository model? It would be nice to see something akin to a Trustworthy Repositories Audit and Certification (TRAC) so that Canadian information professionals can feel confident that the proper steps are being taken to preserve this digital content.
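None of this is specified in the deal, but to make the question concrete, here is a minimal sketch (in Python) of one basic preservation measure – fixity checking, i.e. verifying stored files against known checksums. The directory layout and manifest format are my own assumptions, not anything Canadiana or LAC have announced.

```python
# A minimal fixity-audit sketch. Illustrative only: the archive layout and
# manifest format below are hypothetical assumptions, not part of the deal.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def audit(archive_dir: str, manifest_file: str) -> list[str]:
    """Return files whose current checksum no longer matches the manifest."""
    # Assumed manifest format: {"relative/path.tif": "<sha256 hex>", ...}
    manifest = json.loads(Path(manifest_file).read_text())
    failures = []
    for rel_path, expected in manifest.items():
        if sha256_of(Path(archive_dir) / rel_path) != expected:
            failures.append(rel_path)
    return failures


if __name__ == "__main__":
    print(audit("archive", "manifest.json"))
```

Regular audits like this, with results reported publicly, would be one small, verifiable signal that long-term preservation is being taken seriously.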

What type of metadata schemas will be used?

This one is pretty self-explanatory, but seeing as this is a Canadian initiative, one would assume that Canada’s Rules for Archival Description (RAD) will be used. And given how prominent linked data has become of late, does Canadiana have plans to use RDF to encourage and support linked data within this collection? Because one of the main goals of this project is to make the content more discoverable and searchable, I think it would be helpful for us to understand how all of this transcription and metadata tagging will take place.
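To make the linked data question concrete, here is a rough sketch of what an RDF description of a single archival item could look like, built with Python’s rdflib and Dublin Core terms. The identifier, the field values, and the choice of Dublin Core are all my own assumptions – nothing here reflects Canadiana’s actual plans.

```python
# A hedged sketch of linked-data description for one archival item.
# All identifiers and values are invented for illustration.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
item = URIRef("http://example.org/heritage/item/0001")  # hypothetical identifier

g.add((item, DCTERMS.title, Literal("Correspondence, 1891-1903")))
g.add((item, DCTERMS.creator, Literal("Example fonds creator")))
g.add((item, DCTERMS.type, Literal("textual record")))
# Linking the item to its parent fonds is where linked data earns its keep:
g.add((item, DCTERMS.isPartOf, URIRef("http://example.org/heritage/fonds/42")))

print(g.serialize(format="turtle"))  # emits the description as Turtle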

What do you really mean when you say that all of the content will be open access?

When I hear the term open access used to describe information content, I always get excited. If this effort is truly going to make all of this digitized archival material open access, then that is fantastic. With this deal, however, the way open access is being described has me scratching my head. For a definition of open access, I like to use SPARC’s, which describes (in a nutshell) material that has:

immediate, free availability on the public internet, permitting any users to read, download, copy, distribute, print, search or link to the full text of collections, crawl them for indexing, pass them as data to software or use them for any other lawful purpose

There have been a lot of discussions around Canadiana’s statement that they will be making the digital content available for free via a Creative Commons license. What I don’t understand is why, in order to access certain features of this content, you will have to pay a premium fee. That doesn’t sound very open access to me, and a simple clarification would go a long way. Which leads me to:

Can you please elaborate on the fees that are involved with premium access, and how this will work with the 10% of digital material released per year for 10 years?

This question has been on my mind since I first heard about the deal (as I described above). What I would like to know is how this premium fee will work: What will it cost? What features are involved? Will the premium features become freely available as each 10% of the digitization process is completed?

I understand that creating high-quality descriptive metadata for digitized material costs money. I don’t have much of a problem with that; what worries me is that these details have not been provided to us. By not answering this one glaring question, Canadiana has made me nervous that I, or my institution, will have to pay for this content over the long term. How do I know that these charges won’t continue once the project is finished?

What experts are going to be consulted for this project?

I know that CRKN and CARL have both supplied money for this project, but it would be very comforting to know that highly skilled, expert personnel will be working on it. As a librarian and archivist, I want this effort to succeed at the highest level. In order to feel confident that this will be the case, I think it would be wise to inform the library and archival community in Canada as to who will be advising the effort. I always like specifics, and knowing that the best people are involved will go a long way towards easing my mind.

In the end, all I’m asking for is a little bit of transparency. This project will affect a huge number of information professionals, researchers, and members of the general public. I think the project shows a lot of promise and should be a cause for excitement amongst the Canadian information community. However, until Canadiana or LAC provide specifics about the deal, I will be holding back my excitement. The lack of explanation and the vagueness surrounding this project should concern everyone. Ultimately, I don’t think an open and transparent explanation of a project that affects so many Canadians is too much to ask for.

I encourage other Canadian archivists and librarians to ask their own questions about this deal through blogs, social media, or email in hopes that it will generate enough demand that Canadiana and LAC will have to respond. I am only a small voice in this, and it would be great to see others get involved. Using #heritagedeal on Twitter could help synthesize all of this information in one place.

Thanks for reading.

Data Publishing: Who is meeting this need?

I realize I haven’t written a post in over a month, and I feel horribly guilty about it. The one good thing about not having the time to write blog posts frequently is that I now have a stockpile of ideas, and plenty of material to write more frequent posts.

What I would like to address in today’s post are some of the efforts that journals, government agencies, and open source communities have made to address the need to publish data, in all of its messy and intricate formats. As in my previous posts, I will describe the efforts I find most promising in terms of their ability to tackle this massive and complicated task. In case readers are unfamiliar with the concept of a data publication, I define it based on a hybrid of viewpoints from papers by Borgman, Lynch, Reilly et al., Smith, and Whyte:

A data publication takes data that has been used for research and expands on the ‘why, when and how’ of its collection and processing, leaving the account of the analysis and conclusions to a conventional article. A data publication should include metadata describing the data in detail: who created it, what type of data it is, how it has been versioned, and, most importantly, where it can be accessed (if it can be accessed at all). The main purpose of a data publication is to provide enough information about the data that it can be reused by another researcher in the future, and to provide a way to attribute the data to its creator. Knowing who created the data adds a layer of transparency, as researchers are held accountable for how they collect and present it. Ideally, a data publication would be linked with its associated journal article to provide more context about the research.
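To make that definition concrete, here is an illustrative sketch of the minimum metadata a data publication might carry, written as a plain Python structure. The field names are my own invention, not any journal’s or repository’s actual schema, and all values are made up.

```python
# A hypothetical data publication record, following the definition above.
# Field names and values are invented for illustration only.
data_publication = {
    "title": "Water-quality sensor readings, Lake Example, 2010-2012",
    "creators": ["Researcher, A.", "Researcher, B."],   # who created the data
    "description": "Why, when, and how the data were collected and processed.",
    "version": "1.2",                                   # versioning of the data
    "identifier": "doi:10.0000/example.1234",           # persistent identifier (hypothetical)
    "access_url": "http://repository.example.org/datasets/1234",  # where to get it
    "license": "CC0",
    "related_article": "doi:10.0000/example-journal.5678",  # article with the analysis
}
```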

With all that being said, let’s take a look at some of the efforts that currently exist in the data publishing realm.

Nature Publishing Group – Scientific Data


Scientific Data is the first of its kind: an open access, online-only publication specifically designed to describe scientific data sets. Because describing scientific data can be complicated and exhausting, this publication does an excellent job of addressing all of the questions that need to be asked of researchers before they even think of submitting their data. Scientific Data just came out with its criteria for publication today, and the questions it asks are exactly what is needed to ensure that a data publication can be reused through appropriate description.

Then comes the next great component – the metadata. Scientific Data uses a ‘Data Descriptor’ model that requires narrative content about a data set, including the traditional descriptors librarians are familiar with, such as Title, Abstract and Methodology. What is excellent about the Data Descriptor model is that it also requires structured content about the data. This structured content uses the open source ‘Investigation’, ‘Study’ and ‘Assay’ (ISA) metadata format to describe aspects of the data in detail. These major categories are designed to be ‘generic and extensible’, and serve to address all scientific data types and technologies. You can read more about the ISA framework on its project site.
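To give a feel for the ISA categories, here is a rough sketch of the Investigation/Study/Assay hierarchy as nested Python dictionaries. This mirrors the structure conceptually only – it is not the actual ISA-Tab file format, and every value is invented.

```python
# A conceptual sketch of the ISA hierarchy: one Investigation contains
# Studies, and each Study contains Assays. Not the real ISA-Tab format.
investigation = {
    "identifier": "INV-0001",
    "title": "Effects of temperature on enzyme activity",
    "studies": [
        {
            "identifier": "STU-0001",
            "description": "Subjects, sample collection, and study design.",
            "assays": [
                {
                    "identifier": "ASY-0001",
                    "measurement_type": "enzyme kinetics",   # what was measured
                    "technology_type": "spectrophotometry",  # how it was measured
                    "data_files": ["assay_0001_raw.csv"],    # the underlying data
                }
            ],
        }
    ],
}
```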

Overall I think that Scientific Data marks the beginning of a new trend in publishing, where major journals will publish data publications more frequently alongside traditional research articles. This publication is a first step towards making research data available, reusable and transparent within the scientific research community.

F1000Research – Making Data Inclusion a Requirement


F1000Research is an excellent new open science journal that has caught my attention for its foray into systematic reviews and meta-analyses, and for its recent ‘grace period’ encouraging researchers to submit their negative results for publication. I think this is a publication medical librarians should be aware of, and one they might encourage researchers to submit to if they are looking for a more frugal option. What really impresses me about F1000Research, though, is its commitment to ensuring that the data associated with research articles is made readily available.

Currently, F1000Research reviews data that is submitted in conjunction with an article, and then offers to deposit the data on the author’s behalf in an appropriate repository. The journal is open to placing data in any repository, but it works mainly with figshare – a popular platform for sharing data. Together, figshare and F1000Research have created a ‘data widget’ that allows figshare to link data files with the associated article in F1000Research – which is excellent! A recent blog post gives the widget the attention it deserves (http://blog.f1000research.com/2013/05/23/new-f1000research-figshare-portal-and-widget-design/). F1000Research is also apparently working on a similar project with Dryad. I think that moving forward we will see more efforts from journals like F1000Research to seamlessly connect their publications with associated data. This is a crucial component of publishing data, as the journal article provides the context for how the data was used.

Dryad – Integrated Journals


Dryad is a data repository and service that offers journals the option of submission integration with their systems. The service is completely free and is designed to simplify the process of submitting data and to ensure bidirectional links between the article and the data. Currently Dryad provides an option for data to be opened up to peer review, but I would like to see that become more of a requirement going forward. Here is a link to Dryad’s journal integration page: http://datadryad.org/pages/journalIntegration

There are a number of journals currently participating in this effort; a complete list can be found on the journal integration page linked above. Carly Strasser also did a great job of outlining other journals that require data sharing in a post on the excellent blog Data Pub. I think Dryad is a perfect example of the other side of traditional publishing. We need data repositories like Dryad and figshare to continue supporting data publication and storage, as they represent the half of the picture that allows articles and data to be connected.

The Dataverse Network

The Dataverse Network is a data repository designed for sharing, citing and archiving research data. Developed by the Data Science team at Harvard’s Institute for Quantitative Social Science, Dataverse is open to researchers in all scientific fields. As a service, Dataverse organizes its data sets into studies; each study contains cataloguing information along with the data, and provides a persistent way to cite the data that has been deposited.

Dataverse also uses Zelig (an R statistical package) to provide statistical modeling of the data that is submitted. Finally, Dataverse can be installed as software within an institution’s own data repository. I see the ability to download Dataverse for institutional purposes as an excellent prospective strategy; as more academic institutions add data storage capabilities to their institutional repositories, Dataverse will provide some much-needed assistance in this arena.

GitHub: Git for Data Publishing


Although I would not call myself an expert on the GitHub world, I will say that I recognize a fruitful initiative to publish data when I see one. In a recent blog post, James Smith talks about how the tools of open source could potentially revolutionize open data publishing. The post is great and you can read it here: http://theodi.org/blog/gitdatapublishing. James’ idea is to upload data to GitHub repositories and use a Data Package to attach metadata that sufficiently describes the data. Ultimately, using GitHub for data publication would enable the sharing and reuse of data within a supportive and collaborative community. While some of this can get complicated, working through the links in his post really gives you a sense of how an open source community is coming together to address the need to publish data.
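For a sense of what this looks like in practice, here is a minimal sketch of the datapackage.json metadata file that the Data Package approach relies on, generated with Python. The keys follow the draft Data Package specification as I understand it; the dataset, its fields, and the license choice are all invented for illustration.

```python
# A minimal, hypothetical datapackage.json: machine-readable metadata that
# sits in the root of a GitHub repository alongside the data files.
import json

package = {
    "name": "example-water-quality",
    "title": "Water quality readings, Lake Example",
    "licenses": [{"id": "odc-pddl", "url": "http://opendatacommons.org/licenses/pddl/"}],
    "resources": [
        {
            "path": "data/readings.csv",   # the data file this metadata describes
            "schema": {
                "fields": [
                    {"name": "date", "type": "date"},
                    {"name": "site", "type": "string"},
                    {"name": "ph", "type": "number"},
                ]
            },
        }
    ],
}

with open("datapackage.json", "w") as f:
    json.dump(package, f, indent=2)
```

Because the file lives in the repository itself, the metadata is versioned, forked, and merged with exactly the same tools as the data – which is the heart of James’ argument.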

Biositemaps


Biositemaps is a working group within the NIH whose tools are designed to support:

(i) locating, (ii) querying, (iii) composing or combining, and (iv) mining biomedical resources

‘Biomedical resources’, in this case, can be anything from data sets to software packages to computer models. What is most interesting about Biositemaps is that it provides an Information Model outlining a set of metadata that can be used to describe data. On top of the Information Model, it uses the Biomedical Resource Ontology (BRO), a controlled terminology for ‘resource_type’, ‘area of research’, and ‘activity’ that helps describe, in detailed biomedical terminology, how data is used. I will admit this resource is still pretty raw, but I think it has a lot of potential to become an excellent resource moving forward. The basic idea behind Biositemaps is that a researcher fills in a lengthy auto-complete form describing themselves, their data, and the methodology used to create the data. Once the form is complete, it produces an RDF file that is uploaded to a registry, where it can be linked to from anywhere. If you are a medical librarian and you have researchers interested in publishing data, I encourage you to take a look at this resource.
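As a speculative illustration of the kind of RDF such a form might emit, here is a short rdflib sketch. The BRO namespace URI and property names below are placeholders I invented for this example – consult the actual Biositemaps Information Model for the real terms.

```python
# A speculative sketch of a Biositemaps-style RDF record, built with rdflib.
# The namespace URI and property names are placeholders, NOT the real BRO.
from rdflib import Graph, Literal, Namespace, URIRef

BRO = Namespace("http://example.org/bro#")  # placeholder namespace

g = Graph()
g.bind("bro", BRO)
resource = URIRef("http://example.org/resources/enzyme-kinetics-dataset")

g.add((resource, BRO.resource_type, Literal("Data set")))        # what it is
g.add((resource, BRO.area_of_research, Literal("Biochemistry"))) # research area
g.add((resource, BRO.activity, Literal("Data analysis")))        # how it is used

# Serialize to an RDF/XML file of the sort uploaded to the registry.
g.serialize(destination="biositemap.rdf", format="xml")
```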

SHARE Program – Association of Research Libraries (ARL), Association of American Universities (AAU), the Association of Public and Land-grant Universities (APLU)

This effort just came out last week: the ARL, AAU and APLU are joining together to create a shared vision of universities collaborating with the federal government and others to host institutional repositories across their memberships, providing public access to research – including data. While it is not entirely clear how this will be achieved – especially in the realm of data – I think this is the type of collaboration that will produce a well-researched, evidence-based solution moving forward. I hope that SHARE continues to expand beyond its response to the OSTP memo, as I think Canadian academic institutions could benefit greatly from this effort. Here is a link to the development draft for SHARE: http://www.arl.org/storage/documents/publications/share-proposal-07june13.pdf

For Medical Librarians

My goal in presenting these data publication efforts is to get medical librarians thinking more about the options available for data publication. Journals, government agencies and open source communities are all trying to address the issues surrounding data publication, and I think it is our duty as medical librarians to familiarize ourselves with journal policies around data sharing; with data publication initiatives like DataCite, Dryad, and figshare; and with new government efforts like Biositemaps. These are becoming more heavily used every day and will be relevant to our liaison and research areas of practice moving forward. I have tried to provide a lot of links within this post, but I’ve included some further reading below that may be useful. This is by no means an exhaustive list – just some of the interesting efforts I’ve seen throughout my work with data. Please feel free to add more in the comments section.

Readings/References

1. Borgman CL, Wallis JC, Enyedy N. Little science confronts the data deluge: habitat ecology, embedded sensor networks, and digital libraries. International Journal on Digital Libraries [Internet]. 2007;7:17–30. Available from: http://escholarship.org/uc/item/6fs4559s

2. Lynch C. The shape of the scientific article in the developing cyberinfrastructure. CT Watch Quarterly [Internet]. 2007;3(3):5–10. Available from: http://www.ctwatch.org/quarterly/articles/2007/08/the-shape-of-the-scientific-article-in-the-developing-cyberinfrastructure/  

3. Piwowar H, Chapman W. A review of journal policies for sharing research data. Nature Precedings [Internet]. 2008. Available from: http://www.academia.edu/904922/A_review_of_journal_policies_for_sharing_research_data

4. Reilly S, Schallier W, Schrimpf S, Smit E, Wilkinson M. Report on Integration of Data and Publications [Internet]. 2011: p. 1–7. Available from: http://www.alliancepermanentaccess.org/wp-content/uploads/downloads/2011/10/ODE-ReportOnIntegrationOfDataAndPublications-exesummary.pdf  

5. Smith VS. Data publication: towards a database of everything. BMC research notes [Internet]. 2009 Jan [cited 2013 Mar 3];2:113. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2702265&tool=pmcentrez&rendertype=abstract  

6. Whyte A. IDCC13 Data Publication: generating trust around data sharing. Digital Curation Centre [Internet]. 2013 Jan 23; Available from: http://www.dcc.ac.uk/blog/idcc13-data-publication-generating-trust-around-data-sharing