UW-Madison Research Data Services (http://researchdata.wisc.edu)

Andrew Johnson and Municipal Open Data at IASSIST 2015
Fri, 24 Jul 2015 14:48:51 +0000

By Trisha Adamus, Data, Network, and Translational Research Librarian at Ebling Library

Minneapolis Skyline

This photo, “Minneapolis Skyline,” is copyright (c) 2011 Mike Appel and made available under an Attribution-NonCommercial-NoDerivs 2.0 Generic license.

In June 2015, I attended the International Association for Social Science Information Services and Technology (IASSIST) annual conference in Minneapolis, Minnesota. As you can imagine from the conference theme, “Bridging the Data Divide: Data in the International Context,” many speakers and presentations provided valuable information on managing data. Andrew Johnson, elected to the City Council of Minneapolis in 2013, ran on a platform of an Open Data Policy and implemented that policy in July 2014. His plenary talk at the IASSIST conference focused on the dynamics and challenges of policy creation and passage, along with a discussion of next steps for open data in Minneapolis.

To start, let’s define open data: “data that can be freely used, re-used and redistributed by anyone – subject only, at most, to the requirement to attribute and sharealike.” The two dimensions of data openness are:

  1. The data must be legally open, which means they must be placed in the public domain or under liberal terms of use with minimal restrictions.
  2. The data must be technically open, which means they must be published in electronic formats that are machine readable and preferably non-proprietary, so that anyone can access and use the data using common, freely available software tools. Data must also be publicly available and accessible on a public server, without password or firewall restrictions.
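As a toy illustration of the second point, here is a short Python sketch that reads a machine-readable CSV extract using only free, standard tools; the dataset itself is invented for the example.

```python
import csv
import io

# A hypothetical sample of a machine-readable open data extract (CSV).
# Any portal export in a plain-text, non-proprietary format can be
# parsed the same way with freely available tools.
sample = """neighborhood,year,permits_issued
Longfellow,2014,112
Whittier,2014,98
"""

reader = csv.DictReader(io.StringIO(sample))
rows = list(reader)

# Each row is now an ordinary dictionary, usable by anyone
# without proprietary software.
total_permits = sum(int(row["permits_issued"]) for row in rows)
print(total_permits)  # 210
```

The same few lines work whether the file comes from a city portal, a repository, or a colleague, which is exactly the point of technically open data.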

As you might imagine, creating policy to allow Minneapolis city data to be open represents a significant step forward in the accessibility of Minneapolis city government. The City Council of Minneapolis, including Andrew Johnson, and other supporters of the Open Data Policy consider the policy a necessary evolution of the city’s work via innovation, engagement, trust, and collaboration in the 21st century.

The purpose of the policy is to set guidelines for incorporating an open data framework into existing and future systems and procedures, and to aid in determining which data sets should be made public, how to make them public, and how to maintain the published data sets. To highlight a few of the Open Data Policy’s objectives: the policy requires the creation of a city Open Data Portal within 120 days of passage, establishes an Open Data Advisory Group to coordinate open data activities, requires an annual Open Data Compliance Report, and provides a set of guidelines for IT and other city departments’ responsibilities pertaining to open data.

I have to say that I was surprised and intrigued at the notion of a politician speaking at a data-centric conference. As it so happens, I found Andrew Johnson’s plenary talk to be extremely relevant to the data community gathered at IASSIST. The Minneapolis Open Data Policy creates awareness of the possibilities and successes of open and transparent government, and provides a model for other government entities to create their own more open and transparent governments. Minneapolis was the 16th city in the United States to implement an open data policy.

References and Related Links

  • Open Data Definitions
  • Minneapolis Open Data Policy
  • Minneapolis Open Data Portal
  • Andrew Johnson’s blog post on Open Data and Open Government
  • Open Twin Cities blog post on the Minneapolis Open Data Policy

Join RDS at the UW-Madison Open Meetup!
Tue, 14 Jul 2015 17:59:11 +0000

Banners celebrating 100 years of the Wisconsin Idea adorn the exterior of Bascom Hall at the University of Wisconsin-Madison on Aug. 5, 2011. (Photo by Bryce Richter / UW-Madison)

Are you interested in open data, open access, and open educational resources? Or do you just want to learn more about what those terms mean? Join RDS this Thursday at the second meeting of the UW-Madison Open Meetup!

What is UW-Madison Open Meetup?

The meetup happens every third Thursday at 12:30 PM and is a space for campus discussion around these “openness” topics. Currently, the meetings focus on building relationships and sharing information, interests, and experiences. As the community grows, there is room for the meetings and shared resources to take a more focused shape.

We’re pretty excited for these meetups here at RDS – “openness” is key to sharing knowledge and moving research forward. We hope you’ll join the conversation!

The details:

When: Thursday, July 16th from 12:30 – 1:30 PM

Where: Wisconsin Idea Room – Room 159, Education Bldg (on Bascom Mall)

Get to Know the RDS Team: Erin Carrillo
Tue, 14 Jul 2015 14:46:42 +0000

In this series, we introduce the team members who make up Research Data Services (RDS). This interview is with Erin Carrillo, RDS team member and Information Services Librarian at Steenbock Memorial Library.


Describe your role at Steenbock Library.

I’m an information services librarian, so I answer questions, teach library instruction sessions, and am the library liaison to Plant Sciences, the Nelson Institute, Zoology, Botany, Plant Pathology, and Entomology.

What’s the most interesting project you’ve worked on recently?

In November, RDS held a two-day data management workshop for graduate student researchers. Participants were from several departments across campus, including Limnology, Entomology, Forest and Wildlife Ecology, Geography, and the Nelson Institute for Environmental Studies, and were part of a cohort of graduate students doing research in the area of biodiversity conservation, funded by an NSF Integrative Graduate Education and Research Traineeship grant. We planned the workshop with two graduate students, who saw a need to provide new researchers with the knowledge and skills to navigate the changing research data landscape. The workshop addressed several broad topics within data management, but content was tailored to the specific needs of the group.


What excites you about supporting research data management on campus?

I’m excited that funders and publishers are increasingly requiring data sharing and open data. There are so many benefits to sharing data to both researchers and the public, such as increasing recognition and visibility, and accelerating discovery. I enjoy advocating for data sharing, and helping researchers make their data available for reuse.

If you had an unlimited budget, what would you institute on campus?

A for-credit data management course that all incoming graduate students are required to take. From funder and publisher requirements for data management plans and data sharing, to the ongoing development of metadata standards and discipline-specific data repositories, researchers need to be aware of trends within their discipline and practice good data management from the outset.

Do you have a favorite UW building or landmark?

I love the Allen Centennial Gardens during the spring and summer. It’s relaxing to sit and watch the koi swim around the pond.

What do you like to do outside of work?

I like to run, sew, and binge-watch TV shows on Netflix. I also recently started taking trapeze classes. My photo shows me running my first Ragnar Relay from Madison to Chicago.

Do you have a question for Erin or the rest of the RDS team? Contact us today.

Get to Know the RDS Team: Luke Bluma
Wed, 27 May 2015 14:28:49 +0000

In this series, we introduce the team members who make up Research Data Services (RDS). This interview is with Luke Bluma, RDS team member and Engagement Manager for the Campus Computing Infrastructure (CCI) initiative.

Describe your role with CCI.

I am the Engagement Manager for the Campus Computing Infrastructure (CCI) initiative. CCI is a campus sponsored and governed initiative that delivers shared, scalable, secure IT infrastructure services to campus partners at UW-Madison. Services include data center management, server hosting, storage, and backup. My role is all about building relationships, learning how departments on campus do what they do, and gathering requirements on how shared IT infrastructure services may be able to help them out. My main focus over the last couple of years has been file storage.

What’s the most interesting project you’ve worked on recently?

I recently got to work with a faculty researcher who had some storage needs. He is going to be utilizing the computing resources through the Advanced Computing Initiative (ACI) and needed a place to store the research data after the computations were complete. I was able to meet with him, learn a little about his research, identify his storage needs, and set him up with our scalable, affordable network storage service.

What excites you about supporting research data management on campus?

I love supporting research data management on campus because in the past my role has been mostly focused on administrative data, and while administrative data is critical to our campus, it isn’t always as exciting as research data, in my opinion. I love being able to provide the platforms (virtual servers, storage, backup) that allow researchers to innovate. Working for UW-Madison is great, and being able to help support the research we do, in some small way, makes that even better!

If you had an unlimited budget, what would you institute on campus?

Free Babcock ice cream for all! In every building, day or night! (I wish!)

If I had an unlimited budget, I would re-think how we provide IT services on campus. I would work with campus to identify what core IT services should be provided by the University at no cost. This might include things like networking, virtual and physical servers, storage for your group, backup for your data and computers, etc. This would be a tremendous undertaking and would require a huge investment, but it would allow researchers and departments on campus to focus less on IT infrastructure (like running their own server room or storage array) and focus even more on their missions!

In addition to that, since I have an unlimited budget, I would also establish a group that would be available to facilitate access to these services. A group of people that could meet with you in person, learn about your work, identify potential solutions and help you get started. Having free tools is great, but it’s even better when someone is available to show you how to utilize them in the best ways possible.

Do you have a favorite UW building or landmark?

This was a tough question. I’m lucky because in my role I get to roam around campus a lot and see a lot of different buildings. I love the tall buildings because you get some spectacular views of downtown Madison from way up there. However, if I have to pick just one, I’d have to go with the Memorial Union. It gave me so many great memories during my undergraduate years here at UW-Madison – from studying in Der Rathskeller to enjoying a beer on a sunny afternoon at the Terrace. And recently one of my best friends got married there, so the memories just keep adding up!

What do you like to do outside of work?

I love to golf! However, I should be honest here… while I do love to golf, I’m not very good at it. I was on the golf team in high school because it allowed me to play golf after school for free, not because I was a great golfer. I love being able to get outside on a sunny Sunday afternoon and play 18 holes with some friends, even if I spend a lot of the time in the woods looking for my ball.

Do you have a question for Luke or the rest of the RDS team? Contact us today.

NADDI Reflections [part 1]
Wed, 20 May 2015 17:25:36 +0000

Evan (L) and Morgaine (R)

This post on NADDI 2015 was written by Morgaine Gilchrist Scott, one of two recipients of an RDS student scholarship. Read Evan Meszaros’ reflection.

In my past life, I was a public health researcher. In my current one, I’m a first-year SLIS graduate student. I’m amazed and appalled at the data I once lost for the sake of convenience. I don’t think we knew about (or cared about) anything better than the proprietary format that met our immediate needs perfectly. I just looked up the software, and it’s already dead.

Have you ever heard of the Överkalix study? It’s often cited as the seminal study in epigenetics. Scientists were able to discover things like a greater BMI at age 9 in the sons (but not the daughters) of fathers who began smoking early, and that a granddaughter’s risk of cardiovascular mortality increased when there was a sharp change in food availability for her paternal grandmother.

But HOW were researchers able to conclude these things? Data. Old data. Old, easily interpretable data. Scientists looked at records from 1890, 1905, and 1920 on birthrates and various environmental factors and were able to follow up with children and grandchildren. Those records were kept on paper, in a safe place, and in the same language used today. But in today’s digital age, we may be depriving future generations of drawing similarly groundbreaking conclusions from the data collected now.

We’re producing data at a greater rate than ever before, and who knows what could be useful in the future. But with poor metadata and the use of proprietary formats, we’re also losing more than ever. Fortunately, the good people involved with the Data Documentation Initiative are working toward a world where that won’t happen. I learned about so many easy, free, and important tools at NADDI. I can’t wait to implement them in my own research.

Now, you’ve missed the conference. That’s a shame, but we won’t hold that against you. NADDI has opened the doors here at Madison to making sure you have sustainable data. I’d encourage you to talk to someone from the RDS team and they can show you some free or cheap tools that are so easy to use, you’ll barely notice them. These tools, and the future of DDI will make sure that your data will contribute to science for as long as possible.

Morgaine Gilchrist-Scott is currently a Masters candidate in the School of Library and Information Science at UW-Madison. She hails from Ohio and has worked in Boston and New York before coming to Madison. She hopes to continue in data management and STEM librarianship with her degree.

NADDI Reflections [part 2]
Wed, 20 May 2015 17:25:27 +0000

Evan (L) and Morgaine (R)

This post on NADDI 2015 was written by Evan Meszaros, one of two recipients of an RDS student scholarship. Read Morgaine Gilchrist-Scott’s reflection.

The NADDI 2015 conference afforded its attendees a smorgasbord of content, from the basic to the advanced, and across a range of contexts, from the narrowly-focused to the bigger picture. As a newcomer to NADDI in addition to being a newcomer to most related topics, the broader and more basic views resonated with me the most.

Jane Fry, a Data Specialist at Carleton University’s MacOdrum Library in Ottawa, led one such basic and broad workshop session, entitled, “Discover the Power of DDI Metadata.” Fry introduced the Data Documentation Initiative (DDI) to those unfamiliar with the international, XML-based metadata specification, and discussed its applications, history, versioning, and the current challenges it faces as its developers improve its functionality and expand its adoption.
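To give a flavor of what an XML-based metadata specification like DDI looks like in practice, here is a sketch using Python’s standard library to read a simplified, codebook-style fragment. The element names and structure are illustrative only; real DDI documents use a richer, namespaced schema.

```python
import xml.etree.ElementTree as ET

# A simplified, illustrative fragment in the spirit of a DDI codebook.
# Real DDI uses XML namespaces and many more elements than shown here.
ddi_fragment = """
<codeBook>
  <stdyDscr>
    <titl>Example Social Survey, 2014</titl>
  </stdyDscr>
  <dataDscr>
    <var name="AGE"><labl>Age of respondent</labl></var>
    <var name="INC"><labl>Household income</labl></var>
  </dataDscr>
</codeBook>
"""

root = ET.fromstring(ddi_fragment)

# Pull the study title and a variable-name -> label dictionary out of
# the metadata, exactly the sort of machine reading DDI is designed for.
title = root.findtext("stdyDscr/titl")
variables = {v.get("name"): v.findtext("labl") for v in root.iter("var")}
print(title)
print(variables)
```

Because the metadata is plain, structured XML, any tool in any language can extract study titles and variable labels this way, which is what makes the specification useful across institutions.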

A plenary session featuring the UW-Madison School of Library and Information Studies’ Faculty Associate, Dorothea Salo, explored DDI’s place as an emerging metadata standard (mainly for large, social sciences datasets) amidst a zoo of established information standards. Her take-no-prisoners critique of the DDI community’s progress, however, sparked plenty of discussion and revealed that there is lots of work yet to be done to get the word out effectively.

The diversity and scale of projects implementing DDI, as well as the internationality of stakeholders in the initiative, were also on display throughout the conference. A number of sessions explored noteworthy projects, while others focused on the programs and scripts (e.g., Colectica and MTNA’s OpenDataForge) used to support DDI in these projects.

Two sessions in particular, both led by academic data librarians, very helpfully painted a picture of the broader world of research data services (RDS), in which tools like DDI are playing an ever more prominent role. Kristin Briney, Data Services Librarian at UW-Milwaukee, summarized findings to date from a study she and her collaborators are conducting on the current state of RDS as it exists in an official capacity at larger research universities across the US. While the findings she described were preliminary, their survey work suggests some interesting correlations between the size and research budgets of these institutions and the presence of established data services personnel, departments, or data policies.

Perhaps even more applicable to my own position, the subsequent session provided a glimpse into another university’s data services “operation.” Brianna Marshall, Digital Curation Coordinator, and Trisha Adamus, Data, Network, and Translational Research Librarian, both from UW-Madison’s Research Data Services, delivered reports of successful strategies and ongoing challenges faced while carrying out RDS core functions on their campus. A couple of takeaways gleaned from this session (and the conversations it sparked) included suggestions to improve education and outreach (by hosting a ‘brown bag’ series or publishing a digest of RDS stories of interest to researchers) and to develop a toolkit for researchers keyed to the various stages of the research data lifecycle. It’s clear from the many impressive projects and possibilities discussed throughout the conference that DDI, and the community of developers, partners, and software applications it represents, should be an important part of any such RDS toolkit.

Evan Meszaros is a graduate student in the UW-Madison School of Library and Information Studies, having just completed his first year in its online degree program. He is also a newly-hired librarian at Case Western Reserve University, where he plays both research data services and traditional/reference librarian roles.

Tools: OnCore and REDCap
Tue, 12 May 2015 16:24:25 +0000

Overview


REDCap (Research Electronic Data Capture) and OnCore (Online Collaborative Environment) are clinical data management tools supported by the UW Institute for Clinical and Translational Research (ICTR). See the table for a comparison of the features of the two tools.

[Table: comparison of OnCore and REDCap features]

These systems are used by researchers who conduct clinical trials in the School of Medicine and Public Health and in other units. OnCore is required for some types of clinical protocols. The two systems are designed for use with clinical research data, including identifiable information about subjects. In both systems, data are entered in forms. OnCore provides standard forms for managing clinical trials that can be customized by ICTR staff. REDCap users create their own forms, which can also be used to collect survey data. Supporting files and documents in various formats can also be uploaded to both systems.

Both the OnCore and REDCap systems are HIPAA compliant, employing secure networks, architectures, and appliances such as firewalls, routers, and gateways for routing data. Data in the systems are encrypted, and all actions are tracked and audited. In addition, access to data centers is restricted to authorized personnel. OnCore is also compliant with the requirements of Title 21 of the Code of Federal Regulations, Part 11 (Electronic Records; Electronic Signatures).

Both systems allow access by researchers at multiple study sites. Access rights can be specified for each user, to limit access to personal health information fields to specific individuals, or to allow only some users to enter data and others to electronically sign/verify and lock records.

Tracking changes/Versions
Both systems track all modifications to data and provide an interface where all changes/user actions can be viewed.

Data Documentation
Both systems allow upload of supporting documents describing the data and collection methods, such as data dictionaries, code books, protocols, etc.

Data Quality Controls
In both systems, forms can include several measures that enhance the accuracy/validity of data entered in forms. These include field notes describing allowed data values and field validation settings that limit data entry to specified ranges of values. Data quality rules can also be applied to search for missing values and empty fields in forms. In addition, data records can be verified and locked.
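The kind of field validation described above can be sketched in a few lines. The field names, types, and allowed ranges below are hypothetical illustrations, not actual REDCap or OnCore configuration.

```python
# Hypothetical validation rules in the spirit of form field settings:
# each field has a type and an allowed range of values.
RULES = {
    "age": {"type": int, "min": 0, "max": 120},
    "systolic_bp": {"type": int, "min": 60, "max": 250},
}

def validate(record):
    """Return a list of problems: missing values and out-of-range entries."""
    problems = []
    for field, rule in RULES.items():
        value = record.get(field)
        if value is None or value == "":
            problems.append(f"{field}: missing value")
            continue
        try:
            value = rule["type"](value)
        except (TypeError, ValueError):
            problems.append(f"{field}: not a valid {rule['type'].__name__}")
            continue
        if not rule["min"] <= value <= rule["max"]:
            problems.append(f"{field}: {value} outside {rule['min']}-{rule['max']}")
    return problems

# An age of 200 is out of range, and the blood pressure field is empty.
print(validate({"age": "200", "systolic_bp": ""}))
```

Running rules like these at data entry time, rather than after export, is what keeps missing values and impossible entries out of the final dataset.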

Both systems allow export of data in a variety of formats for use in statistical software, such as Excel, SAS, SPSS, and others.

Data Archiving Platforms: Dryad
Tue, 05 May 2015 18:32:57 +0000

by Brianna Marshall, Digital Curation Coordinator

This is part two of a three-part series where I explore platforms for archiving and sharing your data. Read the first post in the series, focused on UW’s institutional repository, MINDS@UW.

To help you better understand your options, here are the areas I address for each platform:

  • Background information on who can use it and what type of content is appropriate
  • Options for sharing and access
  • Archiving and preservation benefits the platform offers
  • Whether the platform complies with the forthcoming OSTP mandate


About: Dryad is a repository appropriate for data that accompanies published articles in the sciences or medicine. Many journals partner with Dryad to provide submission integration, which makes linking the data between Dryad and the journal easy for you. Pricing varies depending on the journal you are publishing in; some journals cover the data publishing charge (DPC) while others do not. Read more about Dryad’s pricing model or browse the journals with sponsored DPCs.

Sharing and access: Data uploaded to Dryad are made available for reuse under the Creative Commons Zero (CC0) license. There are no format restrictions to what you upload, though you are encouraged to use community standards if possible. Your data will be given a DOI, enabling you to get credit for sharing.

Archiving and preservation: According to the Dryad website, “Data packages in Dryad are replicated across multiple systems to support failover, improve access times, allow recovery from disk failures, and preserve bit integrity. The data packages are discoverable and backed up for long-term preservation within the DataONE network.”

OSTP mandate: The OSTP mandate requires all federal funding agencies with over $100 million in R&D funds to make greater efforts to make grant-funded research outputs more accessible. This will likely mean that data must be publicly accessible and have an assigned DOI (though you’ll need to check with your funding agency for the exact requirements). As long as the data you need to share is associated with a published article, Dryad is a good candidate for OSTP-compliant data: it mints DOIs and makes data openly available under a CC0 license.

Visit Dryad.

Have additional questions or concerns about where you should archive your data? Contact us.

Building a Practical DM Foundation
Thu, 30 Apr 2015 18:06:40 +0000

By Elliott Shuppy, Masters Candidate, School of Library and Information Studies

In addition to being an active research lab on the UW-Madison campus, the Laboratory for Optical and Computational Imaging (LOCI) pioneers experimental instrumentation techniques and develops software to support those techniques. One major database platform development is OMERO, which stands for Open Microscopy Environment Remote Object. OMERO is an open, consortium-driven software package with capabilities to view, organize, share, and analyze image data. One hiccup is that it’s not widely used at LOCI.

Having identified this problem, my mentor Kevin Eliceiri, LOCI director, and I thought it would be a good idea for me to develop expertise in this software as a project for ZOO 699 and figure out how to incorporate it into a researcher workflow at LOCI. On-site researcher Jayne Squirrell was the ideal candidate, as she is a highly organized researcher working in the lab, providing us an excellent use case. Before we could insert OMERO into her workflow, we had to lay some formal foundational data management practices, which will transfer to her use of OMERO.

We identified four immediate needs:

  • A simple and consistent folder structure
  • A way to identify all files associated with an experiment
  • An ID system that can be used in the OMERO database
  • Documentation

We then developed solutions to meet each need. The first solution was a formalized folder structure, which we chose to organize by Jayne’s workload:

Lab\Year (YYYY)\Project\Sub-project\Experiment\Replicates\Files

This folder structure will help organize and regularize naming of files and data sets not only locally and on the backup server, but also within the OMERO platform.
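For illustration, a convention like this can be scripted so that folders are always created the same way. A minimal sketch with Python’s pathlib follows; the lab and project names are invented for the example.

```python
import tempfile
from pathlib import Path

def experiment_dir(root, lab, year, project, subproject, experiment, replicate):
    """Create (if needed) and return the nested experiment folder,
    following Lab/Year/Project/Sub-project/Experiment/Replicate."""
    path = Path(root, lab, f"{year:04d}", project, subproject, experiment, replicate)
    path.mkdir(parents=True, exist_ok=True)
    return path

# Demonstrate in a throwaway temporary directory with made-up names.
root = tempfile.mkdtemp()
d = experiment_dir(root, "Ogle", 2014, "Cardiac", "Imaging", "O_1411_02", "R1")
print(d)
```

Scripting the hierarchy removes the chance of a hand-typed folder drifting from the convention, which matters once the same names must match on the backup server and in OMERO.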

In order to identify all files associated with a particular experiment, we developed a unique identifier that we termed the Experiment ID. This identifier leads file names and consists of the following values: the initial of the collaborating lab (O or H) and a numerical sequence based on the current year, month, series number of experiments, and replicate.

Example: O_1411_02_R1

The example reads Ogle lab, 2014, November, second experiment (within the month of November), replicate one. Incorporating this ID into file names will help to identify and recall data sets of a particular experiment and any related files such as processed images and analyses.
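The scheme is regular enough to build and check by script. Here is a sketch that generates and parses IDs following the convention as described; everything beyond that convention is assumed for illustration.

```python
import re

# Pattern for IDs like O_1411_02_R1: lab initial (O or H), two-digit
# year and month, two-digit experiment number, replicate number.
ID_PATTERN = re.compile(
    r"^(?P<lab>[OH])_(?P<yy>\d{2})(?P<mm>\d{2})_(?P<seq>\d{2})_R(?P<rep>\d+)$"
)

def make_id(lab, year, month, seq, rep):
    """Build an Experiment ID from its parts."""
    return f"{lab}_{year % 100:02d}{month:02d}_{seq:02d}_R{rep}"

def parse_id(exp_id):
    """Split an Experiment ID back into its parts, or raise on a bad ID."""
    m = ID_PATTERN.match(exp_id)
    if m is None:
        raise ValueError(f"not a valid Experiment ID: {exp_id}")
    return m.groupdict()

print(make_id("O", 2014, 11, 2, 1))     # O_1411_02_R1
print(parse_id("O_1411_02_R1")["lab"])  # O
```

A check like `parse_id` is handy at upload time: any file whose name does not lead with a well-formed ID can be flagged before it lands in the database.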

Further, both the file organization and the Experiment ID can aid organization and identification within OMERO. The database platform has two levels of nesting: the folder is the top tier; within each folder, datasets can be nested; and each dataset contains a number of images. So, we can adapt the folder structure naming to organize folders and datasets, and apply the unique identifier to name uploaded image objects. These upgrades make searching more robust and similar in process to local drive searches.

Lastly, we developed documentation for reference. We realized that Experiment IDs need to be accessible at the prep bench and the microscope, so we created a mobile-accessible spreadsheet containing information on each experiment. We termed this document the Experimental Worksheet; it contains the following information:

  • Experiment ID
  • Experiment Description
  • Experiment Start Date
  • Project Name
  • Sub-project Name
  • Notes
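As a sketch, the worksheet’s columns translate directly to a plain CSV file readable on any device; the example row below is invented.

```python
import csv
import io

# The six Experimental Worksheet columns listed above.
COLUMNS = [
    "Experiment ID", "Experiment Description", "Experiment Start Date",
    "Project Name", "Sub-project Name", "Notes",
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
# A made-up example row for illustration.
writer.writerow({
    "Experiment ID": "O_1411_02_R1",
    "Experiment Description": "Second November imaging run",
    "Experiment Start Date": "2014-11-10",
    "Project Name": "Cardiac",
    "Sub-project Name": "Imaging",
    "Notes": "replicate one",
})
worksheet_csv = buf.getvalue()
print(worksheet_csv)
```

Keeping the worksheet in a plain format like CSV means it opens equally well in a spreadsheet app at the bench and in a script that cross-checks IDs against the file system.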

This document will act as a quick reference of bare-bones experiment information for Jayne and her student workers. We also realized that Jayne’s student workers need to know the processes in each step of her workflow, so we developed step-by-step procedures and policies for each phase of the workflow. These procedural and policy documents set management expectations and conduct for Jayne’s data. Now, with such a data management foundation laid, the next step is to address our root problem: discerning how Jayne can best benefit from using OMERO and where it makes sense in her workflow.

NSF Releases New Public Access Plan
Mon, 27 Apr 2015 14:19:14 +0000

New Requirements to Make Work and Data More Transparent and Reusable

April 2015 – The National Science Foundation (NSF) recently released a set of public access requirements for researchers applying for grants with an effective date on or after January 2016. According to the plan, entitled Today’s Data, Tomorrow’s Discoveries, the objectives of increasing public accessibility are to make research and data easier for other investigators and educational institutions to use, and to spur innovation from these same communities.

The NSF sees these requirements as the “initial implementation” of a framework that will change and grow over time to include additional research products and degrees of accessibility.

The scope of the plan is initially focused on four types of outcome products:

  • Articles in peer-reviewed journals
  • Papers accepted as part of juried conference proceedings
  • Articles/juried papers in conference proceedings authored entirely or in part by NSF employees
  • Data generated and curated as part of an NSF-required Data Management Plan (DMP).

Researchers who receive all or partial NSF funding will be required to:

  • Deposit either the version of record or the final accepted peer-reviewed manuscript of these products in a public-access-compliant repository. At this time, the NSF has designated the Department of Energy’s PAGES (Public Access Gateway for Energy and Science) system as that repository.
  • Make these outcome products freely available for download, reading and analysis no later than 12 months after initial publication.
  • Provide a minimum level of machine-readable metadata with each product at the time of initial publication.
  • Ensure the long-term preservation of products.
  • Provide a unique persistent identifier to all products in the award annual and final reports.
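To make the machine-readable metadata requirement concrete, a minimal record might look like the following sketch. The field names and values are hypothetical, since the exact fields NSF and PAGES require were still forthcoming at the time of writing.

```python
import json

# A hypothetical minimal metadata record for a deposited product.
# Every field name and value here is illustrative, not an NSF schema.
record = {
    "title": "Example NSF-funded article",
    "authors": ["A. Researcher", "B. Collaborator"],
    "publication_date": "2016-03-01",
    "identifier": "doi:10.0000/example",  # hypothetical persistent identifier
    "award_id": "NSF-0000000",            # hypothetical award number
    "embargo_months": 12,                 # freely available within 12 months
}

metadata_json = json.dumps(record, indent=2, sort_keys=True)
print(metadata_json)
```

The point of requiring metadata in a form like this is that harvesters and repositories can index deposits automatically, without a person re-keying titles, authors, and identifiers.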

The NSF expects that investigators will be able to deposit research products into the PAGES system by the end of the 2015 calendar year. Data underlying journal article or conference paper findings should be deposited in a repository as specified by the publication or as described in the research proposal’s DMP.

Public access requirement specifics will be provided in future NSF documents and grant solicitations.

For more information on how these new requirements could affect your grant proposal, contact the solicitation’s Cognizant Program Officer or UW-Madison’s Research Data Services.
