UW-Madison Research Data Services (researchdata.wisc.edu)

Get to Know the RDS Team: Luke Bluma
Published May 27, 2015

In this series, we introduce the team members who make up Research Data Services (RDS). This interview is with Luke Bluma, RDS team member and Engagement Manager for the Campus Computing Infrastructure (CCI) initiative.

Describe your role with CCI.

I am the Engagement Manager for the Campus Computing Infrastructure (CCI) initiative. CCI is a campus-sponsored and campus-governed initiative that delivers shared, scalable, secure IT infrastructure services to campus partners at UW-Madison. Services include data center management, server hosting, storage, and backup. My role is all about building relationships, learning how departments on campus do what they do, and gathering requirements on how shared IT infrastructure services may be able to help them out. My main focus over the last couple of years has been file storage.

What’s the most interesting project you’ve worked on recently?

I recently got to work with a faculty researcher who had some storage needs. He is going to be utilizing the computing resources through the Advanced Computing Initiative (ACI) and needed a place to store the research data after the computations were complete. I was able to meet with him, learn a little about his research, identify his storage needs, and set him up with our scalable, affordable network storage service.

What excites you about supporting research data management on campus?

I love supporting research data management on campus because my past roles have focused mostly on administrative data, and while administrative data is critical to our campus, it isn’t always as exciting as research data, in my opinion. I love being able to provide the platforms (virtual servers, storage, backup) that allow researchers to innovate. Working for UW-Madison is great, and being able to help support the research we do, in some small way, makes that even better!

If you had an unlimited budget, what would you institute on campus?

Free Babcock ice cream for all! In every building, day or night! (I wish!)

If I had an unlimited budget, I would re-think how we provide IT services on campus. I would work with campus to identify what core IT services should be provided by the University at no cost. This might include things like networking, virtual and physical servers, storage for your group, backup for your data and computers, etc. This would be a tremendous undertaking and would require a huge investment, but it would allow researchers and departments on campus to focus less on IT infrastructure (like running their own server room or storage array) and focus even more on their missions!

In addition to that, since I have an unlimited budget, I would also establish a group that would be available to facilitate access to these services. A group of people that could meet with you in person, learn about your work, identify potential solutions and help you get started. Having free tools is great, but it’s even better when someone is available to show you how to utilize them in the best ways possible.

Do you have a favorite UW building or landmark?

This was a tough question. I’m lucky because in my role I get to roam around campus a lot and see a lot of different buildings. I love the tall buildings because you get some spectacular views of downtown Madison from way up there. However, if I have to pick just one, I’d have to go with the Memorial Union. It gave me so many great memories during my undergraduate years here at UW-Madison – from studying in Der Rathskeller to enjoying a beer on a sunny afternoon at the Terrace. And recently one of my best friends got married there, so the memories just keep adding up!

What do you like to do outside of work?

I love to golf! However, I should be honest here… while I do love to golf, I’m not very good at it. I was on the golf team in high school because it allowed me to play golf after school for free, not because I was a great golfer. I love being able to get outside on a sunny Sunday afternoon and play 18 holes with some friends, even if I spend a lot of the time in the woods looking for my ball.

Do you have a question for Luke or the rest of the RDS team? Contact us today.

NADDI Reflections [part 1]
Published May 20, 2015

[Photo: Evan (L) and Morgaine (R)]

This post on NADDI 2015 was written by Morgaine Gilchrist Scott, one of two recipients of an RDS student scholarship. Read Evan Meszaros’ reflection.

In my past life, I was a public health researcher. In my current one, I’m a first year SLIS graduate student. I’m amazed and appalled at the data I once lost due to convenience. I don’t think we knew (or cared about) anything better than the proprietary format which met our immediate needs perfectly. I just looked up the software, and it’s already dead.

Have you ever heard of the Överkalix study? It’s often cited as the seminal study in epigenetics. Scientists were able to discover things like a greater BMI at age 9 in the sons (but not the daughters) of fathers who began smoking early, and that a granddaughter’s risk of cardiovascular mortality increased when there was a sharp change in food availability for her paternal grandmother.

But HOW were researchers able to conclude these things? Data. Old data. Old, easily explainable data. Scientists looked at records from 1890, 1905, and 1920 on birthrates and various environmental factors and were able to follow up with children and grandchildren. These records were kept on paper, in a safe place, and in the same language used today. But in today’s digital age, we may be depriving future generations of drawing similarly groundbreaking conclusions from the data we collect now.

We’re producing data at a greater rate than ever before, and who knows what could be useful in the future. But with poor metadata, and the use of proprietary formats, we’re also losing more than ever. Fortunately, the good people involved with the Data Documentation Initiative are working towards a world where that won’t happen. I learned about so many easy, free, and important tools at NADDI. I can’t wait to implement them in my own research.

Now, you’ve missed the conference. That’s a shame, but we won’t hold that against you. NADDI has opened the doors here at Madison to making sure you have sustainable data. I’d encourage you to talk to someone from the RDS team and they can show you some free or cheap tools that are so easy to use, you’ll barely notice them. These tools, and the future of DDI will make sure that your data will contribute to science for as long as possible.

Morgaine Gilchrist-Scott is currently a Masters candidate in the School of Library and Information Science at UW-Madison. She hails from Ohio and has worked in Boston and New York before coming to Madison. She hopes to continue in data management and STEM librarianship with her degree.

NADDI Reflections [part 2]
Published May 20, 2015

[Photo: Evan (L) and Morgaine (R)]

This post on NADDI 2015 was written by Evan Meszaros, one of two recipients of an RDS student scholarship. Read Morgaine Gilchrist-Scott’s reflection.

The NADDI 2015 conference afforded its attendees a smorgasbord of content, from the basic to the advanced, and across a range of contexts, from the narrowly-focused to the bigger picture. As a newcomer to NADDI in addition to being a newcomer to most related topics, the broader and more basic views resonated with me the most.

Jane Fry, a Data Specialist at Carleton University’s MacOdrum Library in Ottawa, led one such basic and broad workshop session, entitled, “Discover the Power of DDI Metadata.” Fry introduced the Data Documentation Initiative (DDI) to those unfamiliar with the international, XML-based metadata specification, and discussed its applications, history, versioning, and the current challenges it faces as its developers improve its functionality and expand its adoption.

A plenary session featuring the UW-Madison School of Library and Information Studies’ Faculty Associate, Dorothea Salo, explored DDI’s place as an emerging metadata standard (mainly for large, social sciences datasets) amidst a zoo of established information standards. Her take-no-prisoners critique of the DDI community’s progress, however, sparked plenty of discussion and revealed that there is lots of work yet to be done to get the word out effectively.

The diversity and scale of projects implementing DDI, as well as the internationality of the initiative’s stakeholders, was also on display throughout the conference. A number of sessions explored noteworthy projects (a growing list of which can be found here), while others focused on the programs and scripts (e.g., Colectica, MTNA’s OpenDataForge) used to support DDI in these projects.

Two sessions in particular, both led by academic data librarians, very helpfully painted a picture of the broader world of research data services (RDS), in which tools like DDI are playing an ever more prominent role. Kristin Briney, Data Services Librarian at UW-Milwaukee, summarized her findings to date for a study she and her collaborators are conducting on the current state of RDS as it exists in an official capacity at larger research universities across the US. While the findings she described were preliminary, their survey work suggests some interesting correlations between the size and research budgets of these institutions and the presence of established data services personnel/departments or data policies.

Perhaps even more applicable to my own position, the subsequent session provided a glimpse into another university’s data services “operation”. Brianna Marshall, Digital Curation Coordinator, and Trisha Adamus, Data, Network, and Translational Research Librarian, both from UW-Madison’s Research Data Services, delivered reports of successful strategies and ongoing challenges faced while carrying out RDS core functions on their campus. A couple of takeaways gleaned from this session (and the ensuing conversations it sparked) included suggestions to improve education and outreach (by hosting a ‘brown bag’ series or publishing a digest of RDS stories of interest to researchers) and to develop a toolkit for researchers keyed to the various stages of the research data lifecycle. It’s clear from the many impressive projects and potentialities discussed throughout the conference that DDI, and the community of developers, partners, and software applications it represents, should be an important part of any such RDS toolkit.

Evan Meszaros is a graduate student in the UW-Madison School of Library and Information Studies, having just completed his first year in its online degree program. He is also a newly-hired librarian at Case Western Reserve University, where he plays both research data services and traditional/reference librarian roles.

Tools: OnCore and REDCap
Published May 12, 2015

Overview


REDCap (Research Electronic Data Capture) and OnCore (Online Collaborative Environment) are clinical data management tools supported by the UW Institute for Clinical and Translational Research (ICTR). See the table below for a comparison of the features of the two tools.

[Table: OnCore and REDCap feature comparison]

These systems are used by researchers who conduct clinical trials in the School of Medicine and Public Health and in other units. OnCore is required for some types of clinical protocols. The two systems are designed for use with clinical research data, including identifiable information about subjects. In both systems, data is entered in forms. OnCore provides standard forms for managing clinical trials that can be customized by ICTR staff. REDCap users create their own forms, which can also be used to collect survey data. Supporting files and documents in various formats can also be uploaded to both systems.

Security
Both the OnCore and REDCap systems are HIPAA compliant, employing secure networks, architectures, and appliances such as firewalls, routers, and gateways for routing data. Data in the systems are encrypted, and all actions are tracked and audited. In addition, access to data centers is restricted to authorized personnel only. OnCore is also compliant with the requirements of Code of Federal Regulations Title 21, Part 11 (Electronic Records; Electronic Signatures).

Sharing
Both systems allow access by researchers at multiple study sites. Access rights can be specified for each user, to limit access to personal health information fields to specific individuals, or to allow only some users to enter data and others to electronically sign/verify and lock records.

Tracking changes/Versions
Both systems track all modifications to data and provide an interface where all changes/user actions can be viewed.

Data Documentation
Both systems allow upload of supporting documents describing the data and collection methods, such as data dictionaries, code books, protocols, etc.

Data Quality Controls
In both systems, forms can include several measures that enhance the accuracy/validity of data entered in forms. These include field notes describing allowed data values and field validation settings that limit data entry to specified ranges of values. Data quality rules can also be applied to search for missing values and empty fields in forms. In addition, data records can be verified and locked.
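As a rough illustration (not REDCap’s or OnCore’s actual configuration; the field names and limits below are invented), the two kinds of quality controls described above can be sketched in a few lines: a range check that mimics a field validation setting, and a scan that mimics a data quality rule flagging missing values.

```python
def validate_field(value, low, high):
    """Mimic a field validation setting: accept only values within [low, high]."""
    return value is not None and low <= value <= high

def find_missing(records, required_fields):
    """Mimic a data quality rule that searches records for empty fields."""
    problems = []
    for i, rec in enumerate(records):
        for field in required_fields:
            if rec.get(field) in (None, ""):
                problems.append((i, field))
    return problems

# Hypothetical clinical records; the second is missing a required value.
records = [
    {"subject_id": "S01", "systolic_bp": 118},
    {"subject_id": "S02", "systolic_bp": None},
]
print(validate_field(118, 70, 250))              # within the allowed range
print(find_missing(records, ["systolic_bp"]))    # flags record 1
```

In the real systems these rules are configured through the form-builder interface rather than written as code, but the logic they apply is the same.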

Exporting 
Both systems allow export of data in a variety of formats for use in statistical software, such as Excel, SAS, SPSS, and others.

Data Archiving Platforms: Dryad
Published May 5, 2015

by Brianna Marshall, Digital Curation Coordinator

This is part two of a three-part series where I explore platforms for archiving and sharing your data. Read the first post in the series, focused on UW’s institutional repository, MINDS@UW.

To help you better understand your options, here are the areas I address for each platform:

  • Background information on who can use it and what type of content is appropriate
  • Options for sharing and access
  • Archiving and preservation benefits the platform offers
  • Whether the platform complies with the forthcoming OSTP mandate

Dryad

About: Dryad is a repository appropriate for data that accompanies published articles in the sciences or medicine. Many journals partner with Dryad to provide submission integration, which makes linking the data between Dryad and the journal easy for you. Pricing varies depending on the journal you are publishing in; some journals cover the data publishing charge (DPC) while others do not. Read more about Dryad’s pricing model or browse the journals with sponsored DPCs.

Sharing and access: Data uploaded to Dryad are made available for reuse under the Creative Commons Zero (CC0) license. There are no format restrictions to what you upload, though you are encouraged to use community standards if possible. Your data will be given a DOI, enabling you to get credit for sharing.

Archiving and preservation: According to the Dryad website, “Data packages in Dryad are replicated across multiple systems to support failover, improve access times, allow recovery from disk failures, and preserve bit integrity. The data packages are discoverable and backed up for long-term preservation within the DataONE network.”

OSTP mandate: The OSTP mandate requires all federal funding agencies with over $100 million in R&D funds to make greater efforts to make grant-funded research outputs more accessible. This will likely mean that data must be publicly accessible and have an assigned DOI (though you’ll need to check with your funding agency for the exact requirements). As long as the data you need to share is associated with a published article, Dryad is a good candidate for OSTP-compliant data: it mints DOIs and makes data openly available under a CC0 license.

Visit Dryad.

Have additional questions or concerns about where you should archive your data? Contact us.

Building a Practical DM Foundation
Published April 30, 2015

By Elliott Shuppy, Masters Candidate, School of Library and Information Studies

In addition to being an active research lab on the UW-Madison campus, the Laboratory for Optical and Computational Imaging (LOCI) develops many experimental instrumentation techniques and the software to support them. One major database platform development is OMERO, which stands for Open Microscopy Environment Remote Object. OMERO is an open, consortium-driven software package for viewing, organizing, sharing, and analyzing image data. One hiccup is that it’s not widely used at LOCI.

Having identified this problem, my mentor Kevin Eliceiri, LOCI director, and I thought it would be a good idea for me to develop expertise in this software as a project for ZOO 699 and figure out how to incorporate it into a researcher workflow at LOCI. On-site researcher Jayne Squirrell was the ideal candidate: she is a highly organized researcher working in the lab, providing us an excellent use case. Before we could insert OMERO into her workflow, we had to lay some formal foundational management practices, which will transfer to her use of OMERO.

We identified four immediate needs:

  • A simple and consistent folder structure
  • A way to identify all files associated with an experiment
  • An ID system that can be used in the OMERO database
  • Documentation

We then developed solutions to meet each need. The first solution was a formalized folder structure, which we chose to organize by Jayne’s workload:

Lab\Year (YYYY)\Project\Sub-project\Experiment\Replicates\Files

This folder structure will help organize and regularize naming of files and data sets not only locally and on the backup server, but also within the OMERO platform.
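The hierarchy translates directly into a path builder. As a minimal sketch (the lab, project, and experiment names here are invented for illustration, and forward slashes are used for portability):

```python
from pathlib import PurePosixPath

def experiment_path(lab, year, project, subproject, experiment, replicate):
    """Assemble a path following Lab/Year/Project/Sub-project/Experiment/Replicate."""
    return PurePosixPath(lab, f"{year:04d}", project, subproject, experiment, replicate)

p = experiment_path("Ogle", 2014, "WoundHealing", "Collagen", "O_1411_02", "R1")
print(p)  # Ogle/2014/WoundHealing/Collagen/O_1411_02/R1
```

Generating paths programmatically like this, rather than typing them by hand, is one way to keep the naming regular across the local drive, the backup server, and OMERO.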

In order to identify all files associated with a particular experiment, we developed a unique identifier that we termed the Experiment ID. This identifier leads file names and consists of the following values: the initial of the collaborating lab (O or H) and a numerical sequence based on the current year, month, experiment series number, and replicate.

Example: O_1411_02_R1

The example reads Ogle lab, 2014, November, second experiment (within the month of November), replicate one. Incorporating this ID into file names will help to identify and recall data sets of a particular experiment and any related files such as processed images and analyses.
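As a sketch of how this scheme could be automated (these helper functions are hypothetical, not part of LOCI’s actual tooling), the ID can be composed from its parts and parsed back apart:

```python
import re

def make_experiment_id(lab_initial, year, month, series, replicate):
    """Compose an Experiment ID: lab initial, YYMM, series number, replicate."""
    return f"{lab_initial}_{year % 100:02d}{month:02d}_{series:02d}_R{replicate}"

def parse_experiment_id(exp_id):
    """Split an Experiment ID back into its component values."""
    m = re.fullmatch(r"([OH])_(\d{2})(\d{2})_(\d{2})_R(\d+)", exp_id)
    if not m:
        raise ValueError(f"not a valid Experiment ID: {exp_id!r}")
    lab, yy, mm, series, rep = m.groups()
    return {"lab": lab, "year": 2000 + int(yy), "month": int(mm),
            "series": int(series), "replicate": int(rep)}

print(make_experiment_id("O", 2014, 11, 2, 1))  # O_1411_02_R1
```

Because the format is regular, the same pattern can later drive searches in OMERO or batch-renaming scripts.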

Further, both the file organization and the Experiment ID can aid organization and identification within OMERO. The database platform has two levels of nesting resolution: the folder is the top tier; within each folder a dataset can be nested; and each dataset contains a number of images. So we can adapt the folder structure naming to organize files and datasets, and apply the unique identifier to name uploaded image objects. These upgrades make searching more robust and similar in process to local drive searches.

Lastly, we developed documentation for reference. We realized that Experiment IDs need to be accessible at the prep bench and microscope. We subsequently created a mobile-accessible spreadsheet containing information on each experiment. We termed this document the Experimental Worksheet; it contains the following information:

  • Experiment ID
  • Experiment Description
  • Experiment Start Date
  • Project Name
  • Sub-project Name
  • Notes
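In spreadsheet form, a worksheet row might look like the following CSV sketch (the row values are invented for illustration):

```python
import csv
import io

# Columns of the Experimental Worksheet, as listed above.
FIELDS = ["Experiment ID", "Experiment Description", "Experiment Start Date",
          "Project Name", "Sub-project Name", "Notes"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "Experiment ID": "O_1411_02_R1",
    "Experiment Description": "Second November imaging run",
    "Experiment Start Date": "2014-11-12",
    "Project Name": "WoundHealing",
    "Sub-project Name": "Collagen",
    "Notes": "replicate one",
})
print(buf.getvalue())
```

Keeping the worksheet in a plain, machine-readable format like CSV means the same records can later be cross-checked against file names or imported elsewhere.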

This document will act as a quick reference of bare-bones experiment information for Jayne and student workers. We also realized that Jayne’s student workers need to know the processes for each step of her workflow, so we developed step-by-step procedures and policies for each phase of the workflow. These procedural and policy documents set management expectations and conduct for Jayne’s data. Now, with such a data management foundation laid, the next step is to get to our root problem: discerning how Jayne can best benefit from using OMERO and where it makes sense in her workflow.

NSF Releases New Public Access Plan
Published April 27, 2015

New Requirements to Make Work and Data More Transparent and Reusable

April 2015 – The National Science Foundation (NSF) recently released a set of public access requirements for researchers applying for grants with an effective date on or after January 2016. According to the plan, entitled Today’s Data, Tomorrow’s Discoveries, the objectives of increasing public accessibility are to make research and data easier for other investigators and educational institutions to use, and to spur innovation in these same communities.

The NSF sees these requirements as the “initial implementation” of a framework that will change and grow over time to include additional research products and degrees of accessibility.

The scope of the plan is initially focused on four types of outcome products:

  • Articles in peer-reviewed journals
  • Papers accepted as part of juried conference proceedings
  • Articles/juried papers in conference proceedings authored entirely or in part by NSF employees
  • Data generated and curated as part of an NSF-required Data Management Plan (DMP)

Researchers who receive all or partial NSF funding will be required to:

  • Deposit either the version of record or the final accepted peer-reviewed manuscript of these products in a public access compliant repository designated by the NSF. At this time, the NSF has designated the Department of Energy’s PAGES (Public Access Gateway for Energy and Science) system as that repository.
  • Make these outcome products freely available for download, reading and analysis no later than 12 months after initial publication.
  • Provide a minimum level of machine-readable metadata with each product at the time of initial publication.
  • Ensure the long-term preservation of products.
  • Provide a unique persistent identifier to all products in the award annual and final reports.

The NSF expects that investigators will be able to deposit research products into the PAGES system by the end of the 2015 calendar year. Data underlying journal article or conference paper findings should be deposited in a repository as specified by the publication or as described in the research proposal’s DMP.

Public access requirement specifics will be provided in future NSF documents and grant solicitations.

For more information on how these new requirements could affect your grant proposal, contact the solicitation’s Cognizant Program Officer or the UW-Madison’s Research Data Services.

Final Spring Holz Brown Bag Talk
Published April 9, 2015

The final spring brown bag, The Role of Metadata in Research: Reflections on NADDI 2015, will be presented by Barry Radler, a researcher at the UW-Madison Institute on Aging.

TIME: Wednesday, April 29, 12pm-1pm.

PLACE: Bunge Room, School of Library and Information Studies, 4th floor of Helen C. White Hall.

ABSTRACT: The increasing availability of research and other data via the internet has spurred interest in and the need for better documentation of such data. The Open Data movement gaining momentum among federal funding agencies, academic libraries, and professional journals is also contributing to a recognition that good documentation and metadata are essential to distinguishing the quality of research datasets and facilitating their discovery and use in an online environment of ever-expanding information. This presentation will provide a primer in metadata use and metadata standards like the Data Documentation Initiative (DDI). It will also include reflections by the presenter on his particular DDI use cases, as well as his experience hosting the 3rd annual North American DDI Conference. There will be an opportunity for questions and discussion.

ABOUT DR. RADLER: Dr. Radler’s research interests explore how human beings process information, make decisions, and behave in social, political, and marketing contexts. For the last 20 years he has explored, advocated, and implemented the use of information technologies to improve research processes and data. Dr. Radler is currently the Data Management Director for the MIDUS study (www.midus.wisc.edu), a complex longitudinal study that uses an XML metadata standard called the Data Documentation Initiative (www.ddialliance.org/) to develop web-based documentation.

Please RSVP for this talk if you plan to attend. View other talks in this series in our archive.

Let’s Talk About Storage
Published April 9, 2015

By Luke Bluma, IT Engagement Manager for the Campus Computing Infrastructure (CCI)

Data is a critical part of our lives here at UW-Madison. We collect, analyze, and share data every day to get our jobs done. Data comes in all shapes and sizes and it needs the right place to live. That’s where storage comes in.

However, storage can be a loaded term. It can mean a thumb drive, your computer’s hard drive, storage accessed via a server, cloud storage, or a large campus-wide storage service. It is all of these things, but not all of them will fit your needs. Your needs are what matter, and they will drive which solution(s) will work for you.

I am the Engagement Manager for the Campus Computing Infrastructure (CCI) initiative. I work with campus partners on their data center, server, storage and/or backup needs. Storage is currently a big focus for me, so I wanted to share some thoughts about evaluating potential storage solutions.

[Photo: storage array in a CCI data center]

The main areas to think about are:

  • What kinds of data are you working with?
  • What are your “must-haves”?
  • What storage options are available at UW-Madison?

What kinds of data are you working with?

This is the first big question you want to focus on because it drastically impacts what options are available to you. Are you working with FERPA data, sensitive data, restricted data, PCI data, etc.? Each of these will impact what service(s) you can or can’t utilize. For more information on Restricted Data see: https://www.cio.wisc.edu/security/about/campus-initiatives/restricted-data-security-standards/

What are your “must-haves”?

Once you have identified the types of data you are working with, it is crucial to determine your must-have requirements for a storage solution. Does it need to be secure? If so, how secure? Does it need to be accessed by people outside of UW-Madison? Does it need to be high-performance storage? Does it need to scale to 20+ TB? Does it need to be accessible via the web? These are just example questions, and the key here is that there is no perfect storage solution. Some services do X, Y, and Z, and others do X, Y, and A but not Z. So determining your “must-haves” will help you figure out which services you can work with, and which you can’t.

What storage options are available at UW-Madison?

Now that you have identified the kinds of data and the “must-haves” for your solution, the final step is to evaluate what storage options are available to you at UW-Madison. Storage is an evolving technology, so specific services will change over time, but here are good places to start learning about what services are available to you:

  • Local IT – if you have a local IT group, then talk to them first about what local options may be available to you
  • Campus Computing Infrastructure (CCI) – if you need network storage or server storage that isn’t focused on high performance computing then CCI has several options that could work depending on your needs
  • Advanced Computing Initiative (ACI) – if you need to do high performance or high throughput computing then ACI has several options that could work depending on your needs
  • Division of Information Technology (DoIT) – if you need cloud storage, like Box.com, or local storage, like an external hard drive, then DoIT has solutions that could work for you as well

This can seem like a lot to think about, and to be honest it can be quite confusing at times. The good news is that you have help! Research Data Services (RDS) can be a great starting point for your storage needs. We can focus on the key question: what are you looking to do? Then we can help you evaluate some potential options for moving forward based on your needs.

To get started, contact RDS at http://researchdata.wisc.edu/help/contact-us/ or email me at cci@cio.wisc.edu.

Introducing ORCID at UW-Madison
Published April 2, 2015

By Trisha Adamus, Data, Network, and Translational Research Librarian at Ebling Library

ORCID (pronounced “orkid”) stands for Open Researcher and Contributor ID. ORCID is an open, non-profit, community-driven effort to create and maintain a registry of unique researcher identifiers. An ORCID iD acts as a unique identifier for a person, much like a publication has a DOI. ORCID acts as a transparent “hub” between different sites and services in the researcher workflow – funders, publishers, repositories, research networks and more.

The ORCID Registry is available free of charge to individuals, who may obtain an ORCID identifier, manage their record of activities, and search for others in the Registry. The Health Sciences Library (Ebling Library) is a licensed member of ORCID, which allows the Library to link biographical and bibliographic information to ORCID identifiers, update ORCID records, receive updates from ORCID, and register employees and students for ORCID identifiers.

While not mandatory, publishers and funding agencies are increasingly adopting ORCID as a tool to manage submissions and applications. At some point in the future having an ORCID iD and using ORCID as a tool may be required. For new researchers, an ORCID iD offers a way to have an accurate record of scholarly output from the very beginning. An ORCID iD can be used on CVs, departmental webpages, email signatures, in professional directories and more.
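One practical detail: the final character of an ORCID iD is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so a transcription error on a CV or web page is detectable. A minimal validator, as a sketch:

```python
def orcid_check_digit(base_digits):
    """ISO 7064 MOD 11-2 check digit over the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid):
    """Check that the last character of a hyphenated ORCID iD matches its checksum."""
    digits = orcid.replace("-", "")
    return orcid_check_digit(digits[:15]) == digits[15]

print(is_valid_orcid("0000-0001-8464-3334"))  # True
```

A mistyped digit anywhere in the iD will (almost always) cause the checksum to mismatch, which is one reason the 16-character format is safer to circulate than a bare name.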

You can set up your own ORCID iD using the Register for an ORCID iD website and your UW-Madison email address. If you created an ORCID iD using a different email address you can update your profile at orcid.org to add your current UW-Madison (@wisc.edu) address. The ORCID iD is tied to you, not any particular institution. You can add publications from previous jobs, and if you leave UW-Madison just update your ORCID profile with your new email address.

To learn more about ORCID, please visit the Ebling Library webpage on ORCID or contact the University of Wisconsin–Madison ORCID Ambassador, Trisha Adamus (orcid.org/0000-0001-8464-3334).
