Tool: Identity Finder

Information from DoIT’s recent news and Identity Finder pages.

What is Identity Finder?

This tool was featured recently by DoIT, but we wanted to cover it again as it’s a great tool for anyone interested in keeping their local machine secure. Identity Finder is a software tool that can help find personally identifiable or sensitive information on your local machine. Finding restricted information allows you to take steps to protect or encrypt it. (more…)

Open Science Grid (OSG) User School 2017

Information from the OSG User School page.

The Open Science Grid (OSG) User School is now accepting applications for the 2017 session!

The school runs July 17-21 at the University of Wisconsin-Madison campus. Participants “… will learn to use high-throughput computing (HTC) systems — at your own campus or using the national Open Science Grid (OSG) — to run large-scale computing applications that are at the heart of today’s cutting-edge science. Through lectures, discussions, and lots of hands-on activities with experienced OSG staff, you will learn how HTC systems work, how to run and manage lots of jobs and huge datasets to implement a scientific computing workflow, and where to turn for more information and help. Take a look at the high-level curriculum and syllabus for more details.”

Applications are due by the end of the day on Friday, April 7, 2017. The school is geared toward graduate students, but will also consider applications from advanced undergraduates, post-doctoral researchers, faculty, and staff. Accepted students will receive financial support for basic travel and local costs.

You can learn more about the school, see application details, and find contact information on the OSG User School page.

February 2017 Brown Bag: Pierce Edmiston

The Rebecca J. Holz series in Research Data Management is a monthly lecture series hosted during the spring and fall academic semesters. Research Data Services invites speakers from a variety of disciplines to talk about their research or involvement with data.

On February 15, Pierce Edmiston, a PhD candidate in the Department of Psychology at UW-Madison, gave a talk entitled “Adopting open source practices for better science”. Details on viewing the slides are listed below, following the description of the brown bag talk.

Edmiston’s talk was framed around a discussion of reproducibility and the ways that bias creeps into the research process, followed by suggestions for practices to help prevent these issues. The introduction to his talk covered his personal motivations for adopting reproducible practices, gave an overview of the literature surrounding reproducibility and the reproducibility crisis in psychological science, and explained why he thinks open source could be the answer. Edmiston then offered three open source practices to adopt: version control, dynamic documents, and what he calls “building from source”. The talk finished with examples of these practices from his own work and further literature on openness in practice and culture. Edmiston has also provided a list of linked references for all of the literature mentioned in his presentation.
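
Edmiston’s own examples came from his research projects; purely as a hedged illustration of what a “dynamic document” means in practice, here is a minimal Python sketch (the file names and data are hypothetical, not taken from his talk) in which the statistic reported in the write-up is computed from the raw data each time the report is built, rather than copied in by hand.

```python
# A minimal "dynamic document" sketch (hypothetical file names and data).
# The reported statistic is recomputed from the raw data on every build,
# so the number in the write-up cannot drift out of sync with the analysis.
import csv
from statistics import mean

def load_scores(path="data/scores.csv"):
    """Read the numeric 'score' column from a hypothetical CSV file."""
    with open(path, newline="") as f:
        return [float(row["score"]) for row in csv.DictReader(f)]

def build_report(scores):
    """Fill a one-line prose template with values computed from the data."""
    return "Mean score across {n} participants: {m:.2f}\n".format(
        n=len(scores), m=mean(scores)
    )

if __name__ == "__main__":
    with open("report.txt", "w") as out:
        out.write(build_report(load_scores()))
```

Tools such as R Markdown and Jupyter notebooks apply the same idea at the scale of a full manuscript, and keeping a script like this under version control covers the first of the three practices as well.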

(more…)

Finding the Right Data – Love Your Data Week 2017

Information from Love Your Data Week.

Message of the day

Need to find the right data? Have a clear question and locate quality data sources.

Things to consider

In a 2004 Science Daily News article, the National Science Foundation used the phrase “here there be data” to highlight the exploratory nature of traversing the “untamed” scientific data landscape. The use of that phrase harkens back to older maps of the world, where unexplored territories bore the warning ‘here, there be [insert mythical/fantastical creatures]’ to alert explorers to the dangers of the unknown. While the research data landscape is (slightly) less foreboding, there’s still an adventurous quality to looking for research data.

(more…)

Documenting DH: Bronwen Masemann and LIS 640, Digital Humanities Analytics

Written by Laura Schmidt

Documenting DH is a project from the Digital Humanities Research Network (DHRN) consisting of a series of audio interviews with humanities scholars and students around the University of Wisconsin-Madison campus. Each interviewee is given a chance to talk about how they view data and how they manage it or teach it to others. In February, we interviewed Bronwen Masemann, an instructor in the School of Library and Information Studies, and her interview will be accessible to the public on the DHRN website starting February 24, 2017. She is currently teaching LIS 640: Digital Humanities Analytics, which provides upper-level and graduate students the “technology skills to analyze and plan data-driven projects in the humanities, social sciences and other fields.” (more…)

Good Data Examples – Love Your Data Week 2017

Information from Love Your Data Week.

Message of the day

Good data are FAIR – Findable, Accessible, Interoperable, Re-usable

Things to consider

What makes data good?

  1. It has to be readable and well enough documented for others (and a future you) to understand.
  2. Data has to be findable to keep it from being lost. Information scientists have started to call such data FAIR — Findable, Accessible, Interoperable, Re-usable. One of the most important things you can do to keep your data FAIR is to deposit it in a trusted digital repository. Do not use your personal website as your data archive.
  3. Tidy data are good data; messy data are hard to work with (see the sketch after this list).
  4. Data quality is a process that starts with planning and continues through curation of the data for deposit.
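
The Love Your Data materials don’t include code, but as a quick, hypothetical illustration of point 3, here is a pandas sketch that reshapes a “messy” wide table (one column per year) into a tidy long table with one row per observation; the variable names and values are made up for the example.

```python
# Hypothetical example: reshaping a "messy" wide table into tidy form.
# In the wide layout, values of a variable (year) are spread across columns;
# in the tidy layout, each row is one observation: (site, year, rainfall_mm).
import pandas as pd

messy = pd.DataFrame({
    "site": ["A", "B"],
    "2015": [810, 1020],
    "2016": [790, 980],
})

tidy = messy.melt(id_vars="site", var_name="year", value_name="rainfall_mm")
tidy["year"] = tidy["year"].astype(int)

print(tidy)
# Each row is now one observation:
#   site  year  rainfall_mm
#      A  2015          810
#      B  2015         1020
#      A  2016          790
#      B  2016          980
```

The tidy version is the one that documents, merges, and re-uses cleanly, which is exactly what the FAIR principles in point 2 are after.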

[PHD Comics strip; source: http://www.phdcomics.com/comics/archive.php?comicid=1612]
Remember! “Documentation is a love letter to your data”

(more…)

Documenting, Describing, Defining – Love Your Data Week 2017

Information from Love Your Data Week.

Message of the day

Good documentation tells people they can trust your data by enabling validation, replication, and reuse.

Things to consider

Why does having good documentation matter?

  • It contributes to the quality and usefulness of your research and the data itself – for yourself, colleagues, students, and others.
  • It makes the analysis and write-up stages of your project easier and less stressful.
  • It helps your teammates, colleagues, and students understand and build on your work.
  • It helps to build trust in your research by allowing others to validate your data or methods.
  • It can help you answer questions about your work during pre-publication peer review and after publication.
  • It can make it easier for others to replicate or reuse your data. When they cite the data, you get credit! Include these citations in your CV, funding proposal, or promotion and tenure package.
  • It improves the integrity of the scholarly record by providing a more complete picture of how your research was conducted. This promotes public trust and support of research!
  • Some communities and fields have been talking about documentation for decades and have well-developed documentation standards (e.g., geospatial data, clinical data), while others do not (e.g., psychology, education, engineering). No matter where your research community or field falls on this spectrum, you can start improving your documentation today! (See the sketch after this list for one lightweight approach.)
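
The Love Your Data materials leave the format of your documentation up to you; purely as a hypothetical sketch of a lightweight approach, the Python snippet below writes a minimal machine-readable codebook (data dictionary) to sit alongside an imagined rainfall_tidy.csv file. Every file name and variable here is invented for the example.

```python
# Hypothetical example: a minimal machine-readable codebook (data dictionary)
# saved next to the data file it describes, so the two travel together.
import json

codebook = {
    "dataset": "rainfall_tidy.csv",            # hypothetical file name
    "collected": "2015-2016, monthly gauges",   # brief methods note
    "variables": {
        "site":        {"type": "string",  "description": "Monitoring site ID"},
        "year":        {"type": "integer", "description": "Calendar year of observation"},
        "rainfall_mm": {"type": "number",  "description": "Total annual rainfall",
                        "units": "millimetres"},
    },
}

with open("rainfall_tidy.codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)
```

Even a small file like this answers the questions a colleague (or a future you) will ask first: what each column means, what its units are, and how the data were collected.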

(more…)