Meet Merinda Hensley, Digital Scholarship Liaison and Instruction Librarian

This post is part of our series profiling the expertise housed in the Scholarly Commons and our affiliate units in the University Library. Today we are featuring Merinda Hensley, Associate Professor and Digital Scholarship Liaison and Instruction Librarian.


What is your educational background and work experience?

I got my BA in Political Science and Environmental Policy from the University of Arizona. I always thought I would work in DC for a non-profit or for the government, but when we moved to Illinois in 1999 I decided to volunteer for AmeriCorps instead. As a volunteer, I administered a local rental assistance program. That was a really tough job, helping fill out paperwork for people who needed money to make a rent payment. I learned that while I thought I wanted to be on the front lines of social work, it was too easy for me to get attached to people’s situations. After I had my daughter, I decided to apply for a position at the Champaign Public Library. At the time I was also taking a course at the iSchool to see if librarianship was right for me. That was an easy decision! I kept my position at CPL until I was offered a graduate assistantship in the Education and Social Science Library. To round out an already very busy schedule, I was also offered a position working with the Information Literacy and Instruction Coordinator, which ended up being serendipitous because I never thought of myself as a teacher.

What led you to this field?

Since I was a child I knew I wanted to contribute to society in a way that would help make the world a better place. Until I found librarianship, that always felt cliché and too big to be real for me. As an AmeriCorps volunteer, I was reminded how energetic I feel when guiding someone through a real-world problem. I come from a family of teachers – my mom was a high school math teacher and my nana was a first grade reading teacher. Being a “traditional” teacher never resonated with me, and in fact, I’ve sworn more than once I would never be a teacher. It turns out my view of teaching was short-sighted and one-dimensional. In library school I learned about information literacy, and I immediately saw the potential in empowering students and faculty to find, use, and create information.

What is your research agenda?

I am focused on developing effective ways to teach students critical thinking skills that translate into a lifelong ability in finding, evaluating, using, sharing, and creating information. As an instruction librarian, I investigate emerging methodologies for how librarians can extend our information literacy mission into new areas, especially the factors that influence the decisions students make as creators of new knowledge. I also work with my colleagues to design best practices that assist students at all levels in understanding scholarly communication, the process through which scholarly work is created, evaluated by the academic community, disseminated through presentations and writings, and, perhaps most importantly, preserved for future use. My research contributes new discoveries to teaching and learning in librarianship, enhances how libraries support students as they come to identify as scholars, and prepares academic librarians to lead this transition.

Do you have any favorite work-related duties?

My favorite part of my job is when a student has an a-ha moment while I am teaching. Students have a hard time hiding when they are excited and that makes me extraordinarily fulfilled.

What are some of your favorite underutilized resources that you would recommend to researchers?

I think it’s really important for undergraduate students to learn about the value of institutional repositories. Ours is called IDEALS and anyone affiliated with the university can submit their research (including conference posters or PowerPoint slides!) for archiving and a permanent URL for their resume. For the past year, I’ve been working on a project that will collect undergraduate theses and capstone projects into IDEALS from across all disciplines. In addition to keeping a record of the research students have engaged in at Illinois, it also provides future students the opportunity to pick up research questions where previous research left off.

If you could recommend only one book to beginning researchers in your field, what would you recommend?

The Courage to Teach by Parker Palmer.

Interested in contacting Merinda? You can email her at mhensle1@illinois.edu, or set up a consultation request through the Scholarly Commons website.


DIY Data Science

Data science is a special blend of statistics and programming, with a focus on making complex statistical analyses more understandable and usable, typically through visualization. In 2012, the Harvard Business Review published the article “Data Scientist: The Sexiest Job of the 21st Century” (Davenport & Patil, 2012), capturing society’s perception of data science. While some of the excitement of 2012 has died down, data science continues on, with data scientists earning a median base salary over $100,000 (Noyes, 2016).

Here at the Scholarly Commons, we believe that a better understanding of statistics makes you less likely to get fooled when they are deployed improperly, and helps you understand the inner workings of data visualization and digital humanities software and techniques. We might not be able to make you a data scientist (though do let us know if this post inspires you to enroll in formal coursework), but we can share some resources that let you try before you buy and incorporate methods from this growing field into your own research.
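If you want a first taste of the “statistics” half of that blend, here is a minimal sketch using only Python’s standard library (no pandas or matplotlib required); the salary figures are made up purely for illustration:

```python
# A tiny taste of data science's statistics side, using only the
# Python standard library. The sample data below is invented.
import statistics

# Hypothetical base salaries (in thousands of dollars) for a small,
# made-up group of data scientists.
salaries = [95, 102, 110, 88, 120, 105, 99, 130]

median_salary = statistics.median(salaries)
mean_salary = statistics.mean(salaries)
spread = statistics.stdev(salaries)

print(f"median:  {median_salary}k")
print(f"mean:    {mean_salary:.1f}k")
print(f"std dev: {spread:.1f}k")
```

Real projects swap the hand-typed list for data loaded from a file or database, and the `print` calls for a chart, but the workflow (load, summarize, communicate) is the same.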

As we have discussed again and again on this blog, whether you want to improve your coding, statistics, or data visualization skills, our collection has some great reads to get you started.

In particular, take a look at:

The Human Face of Big Data created by Rick Smolan and Jennifer Erwitt

  • This is a great coffee table book of data visualizations and a great flip through if you are here in the space. You will learn a little bit more about the world around you and will be inspired with creative ways to communicate your ideas in your next project.

Data Points: Visualization That Means Something by Nathan Yau

  • Nathan Yau is best known for being the man behind Flowing Data, an extensive blog of data visualizations that also offers tutorials on how to create visualizations. In this book he explains the basics of statistics and visualization.

Storytelling with Data by Cole Nussbaumer Knaflic

LibGuides to Get You Started:

And more!

There are also a lot of resources on the web to help you:

The Open Source Data Science Masters

  • This is not an accredited master’s program but rather a curated collection of suggested free and low-cost print and online resources for learning the various skills needed to become a data scientist. The list was created and is maintained by Clare Corthell of Luminant Data Science Consulting.
  • The list suggests many MOOCs from universities across the country, some even available for free.

Dataquest

  • This is a project-based data science course created by Vik Paruchuri, a former Foreign Service Officer turned data scientist
  • It mostly consists of a beginner Python tutorial, though it is only one of many that are out there
  • Twenty-two quests and portfolio projects are available for free, though the two premium versions offer unlimited quests, more feedback, a Slack community, and opportunities for one-on-one tutoring

David Venturi’s Data Science Masters

  • A DIY data science course, which includes a resource list and, perhaps most importantly, links to reviews of data science online courses with up-to-date information. If you are interested in taking an online course or participating in a MOOC, this is a great place to get started.

Mitch Crowe’s Learn Data Science the Hard Way

  • Another curated list of data science learning resources, this time based on Zed Shaw’s Learn Code the Hard Way series. This list comes from Mitch Crowe, a Canadian data scientist.

So, is data science still sexy? Let us know what you think and what resources you have used to learn data science skills in the comments!

Works Cited:

Davenport, T. H., & Patil, D. J. (2012, October 1). Data Scientist: The Sexiest Job of the 21st Century. Retrieved June 1, 2017, from https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century
Noyes, K. (2016, January 21). Why “data scientist” is this year’s hottest job. Retrieved June 1, 2017, from http://www.pcworld.com/article/3025502/why-data-scientist-is-this-years-hottest-job.html

Review: Practical Copyright for Library and Information Professionals by Paul Pedley

Here at the Scholarly Commons, we have resources to learn about copyright. For starters, you can check out our author’s rights and copyright page. You can also contact Copyright Librarian Sara Benson with further questions. Today, I’ll be reviewing Practical Copyright for Library and Information Professionals by Paul Pedley.

This book looked like a practical read (after all, it even has the word “Practical” in the title) and turned out to be one of the more unusual finds on the Scholarly Commons shelf. It is a guide to British copyright, pre-Brexit, written by Paul “not a lawyer” Pedley of the Chartered Institute of Library and Information Professionals, the equivalent of the American Library Association across the pond. It is a fantastic resource for anyone interested in an overview of British copyright law, or in learning more about librarianship around the world.

British Copyright Basics:

“Copyright is automatic. As soon as a work is created and meets the requirements for protection (that it is original, that it is fixed in a material form, that it is by a British citizen or was first published in the UK, and that it fits into the protected categories or species) the work will automatically be protected by UK copyright law”

-(Pedley 2015, 3).

 

“Copyright protects works that can be categorized as being one of the following: literary works, dramatic works, musical works, artistic works, sound recordings, films, broadcasts”

-(Pedley 2015, 2-3).

#FeeltheBerne, or an attempt to standardize copyright law around the world

Unlike a patent, which has no global standard (though the EU is trying to create a unified patent application and court system called the Unitary Patent), copyrighted works are automatically protected under the Berne Convention: “Each of the Berne Union’s 168 member countries is required to protect works from other countries to the same level of works originating in its own country” (Pedley 2015, 4). Nevertheless, although the Berne Convention dates to the 1880s (an international treaty that the United States did not sign until nearly a hundred years later), there are still differences in copyright law, and in what you can do with copyrighted works, from country to country, though many aspects remain the same around the world.

What is important to understand about British copyright law?

According to the back cover, “The UK’s copyright legislation has been referred to as the longest, most confusing and hardest to navigate in the world.” I agree with Pedley. British copyright law is so overwhelming in part because of efforts to smooth out the variety of legal systems the UK has to juggle. To start, the UK is a common law country, while the rest of the EU tends toward civil law. There are also differing conceptions of copyright within the EU. For example, some EU countries consider certain works to be more than just property (the closest thing we have here are the special rights for the creators of paintings and other visual artworks under the Visual Artists Rights Act of 1990, which you might have heard about from the ongoing Fearless Girl controversy). All of this smoothing of legal differences was done in order to create a Single Market, which started with the European Communities and then moved to the European Union. Under British law, the order of authority on legal matters such as copyright is EU case law, then British case law, and then finally British law. As a result, this book is chock-full of interesting cases from the EU, the UK, and even the Commonwealth!

Comparing UK and US Copyright Law: some similarities and differences

  • American and British copyright law are both based around common law, which can be complex and confusing
  • Software is considered to be a literary work
  • Librarians, along with archivists and museum curators, have special rights in their role in preserving cultural heritage and making it accessible all for the greater good of society
  • The UK has “fair dealing” as opposed to the United States’ “fair use”; the two are applied in different ways (“Fair Dealing vs Fair Use,” n.d.)
  • In the UK there are more types of licensing agreements, including those for government created works, while in the US government created works are usually in the public domain
  • From my understanding, maps are considered art in the UK, with the rights that come with that — I imagine a map library is a different experience in the UK!

To learn more take a look at this book!

Disclaimer: This is a blog post and is not legal advice. Neither the author of this post nor the author of the book being reviewed is a lawyer.

Works Cited:

European Patent Office. (2017, April 10). Unitary Patent & Unified Patent Court. Retrieved June 1, 2017, from https://www.epo.org/law-practice/unitary.html
Fair Dealing vs Fair Use. (n.d.). Retrieved June 12, 2017, from https://www.uleth.ca/lib/copyright/content/fair_dealing_week/fair_dealing_vs_fair_use.asp
Kaplan, I. (2017, April 13). Fearless Girl Face-off Poses a New Question: Does the Law Protect an Artist’s Message? Retrieved June 1, 2017, from https://www.artsy.net/article/artsy-editorial-fearless-girl-face-off-poses-new-question-law-protect-artists-message
LibGuides: Brexit. (n.d.) Retrieved June 1, 2017, from http://guides.library.illinois.edu/c.php?g=558659&p=3842099
Pedley, P. (2015). Practical copyright for library and information professionals. London : Facet Publishing.
WIPO-Administered Treaties. (n.d.). Retrieved June 1, 2017, from /treaties/en/ShowResults.jsp

Twine Review

Twine is a digital storytelling platform originally created by Baltimore-based programmer Chris Klimas back in 2009. It’s also a very straightforward turn-based game creation engine, typically used for interactive fiction.

Now, you may be thinking to yourself, “I’m a serious researcher who don’t got no time for games.” Well, games are increasingly being recognized as an important part of digital pedagogy in libraries, at least according to this awesome digital pedagogy LibGuide from University of Toronto. Plus, if you’re a researcher interested in presenting your story in a nonlinear way, letting readers explore the subject at their own pace and based on what they are interested in, this could be the digital scholarship platform for you! Twine is a very easy-to-use tool, and allows you to incorporate links to videos and diagrams as well. You can also create interactive workflows and tutorials for different subjects. It’s also a lot of fun, something I don’t often say about the tools I review for this blog.

Twine is open source and free. Currently, there are three versions of Twine maintained in different repositories. There is already a lot of documentation and plenty of tutorials available for Twine, so I will not reinvent the wheel here; instead, I will show some of Twine’s features and clarify things that I found confusing. Twine 1 still exists, and certain functions are only possible there; however, we are going to focus on Twine 2, which is newer and actively updated.

Twine 2

An example of a story on Twine

What simple Twine games look like. You would click on a linked blue or purple text to go to the next page of the story.

The Desktop version is identical to the online version; however, stories are a lot less likely to be inadvertently deleted on the desktop version. If you want to work on stories offline, or often forget to archive, you may prefer this option.

Desktop version of Twine

 

Story editor in Twine 2, Desktop edition, with all your options for each passage. Yes, I named the story Desktop Version of Twine.

You start with an Untitled passage, whose title and content you can change. Depending on the version of Twine and story format you have set up, you write in a text-based markup language and connect the passages of your story using links written between double brackets, like [[link]], which automatically generate a new passage. There are also ways to hide a link’s destination. More advanced users can add logic-based elements, such as “if” statements, in order to create games.
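As a minimal sketch, a two-passage story might look like the following in Twee-style plain-text notation (the passage names and text here are made up, and the exact link syntax for hiding a destination varies by story format — the [[visible text|Target]] form shown below is one common variant):

```
:: Start
You are in the library after hours.

[[Open the door]]
[[Look around|Reading Room]]

:: Open the door
The door is locked. Perhaps there is another way out.

:: Reading Room
Rows of data visualization books line the shelves.
```

Each `:: Name` header starts a passage, and each bracketed link becomes clickable text that jumps the reader to the named passage.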

You cannot install the desktop version on the computers in Scholarly Commons, so let’s look at the browser version. Twine will give you reminders, but it’s always important to know that if you clear your browser files while working on a Twine project, you will lose your story. However, you can archive your file as an HTML document to ensure that you can continue to access it. We recommend that you archive your files often.

Here’s a quick tutorial on how to archive your stories. Step 1: Click the “Home” icon.

Twine editor with link to home menu circled

 

Click “Archive”

Arrow pointing at archive in main Twine menu

This is also where you can start or import stories.

Save Your File

Save archive file in Twine for browser

Note: You should probably move the file out of Downloads and save it somewhere more stable, such as a flash drive or cloud storage.

When you are ready to start writing again, you can import your story file, which will have been saved as an HTML document. Also, keep in mind that if you’re using a public or shared computer, the browser version of Twine stores stories in the browser, so they will be accessible to whoever uses that browser.

And if you’re interested in interactive fiction or text-based games, there are a lot of platforms you might want to explore in addition to Twine, such as Inform 7 (http://inform7.com/), textadventures.co.uk (https://textadventures.co.uk/), and inklewriter (http://www.inklestudios.com/inklewriter/).

Let us know in the comments your thoughts on Twine and similar platforms as well as the role of games and interactive fiction in research!


The Georgia State University Copyright Case

Georgia State University logo.

This article was written by Scholarly Communication and Publishing Graduate Assistant Treasa Bane and Copyright Librarian Sara Benson.

Introduction

The ruling in the Georgia State University copyright case will have ramifications for rights holders and library users across the United States. If libraries gain the most, they will have more guidance in making fair use decisions, at least with respect to online course reserves. But if publishers gain the most, they will have more control, and annual academic licenses from the Copyright Clearance Center (CCC) will become more important and more costly. Drawing any firm conclusion has proven difficult, however, in a case that has now been alive for nine years.

History of the Case

In April 2008, Cambridge University Press, SAGE Publications, and Oxford University Press filed suit against Georgia State University (GSU) for “pervasive, flagrant and ongoing unauthorized distribution of copyrighted materials” through the library’s e-reserve system (Smith 2014, 73). Earlier, a drafted federal court complaint letter regarding uncontrolled digital copying had been sent to about a dozen institutions, indicating that the complaint would be filed unless they contacted lawyers representing the Association of American Publishers; several institutions complied by adopting policies at the faculty senate level, but GSU did not (73). GSU said the excerpts were short and were not substitutes for textbooks, so the practice was fair use. The publishers disagreed, arguing that reproducing large numbers of readings in a systematic way was not fair use.

On May 11, 2012, Judge Evans at the District Court found copyright violations in only 5 of 99 excerpts, holding that the university’s policy was a good faith interpretation of fair use (Smith 2014, 80). Judge Evans rejected the 1976 guidelines for classroom copying and instead introduced an amount of a work that is “decidedly small” (79). Then, on August 10, 2012, Evans rejected the publishers’ proposed injunction as too severe and required them to pay GSU’s attorney fees, which came to over $2.9 million (81). The publishers were not pleased. They appealed the ruling of the District Court of Northern Georgia to the Eleventh Circuit Court of Appeals, and on October 17, 2014, the Eleventh Circuit reversed and remanded the District Court’s decision in favor of the publishers (81).

On March 31, 2016, Judge Evans reanalyzed the allegedly infringing works according to the directions of the 11th Circuit Court of Appeals and found 4 cases of infringement among 48 works, designating Georgia State the prevailing party (Smith 2014, 89). The publishers filed again in order to collect evidence about GSU’s practices, because they needed to know the most current conduct at GSU when dealing with the four infringements. This time, Evans assigned weights to the four fair use factors. Factor one, the purpose and character of the use: 25%. Factor two, the nature of the copyrighted work: 5%. Factor three, the amount or substantiality of the portion used: 30%. Factor four, the effect of the use on the potential market: 40% (2016). Evans also pointed out that in some instances there was no case for copyright infringement because the publishers could not show they held the copyright, or there was no evidence that any students had used the excerpts. Another finding was that GSU’s e-reserve service was a fair use of copyrighted material purchased by its library; it was modeled on a broad consensus of best practices among academic libraries.
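To make those weights concrete, here is a purely illustrative sketch of how they combine; the court weighed qualitative findings rather than computing numeric scores, so the +1/0/-1 scoring scheme below is entirely our own invention, not the court’s method:

```python
# Illustrative only: the 2016 factor weights (25% / 5% / 30% / 40%)
# applied to hypothetical per-factor findings, where +1 means a factor
# favors fair use, 0 is neutral, and -1 disfavors fair use.

WEIGHTS = {
    "purpose_and_character": 0.25,  # factor one
    "nature_of_work": 0.05,         # factor two
    "amount_used": 0.30,            # factor three
    "market_effect": 0.40,          # factor four
}

def weighted_tally(findings):
    """Combine per-factor findings (+1 / 0 / -1) using the weights."""
    return sum(WEIGHTS[factor] * value for factor, value in findings.items())

# Hypothetical excerpt: factors one and two favor fair use, factors
# three and four disfavor it.
example = {
    "purpose_and_character": +1,
    "nature_of_work": +1,
    "amount_used": -1,
    "market_effect": -1,
}
print(weighted_tally(example))
```

Note how heavily factors three and four dominate under this weighting: together they carry 70% of the weight, which matches the later analysis giving factor four additional weight and factor two very little.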

But the fight continued! On August 26, 2016, the plaintiffs filed a Notice of Appeal, which was granted. Because of this, the Court of Appeals must return to the fair use analysis for the 48 infringement claims. John Challice, Vice President and Publisher for Higher Education at Oxford University Press, was quoted in “Georgia State is Going Head to Head with the Country’s Top Publishers” summarizing the desires of publishers:

We want Georgia State University (and any university that seeks to emulate Georgia State University) to change their checklist to something reasonable and legal. … We want to make it really clear to our marketplace, which are academic institutions in the US in this case, that there is no difference between copyrighted content made available in digital format or that made available printed on paper when it comes to licensing it.

More recent analysis has given factor four additional weight and factor two less weight. In instances where permissions were available and not paid, factor four strongly disfavored fair use. In cases when factors one and two favored GSU and three and four favored the publishers, a tie was created, and the court then considered the evidence of damage to the market. As a result, an overwhelming number of the cases found factor two to be neutral or in disfavor of fair use. Factor three and four were also disfavored several times. At least 4 excerpts did not favor fair use overall; however, at least 19 did favor fair use, the majority of which favored factor one, then factor four, and then factor three (2016).

Critical Points and Predictions

In order to stay relevant and maintain the same monetary expectations they had with print materials, publishers are damaging their relationship with libraries. This leaves librarians no choice but to seek other alternatives, such as open educational resources and library publishing. But more importantly, as long as librarians practice fair use, they will not lose it. Fair use is a right.

This case, now referred to as Cambridge University Press et al. v. Patton and Cambridge University Press et al. v. Becker (naming individual academics rather than GSU as a whole), will have oral arguments before the 11th Circuit Court on July 27. As this date approaches, we should consider whether the demand for excerpts was so limited that repetitive unpaid copying would have been unlikely even if unpaid copying were a widespread practice. Additionally, we should consider whether the portion of the market captured by unpaid use was so small that it would not have affected the author’s or publisher’s decision to produce the work. Proving these points would strengthen the case under fair use factor four and would therefore favor GSU’s academics and librarians, which would be a win for all educational institutions.

Sources

Cambridge University Press et al. v. Becker, Civil Action No. 1:08-CV-1425-ODE (U.S. Dist., March 2016).

“Georgia State is Going Head to Head with the Country’s Top Publishers.” The Signal. September 7, 2016. http://georgiastatesignal.com/georgia-state-going-head-head-countrys-top-publishers/

Smith, Kevin L. “Cambridge University Press v. Georgia State University: The 11th Circuit Ruling.” National Association of College and University Attorneys, October 2014: 87-91. Redacted from the Scholarly Communications @ Duke Blog.

Smith, Kevin L. “Georgia State University Copyright Lawsuit.” National Association of College and University Attorneys: 73-85.


The Scholarly Commons Has a New Website!

The Scholarly Commons website banner.

After months of work, we are excited to present our new website: www.library.illinois.edu/sc! We hope our new website is easy to navigate and useful to students and researchers at the University of Illinois at Urbana-Champaign and beyond. We would like to invite you to take a look around the website and to let us know if you have any issues, comments, questions or concerns!


What To Do When OCR Software Doesn’t Seem To Be Working

Optical character recognition can enhance your research!

While optical character recognition (OCR) is a powerful tool, it’s not a perfect one. Inputting a document into OCR software doesn’t guarantee useful output 100% of the time. Though most documents come out without a hitch, we have a few tips on what to do if your document just isn’t coming out right.

Scanning Issues

The problem may be less with your program and more with your initial scan. Low-quality scans are less likely to be read by OCR software. Here are a few considerations to keep in mind when scanning a document you will be using OCR on:

  • Make sure your document is scanned at 300 DPI
  • Keep your brightness level at 50%
  • Try to keep your scan as straight as possible

If you’re working with a document that you cannot rescan, there’s still hope! OCR engines with a GUI tend to have photo editing tools built in. If your OCR software doesn’t have those tools, or if the provided tools aren’t cutting it, try using a photo manipulation tool such as Photoshop or GIMP to edit your document. Also, remember that OCR software tends to be less effective on photographs than on scans.
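If you are curious what one of those photo edits actually does, here is a toy sketch of contrast stretching, a common pre-OCR fix, in pure Python. Real tools like Photoshop, GIMP, or the Pillow library operate on actual image files; this illustration just treats a grayscale image as a list of 0-255 pixel values, and the function name is our own:

```python
# Toy illustration of contrast stretching: linearly rescale pixel
# values so the darkest pixel becomes 0 and the brightest becomes 255.
# A washed-out scan has values bunched in a narrow mid-gray band,
# which makes letterforms harder for OCR to separate from background.

def stretch_contrast(pixels):
    """Rescale grayscale values (0-255) to span the full range."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:  # a flat image: nothing to stretch
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# Five pixels from a hypothetical washed-out scan.
washed_out = [110, 120, 130, 125, 115]
print(stretch_contrast(washed_out))  # darkest maps to 0, brightest to 255
```

The same idea, applied across a whole page image, is what the “contrast” slider in a photo editor is doing for you.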

Textual Issues

The issues you’re having may not stem from the scanning, but from the text itself. These issues can be more difficult to solve, because you cannot change the content of the original document, but they’re still good tips to know, especially when diagnosing issues with OCR.

  • Make sure that your document is in a language, and from a period that your OCR software recognizes; not all engines are trained to recognize all languages
  • Low contrast in documents can reduce OCR accuracy; contrast can be adjusted in a photo manipulation tool
  • Text created prior to 1850 or with a typewriter can be more difficult for OCR software to read
  • OCR software cannot read handwriting; while we’d all like to digitize our handwritten notes, OCR software just isn’t there yet

Working with Digital Files

Digital files can, in many ways, be more complicated to run OCR software on, simply because someone else may have made the file. This may mean that the file is lower quality to begin with, or that whoever scanned it made errors. Most likely, you will run into scenarios that are easy fixes with photo manipulation tools. But there will be times when the images you come across just won’t work. It’s frustrating, but you’re not alone. Check out your options!

Always Remember that OCR is Imperfect

Even with perfect documents that you think will yield perfect results, there will be a certain percentage of mistakes. Most OCR software packages have a per-character accuracy rate between 97% and 99%. While that may not sound like many errors, on a page with 1,800 characters it works out to between 18 and 54 errors, and in a 300-page book with 1,800 characters per page, between 5,400 and 16,200. So always be diligent and clean up your OCR!
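That back-of-the-envelope math is easy to script yourself. A quick sanity check, assuming the 97-99% figure is per-character accuracy and using the same page and book sizes as above:

```python
# Estimate how many raw OCR errors to expect at a given per-character
# accuracy. Figures mirror the example in the text: 1,800 characters
# per page and a 300-page book at 97-99% accuracy.

def expected_errors(chars, accuracy):
    """Expected number of misread characters at the given accuracy."""
    return round(chars * (1 - accuracy))

chars_per_page = 1800
pages = 300

for accuracy in (0.99, 0.97):
    per_page = expected_errors(chars_per_page, accuracy)
    per_book = expected_errors(chars_per_page * pages, accuracy)
    print(f"{accuracy:.0%} accuracy: ~{per_page} errors/page, "
          f"~{per_book} errors in the book")
```

Running the same function against your own page counts and character estimates gives you a rough sense of how much cleanup to budget for.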

The Scholarly Commons

Here at the Scholarly Commons, we have Adobe Acrobat Pro installed on every computer, and ABBYY FineReader installed on several. We can also help you set up Tesseract on your own computer. If you would like to learn more about OCR, check out our LibGuide and keep your eye open for our next Making Scanned Text Machine Readable through Optical Character Recognition Savvy Researcher workshop!


Finding Digital Humanities Tools in 2017

Here at the Scholarly Commons we want to make sure our patrons know what options are out there for conducting and presenting their research. The digital humanities are becoming increasingly accepted and expected. In fact, you can even play an online game about creating a digital humanities center at a university. After a year of exploring a variety of digital humanities tools, one theme has emerged throughout: taking advantage of the capabilities of new technology to truly revolutionize scholarly communications is actually a really hard thing to do. Please don’t lose sight of this.

Finding digital humanities tools can be quite challenging. To start, many of your options will be open source tools that require a server and IT skills to run ($500+ per machine, or a cloud setup with slightly less or comparable cost over the long term). Even when tools aren’t expensive, be prepared to find yourself in the command line or having to write code, even when a tool is advertised as beginner-friendly.

Mukurtu Help Page Screen Shot

I think this has been taken down because even they aren’t kidding themselves anymore.

There is also the issue of maintenance. While free and open source projects are where young computer nerds go to make a name for themselves, not every project is going to have the paid staff or the organized, dedicated community to keep it maintained over the years. What’s more, many digital humanities tool-building projects are initiatives from humanists who don’t fully know what’s possible or what they are doing, with wildly vacillating amounts of grant money available at any given time. This is exacerbated by rapid technological change, and by the fact that many projects were created without sustainability or digital preservation in mind from the get-go. And finally, for digital humanists, failure is not considered a rite of passage to the extent it is in Silicon Valley, which is part of why you sometimes find projects that no longer work still listed as viable resources.

Finding Digital Humanities Tools Part 1: DiRT and TAPoR

Yes, we have talked about DiRT here on Commons Knowledge. Although the Digital Research Tools directory is an extensive resource full of useful reviews, over time it has increasingly become a graveyard of failed digital humanities projects (and sometimes randomly switches to Spanish). The DiRT directory itself comes from Project Bamboo, “… a humanities cyberinfrastructure initiative funded by the Andrew W. Mellon Foundation between 2008 and 2012, in order to enhance arts and humanities research through the development of infrastructure and support for shared technology services” (Dombrowski, 2014). If you are confused about what that means, it’s okay, a lot of people were too, which led to many problems.

TAPoR 3, the Text Analysis Portal for Research, is DiRT’s Canadian counterpart; despite keeping text analysis in its name, it also contains reviews of a wide variety of digital humanities tools. Like DiRT, it lists some outdated resources.

Part 2: Data Journalism, digital versions of your favorite disciplines, digital pedagogy, and other related fields.

A lot of data journalism tools cross over with digital humanities; in fact, there are even joint Digital Humanities and Data Journalism conferences! You may have even noticed that the Knight Foundation is to data journalism what the Mellon Foundation is to digital humanities. Journalism Tools (and its list version on Medium) from the Tow-Knight Center for Entrepreneurial Journalism at the CUNY Graduate School of Journalism, and the Resources page from Data Driven Journalism, an initiative of the European Journalism Centre partially funded by the Dutch government, are both good places to look. As with DiRT and TAPoR, they have similar issues with staying up to date, and data journalism resources tend to list more proprietary tools.

Also, be sure to check out resources for “digital” + [insert humanities/social science discipline], such as digital archeology and digital history. And of course, another subset of digital humanities is digital pedagogy, which focuses on using technology to augment the educational experiences of both K-12 and university students. A lot of tools and techniques developed for digital pedagogy can also be used outside the classroom for research and presentation purposes. Even digital science resources can offer a lot of useful tools, if you are willing to scroll past the occasional plasmid-sharing platform. Just remember to be creative and try to think of other disciplines that are tackling issues similar to yours in their research!

Part 3: There is a lot of out-of-date advice out there.

There are librarians who write overviews of digital humanities tools and don’t bother to test whether the tools still work or are still being updated. I am very aware of how hard these things are to use and how quickly they change, and I’m not at all talking about the people who couldn’t keep their websites and curated lists updated. Rather, I’m talking about how “Top Tools for Digital Humanities Research” in the January/February 2017 issue of Computers in Libraries mentions Sophie, an interactive eBook creator (Herther, 2017). However, Sophie has not been updated since 2011, and the link for the fully open source version goes to “Watch King Kong 2 for Free.”

Screenshot of announcement for 2010 Sophie workshop at Scholarly Commons

Looks like we all missed the Scholarly Commons Sophie workshop by only 7 years.

The fact that no one caught that error shows either how slowly magazines edit or that no one else bothered to check. If no one seems to have created any projects with a piece of software in the past three years, it’s probably best to assume the project is no longer active; still, the best route is always to check for yourself.

Long term solutions:

Save your work in multiple formats for long-term storage, and take your data management and digital preservation seriously. We have resources that can help you find the best options for saving your research.

If you are serious about digital humanities, you should really consider learning to code. We have a lot of resources for teaching yourself these skills here at the Scholarly Commons, as well as a wide range of workshops during the school year. As far as coding languages go, HTML/CSS, JavaScript, and Python are probably the most widely used in the digital humanities, and the most helpful to learn. Depending on how much time you put in, learning to code can help you troubleshoot and customize your tools, as well as allow you to contribute to and help maintain the open source projects you care about.
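To give a flavor of what even a few lines of Python buy you, here is a minimal sketch of a classic first text-analysis task: counting the most frequent words in a chunk of text. (The sample string and the file name in the comment are stand-ins for illustration, not part of any specific project.)

```python
from collections import Counter
import re

def top_words(text, n=10):
    """Return the n most common words in a string, ignoring case."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

# For a real project you would read a file instead, e.g.:
# text = open("corpus.txt", encoding="utf-8").read()
sample = "the owls are not what they seem and the owls know it"
print(top_words(sample, 3))  # [('the', 2), ('owls', 2), ('are', 1)]
```

Scale the same pattern up and you have the skeleton of many text-mining workflows; swap in a proper tokenizer from a library such as NLTK once you outgrow the regular expression.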

Works Cited:

100 tools for investigative journalists. (2016). Retrieved May 18, 2017, from https://medium.com/@Journalism2ls/75-tools-for-investigative-journalists-7df8b151db35

Center for Digital Scholarship Portal Mukurtu CMS.  (2017). Support. Retrieved May 11, 2017 from http://support.mukurtu.org/?b_id=633

DiRT Directory. (2015). Retrieved May 18, 2017 from http://dirtdirectory.org/

Digital tools for researchers. (2012, November 18). Retrieved May 31, 2017, from http://connectedresearchers.com/online-tools-for-researchers/

Dombrowski, Q. (2014). What Ever Happened to Project Bamboo? Literary and Linguistic Computing. https://doi.org/10.1093/llc/fqu026

Herther, N.K. (2017). Top Tools for Digital Humanities Research. Retrieved May 18, 2017, from http://www.infotoday.com/cilmag/jan17/Herther–Top-Tools-for-Digital-Humanities-Research.shtml

Journalism Tools. (2016). Retrieved May 18, 2017 from http://journalismtools.io/

Lord, G., Nieves, A.D., and Simons, J. (2015). dhQuest. http://dhquest.com/

Resources Data Driven Journalism. (2017). Retrieved May 18, 2017, from http://datadrivenjournalism.net/resources

TAPoR 3. (2015). Retrieved May 18, 2017, from http://tapor.ca/home

Visel, D. (2010). Upcoming Sophie Workshops. Retrieved May 18, 2017, from http://sophie2.org/trac/blog/upcomingsophieworkshops


Neatline 101: Getting Started

Here at Commons Knowledge we love easy-to-use interactive map creation software! We’ve compared and contrasted different tools, and talked about StoryMap JS and Shanti Interactive. The Scholarly Commons is a great place to get help on GIS projects, from ArcGIS StoryMaps and beyond. But if you want something where you can have both a map and a timeline, and if you are willing to spend money on your own server, definitely consider using Neatline.

Neatline is a plugin created by the Scholars’ Lab at the University of Virginia that lets you create interactive maps and timelines in Omeka exhibits. My personal favorite example is the demo site by Paul Mawyer, “‘I am it and it is I’: Lovecraft in Providence,” with map tiles from Stamen Design under a CC BY 3.0 license.

Screenshot of Lovecraft Neatline exhibit

*As far as the location of Lovecraft’s most famous creation, let’s just say “Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn.”

Now, one caveat: Neatline requires a server. I used Reclaim Hosting, which is straightforward and which I have also used for Scalar and Mukurtu. The cheapest plan available on Reclaim Hosting was $32 a year. Once I signed up for hosting and a domain name, I took advantage of a nice feature of Reclaim Hosting: one-click installation of the Omeka.org content management system (CMS). Omeka is a popular CMS choice for digital humanities users; other popular content management systems include WordPress and Scalar.

One click install of Omeka through Reclaim Hosting

BUT WAIT, WHAT ABOUT OMEKA THROUGH SCHOLARLY COMMONS?

Here at the Scholarly Commons we can set up an Omeka.net site for you. You can find more information on setting up an Omeka.net site through the Scholarly Commons here. This is a great option for people who want to create a regular Omeka exhibit. However, Neatline is only available as a plugin for Omeka.org, which you need a server to host; as far as I know, there is currently no Neatline plugin for Omeka.net, and I don’t think that will change anytime soon. (You don’t have to use Reclaim, either; you can install Omeka.org on any LAMP server.) And some side advice from your very forgetful blogger: write down the username and password you create when you set up your Omeka site. That will save you a lot of trouble later, especially considering how many accounts you end up with when you use a server to host a site.

Okay, I’m still interested, but what do I do once I have Omeka.org installed? 

So back to the demo. I used the instructions on the Neatline documentation page, which were good at defining a lot of the terms but not so good at explaining exactly what to do. I am focusing on the original Neatline plugin here, but there are other Neatline plugins, like NeatlineText, depending on your needs; all of them are installed in a similar way. You can follow the official instructions at Installing Neatline.

But I have also provided some steps of my own, because the official instructions just didn’t do it for me.

So first off, download the Neatline zip file.

Go to your control panel (cPanel on Reclaim Hosting) and click on “File Manager.”

File Manager circled on Reclaim Hosting

Sorry this looks so goofy, Windows snipping tool free form is only for those with a steady hand.

Navigate to the Plugins folder.

arrow points at plugins folder in file manager

Double click to open the folder. Click Upload Files.

more arrows pointing at tiny upload option in Plugins folder

If you’re using Reclaim Hosting, IGNORE THE INSTRUCTIONS: DO NOT UNZIP THE ZIP FILE ON YOUR COMPUTER. JUST PLOP THAT PUPPY RIGHT INTO YOUR PLUGINS FOLDER.

Upload the entire zip file

Plop it in!

Go back to the Plugins folder. Right click the Neatline zip file and click extract. Save extracted files in Plugins.
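(If your host’s file manager lacks an extract option, one workaround is to do the extraction locally with a short Python script and then upload the resulting folder over SFTP instead. This is just a sketch; the zip file name in the usage comment is a placeholder for whatever Neatline release you downloaded, not an official path.)

```python
import zipfile
from pathlib import Path

def extract_plugin(zip_path, plugins_dir="."):
    """Extract a plugin zip into plugins_dir, preserving the zip's
    top-level folder (the layout Omeka expects inside plugins/)."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(plugins_dir)
        # return the distinct top-level entries, for a quick sanity check
        return sorted({Path(name).parts[0] for name in zf.namelist()})

# Usage (file name is a placeholder):
# extract_plugin("Neatline-2.5.zip", "plugins")
```

Whichever route you take, what matters is that the plugins folder ends up containing a single folder named Neatline, not a loose pile of files.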

Extract Neatline files in File Manager

Sign into your Omeka site at [yourdomainname].[com/name/whatever]/admin if you aren’t already.

Omeka dashboard with arrows pointing at Plugins

Install Neatline for real.

Omeka Plugins page

Still confused or having trouble with setup?

Check out these tutorials as well!

OpenStreetMap is great and all, but what if I want to create a fancy historical map?

To create historical maps on Neatline you have two options, only one of which is included in the actual documentation for Neatline.

Officially, you are supposed to use GeoServer, an open source server application built in Java. Even if you have your own server, GeoServer has a lot more dependencies than what’s required to run Omeka and Neatline.

If you want one-click Neatline installation with GeoServer and have money to spend, you might want to check out AcuGIS Neatline Cloud Hosting, which is recommended in the Neatline documentation; its lowest-cost plan starts at $250 a year.

Unofficially, there is a tutorial for this on Lincoln Mullen’s blog “The Backward Glance,” specifically his 2015 post “How to Use Neatline with Map Warper Instead of Geoserver.”

Let us know about the ways you incorporate geospatial data in your research!  And stay tuned for Neatline 102: Creating a simple exhibit!

Works Cited:

Extending Omeka with Plugins. (2016, July 5). Retrieved May 23, 2017, from http://history2016.doingdh.org/week-1-wednesday/extending-omeka-with-plugins/

Installing Neatline Neatline Documentation. (n.d.). Retrieved May 23, 2017, from http://docs.neatline.org/installing-neatline.html

Mawyer, Paul. (n.d.). “I am it and it is I”: Lovecraft in Providence. Retrieved May 23, 2017, from http://lovecraft.neatline.org/neatline-exhibits/show/lovecraft-in-providence/fullscreen

Mullen, Lincoln. (2015).  “How to Use Neatline with Map Warper Instead of Geoserver.” Retrieved May 23, 2017 from http://lincolnmullen.com/blog/how-to-use-neatline-with-map-warper-instead-of-geoserver/

Uploading Plugins to Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/uploading-plugins-to-omeka/195

Working with Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/working-with-omeka/194


Adventures at the Spring 2017 Library Hackathon

This year I participated in an event called HackCulture: A Hackathon for the Humanities, which was organized by the University Library. This interdisciplinary hackathon brought together participants and judges from a variety of fields.

This event is different from your average campus hackathon. For one, it’s about expanding humanities knowledge. In this event, teams of undergraduate and graduate students, typically affiliated with the iSchool in some way, spend a few weeks working on data-driven projects related to humanities research topics. This year, in celebration of the sesquicentennial of the University of Illinois at Urbana-Champaign, we looked at data about a variety of facets of university life provided by the University Archives.

This was a good experience. We got firsthand experience working with data, though my teammates and I struggled with OpenRefine, so we ended up coding the data by hand. I now know way too much about the majors that are available at UIUC and how many of them have only come into existence in the last thirty years. It is always cool to see how much has changed and how much has stayed the same.
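For what it’s worth, the kind of cleanup we did by hand, collapsing historical major names into one canonical label, is also very scriptable. Here is a hedged sketch; the names and the mapping are made up for illustration, not the actual archives data:

```python
# A sketch of hand-coding as a script: mapping variant major names
# from different decades onto one canonical label. The mapping and
# the sample data below are invented for illustration.
CANONICAL = {
    "home economics": "Human Development and Family Studies",
    "library science": "Information Sciences",
}

def normalize_major(name):
    """Map a historical major name to its canonical label, if known."""
    key = name.strip().lower()
    return CANONICAL.get(key, name.strip())

majors = ["Home Economics", "Library Science ", "Physics"]
print([normalize_major(m) for m in majors])
# ['Human Development and Family Studies', 'Information Sciences', 'Physics']
```

In OpenRefine terms this is just a cluster-and-merge pass, but writing it down as code makes the coding decisions reproducible.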

The other big challenge was that not everyone on the team had experience with design, and trying to convince folks not to fall into certain design traps was tricky.

For an idea of how our group functioned, I’ve outlined how we were feeling at the various checkpoints in the process.

Opening:

We had grand plans and great dreams and all kinds of data to work with. How young and naive we were.

Midpoint Check:

Laura was working on the Python script and sent a well-timed email about what was and wasn’t possible to get done in the time we were given. I find public speaking challenging so that was not my favorite workshop. I would say it went alright.

Final:

We prevailed and presented something that worked in public. Laura wrote a great Python script and cleaned up a lot of the data. You can even find it here. One day in the near future it will be in IDEALS as well where you can already check out projects from our fellow humanities hackers.

Key takeaways:

  • Choose your teammates wisely; try to pick a team of folks you’ve worked with in advance. Working with a mix of new and not-so-new people in a short time frame is hard.
  • Talk to your potential client base! This was definitely something we should have done more of.
  • Go to workshops and ask for help. I wish we had asked for more help.
  • Practicing your presentation in advance, as well as usability testing, is key. Yes, using the actual Usability Lab at the Scholarly Commons is ideal, but at the very least take time to make sure the instructions for using what you created are accurate. It’s amazing what steps you will leave out once you have used an app more than twice. Similarly, make sure your program can run alongside another program at the same time; if it can’t, chances are it will crash someone’s browser when they use it.

Overall, if you get a chance to participate in a library hackathon, go for it; it’s a great way to do a cool project and get more experience working with data!
