An Introduction to Traditional Knowledge Labels and Licenses

NOTE: While we are discussing matters relating to the law, this post is not meant as legal advice.

Overview

Fans of Mukurtu CMS, a digital heritage platform, as well as intellectual property nerds may already be familiar with Traditional Knowledge labels and licenses, but for everyone else here’s a quick introduction. Traditional Knowledge labels and licenses were specifically created for researchers and artists working with, or thinking of digitizing, materials created by indigenous groups. Although created more for educational than legal value, these labels aim to allow indigenous groups to take back some control over their cultural heritage and to educate users about how to incorporate these digital heritage items in a more just and culturally sensitive way. The content that TK licenses and labels cover extends beyond digitized visual arts and design to recorded, written, and oral histories and stories. TK licenses and labels are also a standard to consider when working with any cultural heritage created by marginalized communities, and they provide an interesting way to recognize ownership and the proper use of work that is in the public domain. These labels and licenses are administered by Local Contexts, an organization directed by Jane Anderson, a professor at New York University, and Kim Christen, a professor at Washington State University. Local Contexts is dedicated to helping Native Americans and other indigenous groups gain recognition for, and control over, the way their intellectual property is used. The organization has received funding from sources including the National Endowment for the Humanities and the World Intellectual Property Organization.

Traditional Knowledge, or TK, labels and licenses are a way to incorporate protocols for cultural practices into your humanities data management and presentation strategies. This is especially relevant because indigenous cultural heritage items are traditionally viewed by Western intellectual property laws as part of the public domain. And, of course, there is a long and troubling history of dehumanizing treatment of Native Americans by American institutions, as well as a lack of formal recognition of their cultural practices, which is only starting to be addressed. Things have been slowly improving; for example, the Native American Graves Protection and Repatriation Act of 1990 was created specifically to address institutions, such as museums, that owned and displayed human remains and related funerary objects without the permission of the deceased’s communities or surviving relatives (McManamon, 2000). The World Intellectual Property Organization’s Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) has begun to address and open up conversations about these issues in hopes of coming up with a more consistent legal framework for countries to work with; though, confusingly, most of what Traditional Knowledge labels and licenses apply to is considered “Traditional Cultural Expressions” by WIPO (“Frequently Asked Questions,” n.d.).

To see these labels and licenses in action, take a look at how they are used in the Mira Canning Stock Route Project Archive from Australia (“Mira Canning Stock Route Project Archive,” n.d.).

The main difference between TK labels and licenses is that TK labels are an educational tool for suggested use with indigenous materials, whether or not they are legally owned by an indigenous community, while TK licenses are similar to Creative Commons licenses — though less recognized — and serve as a customizable supplement to traditional copyright law for materials owned by indigenous communities (“Does labeling change anything legally?,” n.d.).

The default types of TK licenses are: TK Education, TK Commercial, TK Attribution, TK Noncommercial.

Four proposed TK licenses

TK Licenses so far (“TK Licenses,” n.d.)

Each license and label, along with a detailed description of it, can be found on the Local Contexts site, and information about each label is available in English, French, and Spanish.

The types of TK labels are: TK Family, TK Seasonal, TK Outreach, TK Verified, TK Attribution, TK Community Use Only, TK Secret/Sacred, TK Women General, TK Women Restricted, TK Men General, TK Men Restricted, TK Noncommercial, TK Commercial, TK Community Voice, TK Culturally Sensitive (“Traditional Knowledge (TK) Labels,” n.d.).

Example:

TK Women Restricted (TK WR) Label

A TK Women Restricted Label.

“This material has specific gender restrictions on access. It is regarded as important secret and/or ceremonial material that has community-based laws in relation to who can access it. Given its nature it is only to be accessed and used by authorized [and initiated] women in the community. If you are an external third party user and you have accessed this material, you are requested to not download, copy, remix or otherwise circulate this material to others. This material is not freely available within the community and it therefore should not be considered freely available outside the community. This label asks you to think about whether you should be using this material and to respect different cultural values and expectations about circulation and use.” (“TK Women Restricted (TK WR),” n.d.)

Wait, so is this a case where a publicly-funded institution is allowed to restrict content from certain users by gender and other protected categories?

The short answer is that this is not what these labels and licenses are used for. Local Contexts, Mukurtu, and many of the projects and universities associated with the Traditional Knowledge labels and licensing movement are publicly funded. From what I’ve seen, the restrictions are optional, especially for those outside the community (“Does labeling change anything legally?,” n.d.). They are more a way to point out that something is meant only for members of a certain gender, or for viewing during a certain time of year, than a way to actually restrict access. In other words, the gender-based labels, for example, are meant to encourage the kind of self-censorship in viewing materials that is often found in archival spaces. That being said, some universities have what is called a Memorandum of Understanding with an indigenous community, in which the university agrees to respect that community’s culture. The extent to which this covers digitized cultural heritage held in university archives, for example, is unclear, and most Memoranda of Understanding are not legally binding (“What is a Memorandum of Understanding or Memorandum of Agreement?,” n.d.). Overall, this raises lots of interesting questions about balancing conflicting views of intellectual property, access, and the public domain.

Works Cited:

Does labeling change anything legally? (n.d.). Retrieved August 3, 2017, from http://www.localcontexts.org/project/does-labeling-change-anything-legally/
Frequently Asked Questions. (n.d.). Retrieved August 3, 2017, from http://www.wipo.int/tk/en/resources/faqs.html
McManamon, F. P. (2000). NPS Archeology Program: The Native American Graves Protection and Repatriation Act (NAGPRA). In L. Ellis (Ed.), Archaeological Method and Theory: An Encyclopedia. New York and London: Garland Publishing Co. Retrieved from https://www.nps.gov/archeology/tools/laws/nagpra.htm
Mira Canning Stock Route Project Archive. (n.d.). Retrieved August 3, 2017, from http://mira.canningstockrouteproject.com/
TK Licenses. (n.d.). Retrieved August 3, 2017, from http://www.localcontexts.org/tk-licenses/
TK Women Restricted (TK WR). (n.d.). Retrieved August 3, 2017, from http://www.localcontexts.org/tk/wr/1.0
What is a Memorandum of Understanding or Memorandum of Agreement? (n.d.). Retrieved August 3, 2017, from http://www.localcontexts.org/project/what-is-a-memorandum-of-understandingagreement/

Further Reading:

Christen, K., Merrill, A., & Wynne, M. (2017). A Community of Relations: Mukurtu Hubs and Spokes. D-Lib Magazine, 23(5/6). https://doi.org/10.1045/may2017-christen
Educational Resources. (n.d.). Retrieved August 3, 2017, from http://www.localcontexts.org/educational-resources/
Lord, P. (n.d.). Unrepatriatable: Native American Intellectual Property and Museum Digital Publication. Retrieved from http://www.academia.edu/7770593/Unrepatriatable_Native_American_Intellectual_Property_and_Museum_Digital_Publication
Project Description. (n.d.). Retrieved August 3, 2017, from http://www.sfu.ca/ipinch/about/project-description/

Acknowledgements:

Thank you to the Rare Book and Manuscript Library and Melissa Salrin in the iSchool for helping me with my questions about indigenous and religious materials in archives and special collections at public institutions; you are the best!

If Creative Commons Licenses Were Cookies

A plate of cookies (not licenses). This image, however, is licensed under CC-0, and is part of the public domain.

NOTE: This post is not meant as legal advice, but as a humorous piece.

Creative Commons is a licensing scheme set up to supplement copyright, helping creators allow others to use their work while keeping more control over the ways that work is used. These licenses have become increasingly recognized in courts around the world, and yes, people have gotten sued for not following the terms of CC licenses. Cookies, known to the rest of the English-speaking world as biscuits, are delicious sugary circular wonderfulness. But what could the two have in common? More than you may think.

CC0 Public Domain:

A brigadeiro is technically a cookie because it’s round and sweet; however, it really belongs to the broader category of desserts, much like how declaring something public domain is less a licensing statement than a waiver of the rights guaranteed under copyright law.

CC BY:

When content is under a CC BY license, you can build whatever you want out of it, much like gingerbread. This could include men, houses, reindeer, or whatever, but you still recognize your creation as gingerbread.

CC BY-SA:

Anzac biscuits are a defining dessert in Australian cuisine, baked to celebrate Anzac Day or Australian heritage. You can add your own local twist to this favorite, like frosting, much as you can remix CC BY-SA content; but your new creation has to be licensed the same way, just as you wouldn’t make “Anzac Day” cookies for the Fourth of July.

CC BY-ND:

Like the famous, or perhaps infamous, Berger Cookies of Baltimore, MD, this license will let you share the content and even sell it, but the creator wants the content to stay the same no matter what. Some people say trans fats are dangerous, but Berger Cookies says they are absolutely necessary and will fight you if you say they should change their recipe.

CC BY-NC:

Similar to Speculoos, traditional cookies standardized in shape and flavor that nonetheless spawned a popular American cookie spread also called Speculoos, CC BY-NC content can’t be used commercially, but derivatives can be different and licensed differently from the original as long as they stay noncommercial.

CC BY-NC-ND:

Girl Scout Cookies have been around for exactly 100 years. The most restrictive type of CC license can, of course, be compared to the most restrictive type of cookie. The Girl Scouts retain a lot of control over their cookies: who can make them, who can sell them, what time of year they are sold, to the point where the recipes remain hidden, though they are presumably not made with real Girl Scouts.

Don’t forget to check out the CC licensing documentation to learn more and see examples that won’t make you hungry!

https://creativecommons.org/share-your-work/licensing-types-examples/licensing-examples/

More Resources:

http://guides.library.stonybrook.edu/copyright/home

What are your thoughts on Creative Commons?  What are some other cookies that remind you of Creative Commons licenses? Are brigadeiros cookies? Let us know in the comments!

Works Cited:

100 Years of Cookie History – Girl Scouts. (2017). Retrieved June 16, 2017, from http://www.girlscouts.org/en/cookies/all-about-cookies/100-years-of-cookie-history.html

Chase, D. (2017, January 25). Research & Subject Guides: Copyright, Fair Use & the Creative Commons: Home. Retrieved June 16, 2017, from http://guides.library.stonybrook.edu/copyright/home

Glyn Moody. (2016, July 13). Festival uses CC-licensed pic without attribution, pays the price. Retrieved June 16, 2017, from https://arstechnica.com/tech-policy/2016/07/creative-commons-photo-misused-lawsuit/

Gorelick, R. (2013, November 22). FDA trans-fat ban threatens Berger cookies. The Baltimore Sun. Retrieved from http://www.baltimoresun.com/entertainment/dining/baltimore-diner-blog/bs-fo-berger-cookie-trans-fat-ban-20131122-story.html

Licenses and Examples. (n.d.). Retrieved June 16, 2017, from https://creativecommons.org/share-your-work/licensing-types-examples/licensing-examples/

Lynne Olver. (2015, March 18). Food Timeline: food history research service. Retrieved June 16, 2017, from http://www.foodtimeline.org/index.html

Review: Docear

We’ve talked about Docear, the visual citation manager, on the blog before (before my time), but it’s been a while, so we’ll revisit it. The most recent major update to the software was in 2015, and based on the forums it seems that Docear has struggled with finding funding. However, the researchers behind this project are still active. That being said, in the worst-case scenario, Docear is an open source project, and if things went south you could still get your information out. If you are considering relying on this software to organize a very long-term research project, you need to use an external cloud backup service, as the My Docear service is no longer available and supported, if it ever existed at all.

Docear

Screenshot of Docear demo mindmap

Docear paper demo mindmap showing linked annotated PDF

Docear is open source mind mapping, reference, and citation management software for those who want a visual way to keep their research organized. It is available for Windows, Mac, and Linux computers. Docear provides plenty of support and useful instructions through its official user manual. The examples in the app itself for trying out the mind map and PDF capability incorporate some of the research behind the product and make for an informative, if somewhat meta, experience. Docear staff like to compare the software to Zotero and Mendeley, but it’s a very different type of beast: essentially a combination of JabRef (without the OpenOffice support) and Freeplane for mind maps, plus, depending on what type of PDF viewer you use, document annotation software. To enjoy the full capability of this software you also have to download PDF-XChange Viewer, though you can still do some annotating with other, less supported PDF editors. Docear also uses Mr. DLib, the Machine-readable Digital Library, which powers its article recommendations. While Mr. DLib has not really caught on elsewhere, it is featured as part of JabRef, where it likewise powers the article recommendation function. If they ever get their funding together, Docear could become a space where you can research, organize, and write an article. And unlike some of the software options discussed on this blog and in our LibGuides, you can download Docear from a zip file and run it to full capacity on Scholarly Commons computers.

Although Docear is not quite the all-encompassing research suite its creators envisioned, there are still lots of funky little features not found in other services. For example, in the Tools and Settings tab you can add map locations with OpenMaps (unfortunately there is no search function — you have to zoom and select your location) to add a geographic component to your otherwise mental map, which you can see later by clicking on “View Open Maps Location.”

Screenshot of Docear Open Maps features

You can also add time alerts for time management in Tools and Settings. But before we get ahead of ourselves: it’s easy to add a node with keyboard shortcuts or the node panel in the toolbar, and you can add links to websites and other nodes right in your mind map by right-clicking on a node. Apparently, you can add formulas to your mind map using LaTeX, but I didn’t try it, as I am not one of the people who cares about that sort of thing.

And while you do have the option of writing in Docear itself, there is also a plugin for MS Word, but only on Windows. On the one hand, the plugin is old, hasn’t been updated in a few years, and doesn’t work on the computers at Scholarly Commons. On the other hand, since it’s based in BibTeX, if it actually does work the way they say it does, you should be able to use it with any BibTeX bibliography, not just Docear. This means it could give you the MS Word integration that you might be lacking with another reference manager.

Overall, if you want a reference manager and document annotator that is easy to get started with, this is NOT the one for you; but for those patient enough to deal with the learning curve, Docear can be a good addition to your research strategy. I really hope this project gets the funding it needs to fully live up to its potential, but for now it’s still a solid option for researchers looking for a unique way to organize their work.

Bad Web Design with e-Portfolio Software

E-portfolios (sometimes spelled ePortfolios) and digital portfolios are websites where you can display your academic achievements and works for the world to see. These professional websites are often created with a specific career goal in mind and display examples that demonstrate how you meet the competencies of that career goal. Digital portfolios can be used to supplement a LinkedIn profile, and some graduate programs even require the creation of an e-portfolio in lieu of a master’s thesis or as a graduation requirement.

Should I make an e-portfolio with e-portfolio software?

A lot of online portfolio creation tools aimed at educators make sites that tend to look heavily templated; essentially, what you end up working with is close in appearance to a Google Sites page. Oftentimes, individuals pay for their own site if funds are not provided by their university. The University of Illinois supports use of the ePortfolio site Digication, which is free to faculty, students, and staff. That being said, default templates for e-portfolios tend to be… ugly. You may consider using these tools if your school subscribes to them, or if you want a free portfolio site for your fifth graders. Otherwise, probably not.

Issues to consider when choosing e-portfolio software include digital preservation, usability, aesthetics, and cost. You also want to consider the most important question here: am I better off using Google Sites?

Mahara

Mahara is New Zealand-based open source e-portfolio software. You need your own server to use Mahara, but you can customize the software to your liking if you know how, or if you have a very supportive IT department. For all of my server-free readers, FolioSpaces is a web application based on Mahara, but it feels a lot more like a social network for third graders. Users are unable to customize the background of their sites unless they pay up to $9.95 a year. On FolioSpaces you create “portfolios” that are actually sections where you can store different aspects of your work. FolioSpaces is an odd public space where you are likely to see posts from high school students from Michigan who really could benefit from spell check. Still, this could be a good free option for folks looking for a portfolio creation tool for their students’ classwork. However, you will probably save a lot of time and trouble, as well as have more control over privacy settings, by just using Google Sites, especially if you have Google for Education (and if you are a student at Illinois, you do).

Digication

Digication is an e-portfolio alternative. U of I students, faculty, and staff can easily create, share, and access ePortfolios for free, and can continue to access them after graduation. Digication has pretty intuitive steps for creating an ePortfolio. One great aspect is the easily editable custom URLs you can create for your portfolio. With Digication, you can either use a pre-made template (some more aesthetically pleasing than others) or customize your own theme. Because we have access to Digication through the University of Illinois, it offers some themes geared to a UIUC audience, which makes it better suited to our needs than some of the other ePortfolio options on this list.

One of the best parts of Digication is the option to allow comments and “conversations” on your ePortfolio, a great social feature that encourages interaction between you and your audience.

Like all of these options, there are pros and cons to using Digication, but it’s definitely a path to explore. For more information on Digication at the U of I, head to the ePortfolio Resources at Illinois page.

Portfolio Gen

Portfolio Gen provides free pages, as well as paid options that have more space and no “Powered by Portfolio Gen” widget on the page. Frankly, most of the themes on Portfolio Gen seem very childish and cater to an audience of younger students creating (tacky) portfolios. It is, however, the easiest-to-use e-portfolio software here, and it would be nice if they expanded their theme options to include some better suited to adults.

My default portfolio and landing page took about five minutes to make and looked like this:

Portfolio homepage with default settings

Pathbrite

In my opinion, this is probably the most promising of the builders made specifically for e-portfolios. Pathbrite is free for individual users but costs money for institutions. You can create a free, simple site with a Google account and incorporate documents like a resume/CV and a writing sample directly from your Google Drive, either from the sidebar “Add Work” tab or by dragging and dropping the icon of the type of work you want to add to your portfolio site. Although this looks similar to the Weebly drag and drop, it will give you options to upload from all sorts of places. You can arrange uploaded items by dragging and dropping them around on your page. A particularly nice feature is that you can incorporate previews of and links to websites you have created by simply clicking “Web link” and pasting in the link to the website you want to share, so you don’t have to screenshot it yourself.

add items mode in pathbrite

That being said, on the “Style and Settings” tab on the sidebar you have a very limited amount of control over the way different items are arranged on your site. You can choose between light, dark, and resume views and a couple of different ways to lay out how your work will appear, but that’s about it.

Pathbrite Style and Setting Editor

My default portfolio and page took about 15 minutes to make and this is how it turned out:

demo pathbrite portfolio

Overall, I am not a big fan of any of these options. At the end of the day, I still think you’re probably better off working with a regular CMS like WordPress or Weebly, or even the most basic of site creation tools, Google Sites. If you are an artist, photographer, or some other kind of all-around creative genius, there are website builders and e-portfolio designs that specifically cater to you and look nice; however, this post focuses on researcher/educator e-portfolios that aren’t as image-heavy.

And if you’re a UIUC faculty member you’re in luck, because soon you will be able to create an e-portfolio through Illinois Experts, where you can showcase your research and accomplishments.

UPDATE 11/14/2017: This post initially and incorrectly stated that the University of Illinois at Urbana-Champaign does not provide free access to any ePortfolio site. However, we just learned that it does! University of Illinois students, faculty, and staff can create a free ePortfolio on Digication, which they can continue to access even after they have left the school. We apologize for our mistake, and hope that this news comes as a pleasant surprise for our readers!

More resources:

And make sure to check out our two fabulous LibGuides on online scholarly presence:

DIY Data Science

Data science is a special blend of statistics and programming with a focus on making complex statistical analyses more understandable and usable, typically through visualization. In 2012, the Harvard Business Review published the article “Data Scientist: The Sexiest Job of the 21st Century” (Davenport & Patil, 2012), capturing society’s perception of data science. While some of the excitement of 2012 has died down, data science continues on, with data scientists earning a median base salary of over $100,000 (Noyes, 2016).

Here at the Scholarly Commons, we believe that a better understanding of statistics makes you less likely to get fooled when statistics are deployed improperly, and helps you understand the inner workings of data visualization and digital humanities software and techniques. We might not be able to make you a data scientist (though please do let us know if this post inspires you to enroll in formal coursework), but we can share some resources to let you try before you buy and incorporate methods from this growing field into your own research.

As we have discussed again and again on this blog, whether you want to improve your coding, statistics, or data visualization skills, our collection has some great reads to get you started.

In particular, take a look at:

The Human Face of Big Data created by Rick Smolan and Jennifer Erwitt

  • This is a great coffee table book of data visualizations and a great flip through if you are here in the space. You will learn a little bit more about the world around you and will be inspired with creative ways to communicate your ideas in your next project.

Data Points: Visualization That Means Something by Nathan Yau

  • Nathan Yau is best known for being the man behind Flowing Data, an extensive blog of data visualizations that also offers tutorials on how to create visualizations. In this book he explains the basics of statistics and visualization.

Storytelling with Data by Cole Nussbaumer Knaflic

LibGuides to Get You Started:

And more!

There are also a lot of resources on the web to help you:

The Open Source Data Science Masters

  • This is not an accredited masters program but rather a curated collection of suggested free and low-cost print and online resources for learning the various skills needed to become a data scientist. This list was created and is maintained by Clare Corthell of Luminant Data Science Consulting
  • This list does suggest many MOOCs from universities across the country, some even available for free

Dataquest

  • This is a project-based data science course created by Vik Paruchuri, a former Foreign Service Officer turned data scientist
  • It mostly consists of a beginner Python tutorial, though it is only one of many that are out there
  • Twenty-two quests and portfolio projects are available for free, though the two premium versions offer unlimited quests, more feedback, a Slack community, and opportunities for one-on-one tutoring

David Venturi’s Data Science Masters

  • A DIY data science course, which includes a resource list and, perhaps most importantly, links to reviews of data science online courses with up-to-date information. If you are interested in taking an online course or participating in a MOOC, this is a great place to get started

Mitch Crowe Learn Data Science the Hard Way

  • Another curated list of data science learning resources, this time based on Zed Shaw’s Learn Code the Hard Way series. This list comes from Mitch Crowe, a Canadian data scientist

So, is data science still sexy? Let us know what you think and what resources you have used to learn data science skills in the comments!

Works Cited:

Davenport, T. H., & Patil, D. J. (2012, October 1). Data Scientist: The Sexiest Job of the 21st Century. Retrieved June 1, 2017, from https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century
Noyes, K. (2016, January 21). Why “data scientist” is this year’s hottest job. Retrieved June 1, 2017, from http://www.pcworld.com/article/3025502/why-data-scientist-is-this-years-hottest-job.html

Review: Practical Copyright for Library and Information Professionals by Paul Pedley

Here at the Scholarly Commons, we have resources to learn about copyright. For starters, you can check out our author’s rights and copyright page. You can also contact Copyright Librarian Sara Benson with further questions. Today, I’ll be reviewing Practical Copyright for Library and Information Professionals by Paul Pedley.

This book looked like a practical read (after all, it even has the word “Practical” in the title) and turned out to be one of the more unique finds on the Scholarly Commons shelf. This is a guide to British copyright, pre-Brexit, written by Paul “not a lawyer” Pedley of the Chartered Institute of Library and Information Professionals, which is the equivalent of the American Library Association across the pond. It is a fantastic resource for anyone interested in an overview of British copyright law, or in learning more about librarianship around the world.

British Copyright Basics:

“Copyright is automatic. As soon as a work is created and meets the requirements for protection (that it is original, that it is fixed in a material form, that it is by a British citizen or was first published in the UK, and that it fits into the protected categories or species) the work will automatically be protected by UK copyright law”

-(Pedley 2015, 3).

 

“Copyright protects works that can be categorized as being one of the following: literary works, dramatic works, musical works, artistic works, sound recordings, films, broadcasts”

-(Pedley 2015, 2-3).

#FeeltheBerne or an attempt to standardize copyright law around the world.

Unlike a patent, which has no global standard (though the EU is trying to create a unified patent application and court system called the Unitary Patent), copyright is automatically recognized across borders through the Berne Convention, and “Each of the Berne Union’s 168 member countries is required to protect works from other countries to the same level of works originating in its own country” (Pedley 2015, 4). Nevertheless, although the Berne Convention originates from the 1880s (the United States did not join the treaty until nearly a hundred years later), there are still differences in copyright law and what you can do with copyrighted works in different countries, though a lot of aspects remain the same around the world.

What is important to understand about British copyright law?

According to the back cover, “The UK’s copyright legislation has been referred to as the longest, most confusing and hardest to navigate in the world.” I agree with Pedley. The reason British copyright law is so overwhelming has partly to do with efforts to smooth out the variety of legal systems that the UK has to juggle. To start, the UK is a common law country while the rest of the EU tends to be civil law. There are also differing conceptions of copyright within the EU. For example, some EU countries consider certain works as more than just property (the closest thing we have here are the special rights for creators of paintings and other visual artwork under the Visual Artists Rights Act of 1990, which you might have heard about from the ongoing Fearless Girl controversy). All of this smoothing of legal system differences was done in order to have a Single Market, which started with the European Communities and then moved to the European Union. Under British law, the order of precedence for decisions on legal matters such as copyright is EU case law, then British case law, and then finally British law. Therefore this book is chock-full of interesting cases from the EU, the UK, and even the Commonwealth!

Comparing UK and US Copyright Law: some similarities and differences

  • American and British copyright law are both based around common law, which can be complex and confusing
  • Software is considered to be a literary work
  • Librarians, along with archivists and museum curators, have special rights in their role in preserving cultural heritage and making it accessible all for the greater good of society
  • The UK has “fair dealing” as opposed to the United States’ “fair use,” and the two are applied in different ways (“Fair Dealing vs Fair Use,” n.d.).
  • In the UK there are more types of licensing agreements, including those for government created works, while in the US government created works are usually in the public domain
  • From my understanding, maps are considered art in the UK, with the rights that come with that — I imagine a map library is a different experience in the UK!

To learn more take a look at this book!

Disclaimer: This is a blog post and is not legal advice. Neither the author of this post nor the author of the book being reviewed is a lawyer.

Works Cited:

European Patent Office. (2017, April 10). Unitary Patent & Unified Patent Court. Retrieved June 1, 2017, from https://www.epo.org/law-practice/unitary.html
Fair Dealing vs Fair Use. (n.d.). Retrieved June 12, 2017, from https://www.uleth.ca/lib/copyright/content/fair_dealing_week/fair_dealing_vs_fair_use.asp
Kaplan, I. (2017, April 13). Fearless Girl Face-off Poses a New Question: Does the Law Protect an Artist’s Message? Retrieved June 1, 2017, from https://www.artsy.net/article/artsy-editorial-fearless-girl-face-off-poses-new-question-law-protect-artists-message
LibGuides: Brexit. (n.d.). Retrieved June 1, 2017, from http://guides.library.illinois.edu/c.php?g=558659&p=3842099
Pedley, P. (2015). Practical copyright for library and information professionals. London : Facet Publishing.
WIPO-Administered Treaties. (n.d.). Retrieved June 1, 2017, from /treaties/en/ShowResults.jsp

Twine Review

Twine is a digital storytelling platform originally created by Baltimore-based programmer Chris Klimas back in 2009. It’s also a very straightforward turn-based game creation engine typically used for interactive fiction.

Now, you may be thinking to yourself, “I’m a serious researcher who don’t got no time for games.” Well, games are increasingly being recognized as an important part of digital pedagogy in libraries, at least according to this awesome digital pedagogy LibGuide from the University of Toronto. Plus, if you’re a researcher interested in presenting your story in a nonlinear way, letting readers explore the subject at their own pace and based on what they are interested in, this could be the digital scholarship platform for you! Twine is a very easy-to-use tool, and it allows you to incorporate links to videos and diagrams as well. You can also create interactive workflows and tutorials for different subjects. It’s also a lot of fun, something I don’t often say about the tools I review for this blog.

Twine is open source and free. Currently, there are three versions of Twine maintained in different repositories. There is already a lot of documentation and tutorials available for Twine, so I will not reinvent the wheel, but rather show some of Twine’s features and clarify things that I found confusing. Twine 1 still exists, and there are certain functions that are only possible there; however, we are going to focus on Twine 2, which is newer and actively updated.

Twine 2

An example of a story on Twine

What simple Twine games look like. You would click on linked blue or purple text to go to the next page of the story.

The Desktop version is identical to the online version; however, stories are a lot less likely to be inadvertently deleted on the desktop version. If you want to work on stories offline, or often forget to archive, you may prefer this option.

Desktop version of Twine

 

Story editor in Twine 2, desktop edition, with all your options for each passage. Yes, I named the story Desktop Version of Twine.

You start with an Untitled passage, whose title and content you can change. Depending on the version and story format you have set up, you write in a simple text-based coding language and connect the passages of your story using links written between brackets, like [[link]], which automatically generate a new passage. There are also ways to hide a link’s destination behind friendlier text. More advanced users can add logic-based elements, such as “if” statements, in order to create games.
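To make this concrete, here is a minimal sketch of what a few passages might look like, assuming the default Harlowe story format in Twine 2. The passage names, the variable, and the story itself are invented for illustration, and in the Twine editor each passage is its own box; the “::” lines below just stand in for the passage titles:

    :: Start
    You wake up in the stacks of the Main Library. You can [[look around]]
    or [[head straight for the exit->Ending]].

    :: look around
    (set: $browsed to true)
    The shelves seem to go on forever. [[Head for the exit->Ending]]

    :: Ending
    (if: $browsed is true)[You leave with three more books than you came with.]
    (else:)[Some mysteries are better left unexplored.]

The [[look around]] link shows its destination, the [[head straight for the exit->Ending]] link displays friendlier text while hiding the passage name, and the (if:) and (else:) macros are the sort of logic-based elements that turn a branching story into a game.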

You cannot install the desktop version on the computers in Scholarly Commons, so let’s look at the browser version. Twine will give you reminders, but it’s always important to know that if you clear your browser files while working on a Twine project, you will lose your story. However, you can archive your file as an HTML document to ensure that you can continue to access it. We recommend that you archive your files often.

Here’s a quick tutorial on how to archive your stories. Step 1: Click the “Home” icon.

Twine editor with link to home menu circled

 

Click “Archive”

Arrow pointing at archive in main Twine menu

This is also where you can start or import stories.

Save Your File

Save archive file in Twine for browser

Note: You should probably move the file from Downloads and paste it somewhere more stable, such as a flash drive or the cloud.

When you are ready to start writing again, you can import your story file, which will have been saved as an HTML document. Also, keep in mind that the browser version of Twine stores stories in the browser itself, so if you’re using a public or shared computer, your story will be accessible to whoever uses that browser.

And if you’re interested in interactive fiction or text-based games, there are a lot of platforms you might want to explore in addition to Twine, such as Inform 7 (http://inform7.com/), textadventures.co.uk (https://textadventures.co.uk/), and inklewriter (http://www.inklestudios.com/inklewriter/).

Let us know in the comments your thoughts on Twine and similar platforms as well as the role of games and interactive fiction in research!

Finding Digital Humanities Tools in 2017

Here at the Scholarly Commons we want to make sure our patrons know what options are out there for conducting and presenting their research. The digital humanities are becoming increasingly accepted and expected. In fact, you can even play an online game about creating a digital humanities center at a university. After a year of exploring a variety of digital humanities tools, one theme has emerged throughout: taking advantage of the capabilities of new technology to truly revolutionize scholarly communications is actually a really hard thing to do.  Please don’t lose sight of this.

Finding digital humanities tools can be quite challenging. To start, many of your options will be open source tools that you need a server and IT skills to run ($500+ per machine, or cloud hosting at a slightly lower or comparable cost over the long term). Even when the tools aren’t expensive, be prepared to find yourself in the command line or having to write code, even when a tool is advertised as beginner-friendly.

Mukurtu Help Page Screen Shot

I think this has been taken down because even they aren’t kidding themselves anymore.

There is also the issue of maintenance. While free and open source projects are where young computer nerds go to make a name for themselves, not every project is going to have the paid staff or the organized, dedicated community needed to keep it maintained over the years. What’s more, many digital humanities tool-building projects are initiatives from humanists who don’t know what’s possible or what they are doing, with wildly vacillating amounts of grant money available at any given time. This is exacerbated by rapid technological changes and by the fact that many projects were created without sustainability or digital preservation in mind from the get-go. And finally, for digital humanists, failure is not considered a rite of passage to the extent it is in Silicon Valley, which is part of why you sometimes find projects that no longer work still listed as viable resources.

Finding Digital Humanities Tools Part 1: DiRT and TAPoR

Yes, we have talked about DiRT here on Commons Knowledge. Although the Digital Research Tools directory is an extensive resource full of useful reviews, over time it has increasingly become a graveyard of failed digital humanities projects (and sometimes randomly switches to Spanish). The DiRT directory itself comes from Project Bamboo, “…a humanities cyberinfrastructure initiative funded by the Andrew W. Mellon Foundation between 2008 and 2012, in order to enhance arts and humanities research through the development of infrastructure and support for shared technology services” (Dombrowski, 2014). If you are confused about what that means, it’s okay, a lot of people were too, which led to many problems.

TAPoR 3, the Text Analysis Portal for Research, is DiRT’s Canadian counterpart, which also contains reviews of a variety of digital humanities tools despite keeping text analysis in its name. Like DiRT, it lists outdated sources.

Part 2: Data Journalism, digital versions of your favorite disciplines, digital pedagogy, and other related fields.

A lot of data journalism tools cross over with digital humanities; in fact, there are even joint Digital Humanities and Data Journalism conferences! You may have even noticed how the Knight Foundation is to data journalism what the Mellon Foundation is to digital humanities. Journalism Tools and its list version on Medium, from the Tow-Knight Center for Entrepreneurial Journalism at the CUNY Graduate School of Journalism, and the Resources page from Data Driven Journalism, an initiative from the European Journalism Centre partially funded by the Dutch government, are both good places to look for resources. As with DiRT and TAPoR, there are similar issues with staying up to date, and data journalism resources tend to list more proprietary tools.

Also, be sure to check out resources for “digital” + [insert humanities/social science discipline], such as digital archeology and digital history. And of course, another subset of digital humanities is digital pedagogy, which focuses on using technology to augment the educational experiences of both K-12 and university students. A lot of tools and techniques developed for digital pedagogy can also be used outside the classroom for research and presentation purposes. Even digital science resources can offer a lot of useful tools, if you are willing to scroll past the occasional plasmid-sharing platform. Just remember to be creative and try to think of other disciplines that tackle issues similar to yours in their research!

Part 3: There is a lot of out-of-date advice out there.

There are librarians who write overviews of digital humanities tools and don’t bother to test whether the tools still work or are still being updated. I am very aware of how hard things are to use and how quickly things change, and I’m not at all talking about the people who couldn’t keep their websites and curated lists updated. Rather, I’m talking about how the “Top Tools for Digital Humanities Research” article in the January/February 2017 issue of Computers in Libraries mentions Sophie, an interactive eBook creator (Herther, 2017). However, Sophie has not been updated since 2011, and the link for the fully open source version goes to “Watch King Kong 2 for Free.”

Screenshot of announcement for 2010 Sophie workshop at Scholarly Commons

Looks like we all missed the Scholarly Commons Sophie workshop by only 7 years.

The fact that no one caught that error shows either how slowly magazines edit or that no one else bothered to check. If no one seems to have created any projects with the software in the past three years, it’s probably best to assume development is no longer happening; though the best route is always to check for yourself.

Long term solutions:

Save your work in other formats for long term storage. Take your data management and digital preservation seriously. We have resources that can help you find the best options for saving your research.

If you are serious about digital humanities you should really consider learning to code. We have a lot of resources for teaching yourself these skills here at the Scholarly Commons, as well as a wide range of workshops during the school year. As far as coding languages go, HTML/CSS, JavaScript, and Python are probably the most widely used in the digital humanities, and the most helpful. Depending on how much time you put into this, learning to code can help you troubleshoot and customize your tools, as well as allow you to contribute to and help maintain the open source projects that you care about.

Works Cited:

100 tools for investigative journalists. (2016). Retrieved May 18, 2017, from https://medium.com/@Journalism2ls/75-tools-for-investigative-journalists-7df8b151db35

Center for Digital Scholarship Portal Mukurtu CMS.  (2017). Support. Retrieved May 11, 2017 from http://support.mukurtu.org/?b_id=633

DiRT Directory. (2015). Retrieved May 18, 2017 from http://dirtdirectory.org/

Digital tools for researchers. (2012, November 18). Retrieved May 31, 2017, from http://connectedresearchers.com/online-tools-for-researchers/

Dombrowski, Q. (2014). What Ever Happened to Project Bamboo? Literary and Linguistic Computing. https://doi.org/10.1093/llc/fqu026

Herther, N.K. (2017). Top Tools for Digital Humanities Research. Retrieved May 18, 2017, from http://www.infotoday.com/cilmag/jan17/Herther–Top-Tools-for-Digital-Humanities-Research.shtml

Journalism Tools. (2016). Retrieved May 18, 2017 from http://journalismtools.io/

Lord, G., Nieves, A.D., and Simons, J. (2015). dhQuest. http://dhquest.com/

Resources Data Driven Journalism. (2017). Retrieved May 18, 2017, from http://datadrivenjournalism.net/resources
TAPoR 3. (2015). Retrieved May 18, 2017 from http://tapor.ca/home

Visel, D. (2010). Upcoming Sophie Workshops. Retrieved May 18, 2017, from http://sophie2.org/trac/blog/upcomingsophieworkshops

Neatline 101: Getting Started

Here at Commons Knowledge we love easy-to-use interactive map creation software! We’ve compared and contrasted different tools, and talked about StoryMap JS and Shanti Interactive. The Scholarly Commons is a great place to get help on GIS projects, from ArcGIS StoryMaps and beyond. But if you want something where you can have both a map and a timeline, and if you are willing to spend money on your own server, definitely consider using Neatline.

Neatline is a plugin created by the Scholars’ Lab at the University of Virginia that lets you create interactive maps and timelines in Omeka exhibits. My personal favorite example is the demo site by Paul Mawyer, “‘I am it and it is I’: Lovecraft in Providence,” with map tiles from Stamen Design under a CC-BY 3.0 license.

Screenshot of Lovecraft Neatline exhibit

*As far as the location of Lovecraft’s most famous creation, let’s just say “Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn.”

Now one caveat — Neatline requires a server. I used Reclaim Hosting, which is straightforward and which I have also used for Scalar and Mukurtu. The cheapest plan available on Reclaim Hosting was $32 a year. Once I signed up for the website and domain name, I took advantage of one nice feature of Reclaim Hosting, which lets you one-click install the Omeka.org content management system (CMS). The Omeka CMS is a popular choice for digital humanities users; other popular content management systems include WordPress and Scalar.

One click install of Omeka through Reclaim Hosting

BUT WAIT, WHAT ABOUT OMEKA THROUGH SCHOLARLY COMMONS?

Here at the Scholarly Commons we can set up an Omeka.net site for you. You can find more information on setting up an Omeka.net site through the Scholarly Commons here. This is a great option for people who want to create a regular Omeka exhibit. However, Neatline is only available as a plugin for Omeka.org, which you need your own server to host. As far as I know, there is currently no Neatline plugin for Omeka.net, and I don’t think that will be happening anytime soon. Self-hosted Omeka runs on any LAMP server, which is what Reclaim provides. And some side advice from your very forgetful blogger: write down whatever username and password you make up when you set up your Omeka site; that will save you a lot of trouble later, especially considering how many accounts you end up with when you use a server to host a site.

Okay, I’m still interested, but what do I do once I have Omeka.org installed? 

So back to the demo. I used the instructions on the Neatline documentation page, which were good for defining a lot of the terms but not so good at explaining exactly what to do. I am focusing on the original Neatline plugin, but there are other Neatline plugins, like NeatlineText, depending on your needs; however, all plugins are installed in a similar way. You can follow the official instructions here at Installing Neatline.

But I have also provided my own steps below, because the official instructions just didn’t do it for me.

So first off, download the Neatline zip file.

Go to your control panel (cPanel in Reclaim Hosting) and click on “File Manager.”

File Manager circled on Reclaim Hosting

Sorry this looks so goofy, Windows snipping tool free form is only for those with a steady hand.

Navigate to the Plugins folder.

arrow points at plugins folder in file manager

Double click to open the folder. Click Upload Files.

more arrows pointing at tiny upload option in Plugins folder

If you’re using Reclaim Hosting, IGNORE THE INSTRUCTIONS: DO NOT UNZIP THE ZIP FILE ON YOUR COMPUTER. JUST PLOP THAT PUPPY RIGHT INTO YOUR PLUGINS FOLDER.

Upload the entire zip file

Plop it in!

Go back to the Plugins folder. Right-click the Neatline zip file and click Extract. Save the extracted files in Plugins. (If you’d rather do this from the command line, there’s a sketch of that after these steps.)

Extract Neatline files in File Manager

Sign into your Omeka site at [yourdomainname].[com/name/whatever]/admin if you aren’t already.

Omeka dashboard with arrows pointing at Plugins

Install Neatline for real.

Omeka Plugins page
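If all that clicking in the File Manager isn’t your thing, the same upload-and-extract step can usually be done over SSH instead. This is just a rough sketch, not part of the official instructions: the username, domain, and zip file name are placeholders, and the paths assume a typical cPanel setup where Omeka lives in ~/public_html, so adjust everything to match your own server.

    # On your own computer: copy the plugin zip (whatever your download is named) into Omeka's plugins folder
    scp Neatline.zip you@yourdomain.com:~/public_html/plugins/

    # On the server: unzip it in place, then remove the zip
    ssh you@yourdomain.com
    cd ~/public_html/plugins
    unzip Neatline.zip   # should leave a folder named Neatline; if it extracts loose files, make that folder yourself
    rm Neatline.zip

Either way, the goal is a plugins/Neatline folder sitting alongside Omeka’s other plugins before you head to the admin dashboard to activate it.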

Still confused or having trouble with setup?

Check out these tutorials as well!

Open Street Maps is great and all but what if I want to create a fancy historical map?

To create historical maps on Neatline you have two options, only one of which is included in the actual documentation for Neatline.

Officially, you are supposed to use GeoServer. GeoServer is an open source server application built in Java. Even if you have your own server, it has a lot more dependencies than what’s required to run Omeka and Neatline.

If you want one-click Neatline installation with GeoServer and have money to spend, you might want to check out AcuGIS Neatline Cloud Hosting, which is recommended in the Neatline documentation; its lowest-cost plan starts at $250 a year.

Unofficially, there is a tutorial for this available on Lincoln Mullen’s blog “The Backward Glance,” specifically his 2015 post “How to Use Neatline with Map Warper Instead of Geoserver.”

Let us know about the ways you incorporate geospatial data in your research!  And stay tuned for Neatline 102: Creating a simple exhibit!

Works Cited:

Extending Omeka with Plugins. (2016, July 5). Retrieved May 23, 2017, from http://history2016.doingdh.org/week-1-wednesday/extending-omeka-with-plugins/

Installing Neatline Neatline Documentation. (n.d.). Retrieved May 23, 2017, from http://docs.neatline.org/installing-neatline.html

Mawyer, Paul. (n.d.). “I am it and it is I”: Lovecraft in Providence. Retrieved May 23, 2017, from http://lovecraft.neatline.org/neatline-exhibits/show/lovecraft-in-providence/fullscreen

Mullen, Lincoln. (2015).  “How to Use Neatline with Map Warper Instead of Geoserver.” Retrieved May 23, 2017 from http://lincolnmullen.com/blog/how-to-use-neatline-with-map-warper-instead-of-geoserver/

Uploading Plugins to Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/uploading-plugins-to-omeka/195

Working with Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/working-with-omeka/194

Adventures at the Spring 2017 Library Hackathon

This year I participated in an event called HackCulture: A Hackathon for the Humanities, which was organized by the University Library. This interdisciplinary hackathon brought together participants and judges from a variety of fields.

This event is different than your average campus hackathon. For one, it’s about expanding humanities knowledge. In this event, teams of undergraduate and graduate students — typically affiliated with the iSchool in some way — spend a few weeks working on data-driven projects related to humanities research topics. This year, in celebration of the sesquicentennial of the University of Illinois at Urbana-Champaign, we looked at data about a variety of facets of university life provided by the University Archives.

This was a good experience. We got firsthand experience working with data, though my teammates and I struggled with OpenRefine and ended up coding data by hand. I now know way too much about the majors that are available at UIUC and how many have only come into existence in the last thirty years. It is always cool to see how much has changed and how much has stayed the same.

The other big challenge was that not everyone on the team had experience with design, and trying to convince folks not to fall into certain traps was tricky.

For an idea of how our group functioned, I outlined how we were feeling during the various checkpoints across the process.

Opening:

We had grand plans and great dreams and all kinds of data to work with. How young and naive we were.

Midpoint Check:

Laura was working on the Python script and sent a well-timed email about what was and wasn’t possible to get done in the time we were given. I find public speaking challenging, so that was not my favorite workshop. I would say it went alright.

Final:

We prevailed and presented something that worked in public. Laura wrote a great Python script and cleaned up a lot of the data. You can even find it here. One day in the near future it will be in IDEALS as well, where you can already check out projects from our fellow humanities hackers.

Key takeaways:

  • Choose your teammates wisely; try to pick a team of folks you’ve worked with in advance. Working with a mix of new and not-so-new people in a short time frame is hard.
  • Talk to your potential client base! This was definitely something we should have done more of.
  • Go to workshops and ask for help. I wish we had asked for more help.
  • Practicing your presentation in advance, as well as usability testing, is key. Yes, using the actual Usability Lab at the Scholarly Commons is ideal, but at the very least take time to make sure the instructions for using what you created are accurate. It’s amazing what steps you will leave off when you have used an app more than twice. Similarly, make sure that you can run your program and another program at the same time, because if you can’t, chances are you might crash someone’s browser when they use it.

Overall, if you get a chance to participate in a library hackathon, go for it! It’s a great way to do a cool project and get more experience working with data.