Open Access Week is upon us, and this year's theme, "Open for Whom?", has us investigating how open access benefits the student population here at the University of Illinois at Urbana-Champaign. According to the Open Access Week official blog, the theme is meant to start a conversation about "whose interests are being prioritized in the actions we take and the platforms we support" as we move toward open access. It raises an important question: are we supporting not only open access but also equitable participation in research communication?
To explore this question on our campus, we set out on Monday morning to see how much our students are paying for textbooks this semester and to ask how free, open textbooks would help them. How might access to open educational resources such as textbooks help our student body participate in research communication and in the academic community of the University?
At the four major libraries across campus, posters were set up for students to anonymously indicate how much money they have spent on textbooks this semester. They did this by placing a sticker dot on the poster that best fit their expense range, as pictured below. Alongside each poster was a whiteboard with an open question that students could answer: “How would free, open textbooks help you?”
By Tuesday afternoon these boards were filling up with answers. Since this was an open board for students to post their thoughts, we naturally received some humorous responses, including "More money for coffee," "I would cry less," and "More McChicken." Despite the occasional joke, however, the majority of the answers focused on saving money. Many students commented on the tremendous cost of higher education, citing not only the high prices of textbooks but also the additional costs of supplemental online workbooks from providers like Chegg, WebAssign, and McGraw-Hill Connect. Students agreed that textbooks, as resources for their education, should be free and available. One worrying result of these discussion boards was students sharing the ways in which they illegally access textbooks in lieu of purchasing them, with many posting links to illegitimate websites.
So, open for whom? Open Educational Resources (OER) offer a more affordable way for students and educators to access a quality educational experience. The Open Textbook Library describes open access textbooks as "funded, published, and licensed to be freely used, adapted, and distributed" and open for everyone's use. Higher education institutions are increasingly turning to OER instead of requiring traditional textbooks, and for students choosing between paying rent and purchasing textbooks, this can be life-changing.
Learn more about Open Educational Resources and how to find, evaluate, use and adapt OER materials for your needs.
What are your thoughts?
GitHub is a platform mostly used by software developers for collaborative work. You might be thinking “I’m not a software developer, what does this have to do with me?” Don’t go anywhere! In this post I explain what GitHub is and how it can be applied to collaborative writing for non-programmers. Who knows, GitHub might become your new best friend.
Picture this: you and some colleagues have similar research interests and want to collaborate on a paper. You have divided the writing so that each of you works on a different element of the paper. Using a cloud platform like Google Docs or Microsoft Word Online, you compile your work, but things start to get messy. Edits are made to the document and you are unsure who made them or why. Sections get deleted and you do not know how to retrieve your previous work. You have multiple files saved on your computer with names like "researchpaper1.docx", "researchpaper1 with edits.docx", and "researchpaper1 with new edits.docx". Managing your own work is hard enough, but when collaborators are added to the mix it becomes unmanageable. After a never-ending reply-all email chain and what felt like the longest meeting of all time, you and your colleagues are finally on the same page about the writing and editing of your paper. It just makes you think: there has got to be a better way to do this. Issues with collaboration are not exclusive to writing; they happen all the time in programming, which is why software developers came up with version control systems like Git and platforms built on them like GitHub.
GitHub allows developers to work together through branching and merging. Branching is the process by which the original file or source code is duplicated into a working copy. This copy contains everything already in the original file and can be worked on independently. Developers use these copies to write and test code before combining it with the original. Once their version of the code is ready, they integrate it back into the source code in a process called merging and "push" the result so the rest of the team can see it. Other members of the team are then alerted to these changes and can "pull" the merged code into their own working copies. Every version of the project is saved along with a description of what changed in that version; these saved snapshots are called commits, and they let you consult any previous version of your work. Now, this is a simplified explanation of what GitHub does, but my hope is that you now understand its applications, because what I am about to say next might blow your mind: GitHub is not just for programmers! You do not need to know any coding to work with GitHub. After all, code and written language are very similar.
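For the curious, the branch/commit/merge workflow described above can be sketched with a few Git commands on the command line (GitHub layers a web interface and hosted copies on top of this; the folder, file, and branch names below are made up for illustration):

```shell
# Start a new project folder and turn it into a Git repository.
mkdir paper && cd paper
git init -q
git config user.name "Ada Author"       # identity recorded with each commit
git config user.email "ada@example.edu"

# Save a first version of the paper: a "commit".
echo "# Our Research Paper" > paper.md
git add paper.md
git commit -q -m "Add initial draft"

# "Branch": make an independent copy to work in.
git checkout -q -b literature-review
echo "## Literature Review" >> paper.md
git commit -q -am "Draft literature review section"

# "Merge": fold the branch back into the original.
git checkout -q -                       # return to the original branch
git merge -q literature-review
git log --oneline                       # every saved version, with its message
```

Each `git commit` is one of the saved, described versions mentioned above, and `git log` is how you look back through them.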
Even if you cannot write a single line of code, GitHub can be incredibly useful for a variety of reasons:
1. It allows you to electronically backup your work for free.
2. All the different versions of your work are saved separately, allowing you to look back at previous edits.
3. It alerts all collaborators when a change is made and they can merge that change into their own versions of the text.
4. It allows you to write using plain text, something commonly requested by publishers.
Hopefully, if you've made it this far into the article, you're thinking, "This sounds great, let's get started!" For more information on using GitHub, you can consult the Library's guide on GitHub or follow the step-by-step instructions in GitHub's Hello World guide.
Here are some links to what others have said about using GitHub for non-programmers:
This is a guest blog by the amazing Zachary Maiorana, a GA in Scholarly Communication and Publishing.
Scholars and users have a vested interest in understanding the relative authority of publications they have either written or wish to cite to form the basis of their research. Although the literature search, a common topic in library instruction and research seminars, can take place on a huge variety of discovery tools, researchers often rely on Google Scholar as a supporting or central platform.
The massive popularity of Google Scholar is likely due to its simple interface, which carries the longtime prestige of Google's search engine; its enormous breadth, with a simple search yielding millions of results; its compatibility and parallels with other Google products such as Chrome and Google Books; and its citation metrics mechanism.
This last aspect of Google Scholar, which collects and reports data on the number of citations a given publication receives, represents the platform’s apparent ability to precisely calculate the research community’s interest in that publication. But, in the University Library’s work on the Illinois Experts (experts.illinois.edu) research and scholarship portal, we have encountered a number of circumstances in which Google Scholar has misrepresented U of I faculty members’ research.
Recent studies reveal that Google Scholar, despite its popularity and massive reach, is not only often inaccurate in its reporting of citation metrics and title attribution but also susceptible to deliberate manipulation. In 2010, Labbé described an experiment involving Ike Antkare (AKA "I can't care"), a fictitious researcher whose bibliography was manufactured from a mountain of self-referencing citations. After the purposely falsified publications went public, Google's bots did not differentiate Antkare's 100 generated articles from his real-life peers' research while crawling them. As a result, Google Scholar reported Antkare as one of the most cited researchers in the world, with a higher h-index* than Einstein.
In 2014, Spanish researchers conducted an experiment in which they created a fake scholar with several papers making hundreds of references to works written by the experimenters. After the papers were made public on a personal site, Google Scholar scraped the data and the real-life researchers’ profiles increased by 774 citations in total. In the hands of more nefarious users seeking to aggrandize their own careers or alter scientific opinion, such practices could result in large-scale academic fraud.
For libraries, Google's kitchen-sink approach to data collection further results in confusing and inaccurate attributions. In our work to supplement the automated collection of publication data for faculty profiles on Illinois Experts using CVs, publishers' sites, journal sites, databases, and Google Scholar, we frequently encounter researchers' names and works mischaracterized by Google's clumsy aggregation mechanisms. For example, Google Scholar's bots often read a scholar's name somewhere within a work that the scholar hasn't written (perhaps in the acknowledgements or in a citation) and simply attribute the work to them as author.
When it comes to people's careers and the sway of scientific opinion, such snowballing mistakes can be a recipe for large-scale misdirection. Though much research shows that, in general, Google Scholar currently represents highly cited research well, weaknesses persist. Blind trust in any dominant proprietary platform is unwise, and using Google Scholar requires particularly careful judgment.
Read more on Google Scholar’s quality and reliability:
Brown, Christopher C. 2017. “Google Scholar.” The Charleston Advisor 19 (2): 31–34. https://doi.org/10.5260/chara.19.2.31.
Halevi, Gali, Henk Moed, and Judit Bar-Ilan. 2017. “Suitability of Google Scholar as a Source of Scientific Information and as a Source of Data for Scientific Evaluation—Review of the Literature.” Journal of Informetrics 11 (3): 823–34. https://doi.org/10.1016/j.joi.2017.06.005.
Labbé, Cyril. 2016. “L’histoire d’Ike Antkare et de Ses Amis Fouille de Textes et Systèmes d’information Scientifique.” Document Numérique 19 (1): 9–37. https://doi.org/10.3166/dn.19.1.9-37.
Lopez-Cozar, Emilio Delgado, Nicolas Robinson-Garcia, and Daniel Torres-Salinas. 2012. “Manipulating Google Scholar Citations and Google Scholar Metrics: Simple, Easy and Tempting.” ArXiv:1212.0638 [Cs], December. http://arxiv.org/abs/1212.0638.
Walker, Lizzy A., and Michelle Armstrong. 2014. “‘I Cannot Tell What the Dickens His Name Is’: Name Disambiguation in Institutional Repositories.” Journal of Librarianship and Scholarly Communication 2 (2). https://doi.org/10.7710/2162-3309.1095.
*Read the library’s LibGuide on bibliometrics for an explanation of the h-index and other standard research metrics: https://guides.library.illinois.edu/c.php?g=621441&p=4328607
Although the push for open access is decades old at this point, it remains one of the most important initiatives in the world of scholarly communication and publishing. Free of barriers like the continuously rising costs of subscription-based serials, open access publishing allows researchers to explore, learn, build upon, and create new knowledge without inhibition. As Peter Suber says, “[Open access] benefits literally everyone, for the same reasons that research itself benefits literally everyone.”
Peter Suber is the Director of the Harvard Office for Scholarly Communication; Director of the Harvard Open Access Project; and, among many other titles, the “de facto leader of the worldwide open access movement.” In short, Suber is an expert when it comes to open access. Thankfully, he knows the rest of us might not have time to be.
Suber introduces his book Open Access (a part of the MIT Press Essential Knowledge Series) by writing, “I want busy people to read this book. […] My honest belief from experience in the trenches is that the largest obstacle to OA is misunderstanding. The largest cause of misunderstanding is the lack of familiarity, and the largest cause of unfamiliarity is preoccupation. Everyone is busy.”
What follows is an informative yet concise read on the broad field of open access. Suber goes into the motivation for open access, the obstacles preventing it, and what the future may hold. In clear language, Suber breaks down jargon and explains how open access navigates complex issues concerning copyright and payment. This is a great introductory read to an issue so prominent in academia.
NOTE: While we are discussing matters relating to the law, this post is not meant as legal advice.
Fans of Mukurtu CMS, a digital heritage platform, as well as intellectual property nerds, may already be familiar with Traditional Knowledge labels and licenses, but for everyone else, here's a quick introduction. Traditional Knowledge labels and licenses were created specifically for researchers and artists working with, or thinking of digitizing, materials created by indigenous groups. Although designed more for educational than legal value, these labels aim to let indigenous groups take back some control over their cultural heritage and to educate users about how to incorporate these digital heritage items in a more just and culturally sensitive way. The content that TK licenses and labels cover extends beyond digitized visual arts and design to recorded, written, and oral histories and stories. TK licenses and labels are also a standard to consider when working with any cultural heritage created by marginalized communities, and they provide an interesting way to recognize ownership and the proper use of work that is in the public domain. These labels and licenses are administered by Local Contexts, an organization directed by Jane Anderson, a professor at New York University, and Kim Christen, a professor at Washington State University. Local Contexts is dedicated to helping Native Americans and other indigenous groups gain recognition for, and control over, the way their intellectual property is used. The organization has received funding from sources including the National Endowment for the Humanities and the World Intellectual Property Organization.
Traditional Knowledge, or TK, labels and licenses are a way to incorporate protocols for cultural practices into your humanities data management and presentation strategies. This is especially relevant because indigenous cultural heritage items have traditionally been viewed by Western intellectual property law as part of the public domain. And, of course, there is a long and troubling history of dehumanizing treatment of Native Americans by American institutions, as well as a lack of formal recognition of their cultural practices, which is only starting to be addressed. Things have been slowly improving; for example, the Native American Graves Protection and Repatriation Act of 1990 was created specifically to address institutions, such as museums, that held and displayed human remains and related funerary objects without the permission of surviving relatives (McManamon, 2000). The World Intellectual Property Organization's Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) has begun to address and open up conversations about these issues in hopes of arriving at a more consistent legal framework for countries to work with; though, confusingly, most of what Traditional Knowledge labels and licenses apply to is considered "Traditional Cultural Expressions" by WIPO ("Frequently Asked Questions," n.d.).
The main difference between TK labels and licenses is that TK labels are an educational tool for suggested use with indigenous materials, whether or not they are legally owned by an indigenous community, while TK licenses are similar to Creative Commons licenses — though less recognized — and serve as a customizable supplement to traditional copyright law for materials owned by indigenous communities (“Does labeling change anything legally?,” n.d.).
The default types of TK licenses are: TK Education, TK Commercial, TK Attribution, TK Noncommercial.
TK Licenses so far (“TK Licenses,” n.d.)
Each license and label, along with a detailed description, can be found on the Local Contexts site, and information about each label is available in English, French, and Spanish.
The types of TK labels are: TK Family, TK Seasonal, TK Outreach, TK Verified, TK Attribution, TK Community Use Only, TK Secret/Sacred, TK Women General, TK Women Restricted, TK Men General, TK Men Restricted, TK Noncommercial, TK Commercial, TK Community Voice, TK Culturally Sensitive (“Traditional Knowledge (TK) Labels,” n.d.).
A TK Women Restricted Label.
“This material has specific gender restrictions on access. It is regarded as important secret and/or ceremonial material that has community-based laws in relation to who can access it. Given its nature it is only to be accessed and used by authorized [and initiated] women in the community. If you are an external third party user and you have accessed this material, you are requested to not download, copy, remix or otherwise circulate this material to others. This material is not freely available within the community and it therefore should not be considered freely available outside the community. This label asks you to think about whether you should be using this material and to respect different cultural values and expectations about circulation and use.” (“TK Women Restricted (TK WR),” n.d.)
Wait, so is this a case where a publicly-funded institution is allowed to restrict content from certain users by gender and other protected categories?
The short answer is that this is not what these labels and licenses are used for. Local Contexts, Mukurtu, and many of the projects and universities associated with the Traditional Knowledge labels and licensing movement are publicly funded. From what I've seen, the restrictions are optional, especially for those outside the community ("Does labeling change anything legally?," n.d.). The labels are more a way to point out when something is meant only for members of a certain gender, or to be viewed only at a certain time of year, than a way to actually restrict access; in other words, the gender-based labels, for example, rely on the kind of self-restraint in viewing materials that is often expected in archival spaces. That being said, some universities have a Memorandum of Understanding with an indigenous community, in which the university agrees to respect that community's cultural protocols. The extent to which this applies to digitized cultural heritage held in university archives, for example, is unclear, and most Memoranda of Understanding are not legally binding ("What is a Memorandum of Understanding or Memorandum of Agreement?," n.d.). Overall, this raises many interesting questions about balancing conflicting views of intellectual property, access, and the public domain.
Thank you to the Rare Book and Manuscript Library and Melissa Salrin in the iSchool for helping me with my questions about indigenous and religious materials in archives and special collections at public institutions, you are the best!
Here at the Scholarly Commons, we want to make sure our patrons know what options are out there for conducting and presenting their research. The digital humanities are becoming increasingly accepted and expected; in fact, you can even play an online game about creating a digital humanities center at a university. After a year of exploring a variety of digital humanities tools, one theme has emerged: taking advantage of the capabilities of new technology to truly revolutionize scholarly communications is actually a really hard thing to do. Please don't lose sight of this.
Finding digital humanities tools can be quite challenging. To start, many of your options will be open source tools that require a server and IT skills to run ($500+ per machine, or slightly less to comparable long-term costs for cloud hosting). Even when tools aren't expensive, be prepared to find yourself in the command line or having to write code, even when a tool is advertised as beginner-friendly.
I think this has been taken down because even they aren’t kidding themselves anymore.
There is also the issue of maintenance. While free and open source projects are where young computer nerds go to make a name for themselves, not every project is going to have the paid staff or organized and dedicated community to keep the project maintained over the years. What’s more, many digital humanities tool-building projects are often initiatives from humanists who don’t know what’s possible or what they are doing, with wildly vacillating amounts of grant money available at any given time. This is exacerbated by rapid technological changes, or the fact that many projects were created without sustainability or digital preservation in mind from the get-go. And finally, for digital humanists, failure is not considered a rite of passage to the extent it is in Silicon Valley, which is part of why sometimes you find projects that no longer work still listed as viable resources.
Finding Digital Humanities Tools Part 1: DiRT and TAPoR
Yes, we have talked about DiRT here on Commons Knowledge before. Although the Digital Research Tools directory is an extensive resource full of useful reviews, over time it has increasingly become a graveyard of failed digital humanities projects (and it sometimes randomly switches to Spanish). The DiRT directory itself comes out of Project Bamboo, "… a humanities cyberinfrastructure initiative funded by the Andrew W. Mellon Foundation between 2008 and 2012, in order to enhance arts and humanities research through the development of infrastructure and support for shared technology services" (Dombrowski, 2014). If you are confused about what that means, it's okay; a lot of people were too, which led to many problems.
TAPoR 3, the Text Analysis Portal for Research, is DiRT's Canadian counterpart, and it also contains reviews of a variety of digital humanities tools despite keeping "text analysis" in the name. Like DiRT, it lists outdated resources.
Part 2: Data Journalism, digital versions of your favorite disciplines, digital pedagogy, and other related fields.
A lot of data journalism tools cross over with digital humanities; in fact, there are even joint digital humanities and data journalism conferences! You may have noticed that the Knight Foundation is to data journalism what the Mellon Foundation is to digital humanities. Journalism Tools (and its list version on Medium), from the Tow-Knight Center for Entrepreneurial Journalism at the CUNY Graduate School of Journalism, and the Resources page from Data Driven Journalism, an initiative of the European Journalism Centre partially funded by the Dutch government, are both good places to look for resources. As with DiRT and TAPoR, there are issues with staying up to date, and data journalism resources tend to list more proprietary tools.
Also, be sure to check out resources for "digital" + [insert humanities/social science discipline], such as digital archeology and digital history. Another subset of digital humanities is digital pedagogy, which focuses on using technology to augment the educational experiences of both K-12 and university students; a lot of tools and techniques developed for digital pedagogy can also be used outside the classroom for research and presentation purposes. Even digital science resources can offer a lot of useful tools, if you are willing to scroll past the occasional plasmid-sharing platform. Just remember to be creative and think of other disciplines whose research tackles issues similar to yours!
Part 3: There is a lot of out-of-date advice out there.
There are librarians who write overviews of digital humanities tools and don't bother to test whether they still work or are still updated. I am very aware of how hard these tools are to use and how quickly things change, and I'm not at all talking about the people who couldn't keep their websites and curated lists updated. Rather, I'm talking about how the "Top Tools for Digital Humanities Research" article in the January/February 2017 issue of Computers in Libraries mentions Sophie, an interactive eBook creator (Herther, 2017). Sophie has not been updated since 2011, and the link for the fully open source version goes to "Watch King Kong 2 for Free".
Looks like we all missed the Scholarly Commons Sophie workshop by only 7 years.
The fact that no one caught that error shows either how slowly magazines edit or that no one else bothered to check. If no one seems to have created any projects with a piece of software in the past three years, it's probably best to assume the project is no longer active; though the best route is always to check for yourself.
Long-term solutions:
Save your work in other formats for long term storage. Take your data management and digital preservation seriously. We have resources that can help you find the best options for saving your research.
100 tools for investigative journalists. (2016). Retrieved May 18, 2017, from https://medium.com/@Journalism2ls/75-tools-for-investigative-journalists-7df8b151db35
Center for Digital Scholarship Portal Mukurtu CMS. (2017). Support. Retrieved May 11, 2017 from http://support.mukurtu.org/?b_id=633
DiRT Directory. (2015). Retrieved May 18, 2017 from http://dirtdirectory.org/
Dombrowski, Q. (2014). What Ever Happened to Project Bamboo? Literary and Linguistic Computing. https://doi.org/10.1093/llc/fqu026
Herther, N.K. (2017). Top Tools for Digital Humanities Research. Retrieved May 18, 2017, from http://www.infotoday.com/cilmag/jan17/Herther–Top-Tools-for-Digital-Humanities-Research.shtml
Journalism Tools. (2016). Retrieved May 18, 2017 from http://journalismtools.io/
Lord, G., Nieves, A.D., and Simons, J. (2015). dhQuest. http://dhquest.com/
Visel, D. (2010). Upcoming Sophie Workshops. Retrieved May 18, 2017, from http://sophie2.org/trac/blog/upcomingsophieworkshops
Open access (OA) works are, by definition, freely available on the internet. But in order for these works to be useful, we need an effective way to discover them. Library-based discovery systems generally gather information about a work’s “version of record,” that is, the article as published in a scholarly journal. And as most researchers know, most journals are subscription-based, which can serve as a barrier to access.
The University of Illinois Libraries house one of the largest library collections in the United States, but from time to time scholars may still come across electronic resources to which the Library does not have direct access. Colloquially, this is sometimes referred to as "hitting a paywall." While the Library's Interlibrary Loan service provides an excellent way to obtain materials outside of the Library's collection, many "paywalled" articles are also available in OA versions. The problem is that discovery systems typically aren't designed to get a user from a paywalled version of an article to an OA version.
A new browser plug-in from Impactstory called Unpaywall aims to address this issue by pointing users to OA versions of paywalled articles, when available. When a user arrives at a webpage for an article, Unpaywall attempts to find an OA version of the article by searching through open repositories. If the plug-in succeeds in finding an open version, this is indicated with an opened lock icon on the side of the screen. Clicking on this icon takes you to a copy of the article.
Unpaywall can also distinguish between articles that are Gold OA (available from the publisher under an OA license) and Green OA (available on a preprint server or in an institutional repository, like IDEALS). This information is indicated by the color of the opened lock icon (note that this option is not turned on by default).
Unpaywall claims to succeed in locating open access versions of 65-85% of articles (when an open version is not found, this is indicated with a grey closed lock icon), though librarian blogger liddylib reports a 53% success rate when trying it out on Altmetric's Top 100 Articles of 2016. Nevertheless, Unpaywall seems dedicated to improving its software: Jason Priem, one of the program's developers, responded to liddylib's blog post, reporting that they had improved the product to locate some Gold OA articles that had originally been missed. Unpaywall also encourages users to report bugs.
Under the hood, Unpaywall locates full-text OA articles using data from oaDOI, another Impactstory project. oaDOI indexes upwards of 90 million articles, relying on data sources like the Directory of Open Access Journals, CrossRef, DataCite, and BASE. It is important to note that the OA articles to which Unpaywall directs users have all been made available legally. This distinguishes Unpaywall from projects like Sci-Hub, which provide PDFs that are often made available through less credible means.
Unpaywall is a brand new product, and so it is to be expected that some hiccups will occur. Nevertheless, it seems like a promising tool for helping more people get access to research by making open access resources more discoverable.
The Scholarly Commons is a great place to write the next great American novel; in fact, I'm surprised it has not happened yet (no pressure, dear patrons; we understand that you have a lot on your plates). We're open Monday through Friday, 9-6, and offer a well-lit, fairly quiet, and overall ideal working space, with Espresso Royale and the Writing Center nearby. But actually getting that writing done? That's the real challenge. Luckily, we have suggestions for tools and software you can use to keep writing and stay on track this semester!
Writing Your First Draft:
Yes, MS Word can be accessed for free by University students through the Web Store, and you can set it up to better address your research needs with features like the Zotero and Mendeley plugins to incorporate your references. And don't forget that you can go to File > Options > Proofing > Writing Style, select Grammar and Style, and open Settings to choose what Spellcheck looks for, so that passive voice gets underlined. However, believe it or not, there are word processors other than MS Word that are better for organizing and creating large writing projects, such as novels, theses, or even plays!
Scrivener is a word processor created with novelists in mind that lets you organize your research and notes while you are writing. With an education discount, a license for Scrivener costs $38.25. Scrivener is very popular and highly recommended by two of the GAs here at Scholarly Commons (you can email Claire Berman with any questions you may have about the software at cberman2 [at] illinois.edu). To really get started, check out our online copies of Scrivener: An Absolute Beginner’s Guide and Scrivener for Dummies!
Unfortunately, Mellel is only available on Mac. An educational license for the software costs $29. To some extent Mellel is similar in style and price to Pages for Mac, but also shares similarities with MS Word for Mac. However, this word processor offers more options for customizing your word processing experience than Pages or MS Word. It also provides more options for outlining your work and dividing sections in a way that even MS Word Notebook version does not, which is great if you have a large written work with many sections, such as a novel or a thesis! Mellel also partners with the citation managers Bookends and Sente.
Ulysses is a simple and straightforward word processor for Mac, though you do have to write in Markdown without a WYSIWYG editor. It costs $44.99 for Mac and $24.99 for iOS. However, it has many great features for writers, such as built-in word-count goals for sections of a paper, and Markdown makes outlining work simple. We have discussed the value and importance of Markdown elsewhere on the blog before, specifically in our posts Digital Preservation and the Power of Markdown and Getting Started with Markdown, and, of course, we want to remind all of our lovely readers to consider doing their writing in Markdown. Learning Markdown can open up writing and digital publishing opportunities across the web (for example, Programming Historian tutorials are written in Markdown). Plus, writing in Markdown converts easily for simple web design without the headache of having to write in HTML.
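To see why Markdown converts so painlessly to HTML, the mapping can be sketched in a few lines of Python. This is a toy converter for illustration only — real tools such as Pandoc handle the full syntax — and the function name is just one we made up for the example:

```python
import re

def markdown_to_html(line: str) -> str:
    """Convert one line of (very) simple Markdown to HTML.

    Handles only headings, bold, and italics; a toy sketch of the idea,
    not a full Markdown implementation.
    """
    # Headings: leading '#' characters become <h1>..<h6> tags.
    heading = re.match(r"(#{1,6})\s+(.*)", line)
    if heading:
        level = len(heading.group(1))
        return f"<h{level}>{heading.group(2)}</h{level}>"
    # Bold first: **text** becomes <strong>text</strong>.
    line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
    # Then italics: *text* becomes <em>text</em>.
    line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
    return f"<p>{line}</p>"

print(markdown_to_html("## Why Markdown?"))   # <h2>Why Markdown?</h2>
print(markdown_to_html("It is **simple**."))  # <p>It is <strong>simple</strong>.</p>
```

The punchline: the plain-text file you draft in stays readable on its own, and the HTML is generated for free when you need it.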
Maybe you don’t want to buy a whole new word processor. That’s fine! Here are some tools that can help create the “write” environment to get work done:
Freedom: costs $2.50 a month, so Freedom is not free, indeed. This is an app that allows you to block websites, and even the internet entirely, available for Mac, Windows, and iOS devices. It also has a lock feature that prevents you from changing what is blocked for a set period of time.
RescueTime : another app option. Taking a slightly different approach to the rest here, the lite version of this app helps you track how you use your time and what apps and websites you use the most so that you can have a better sense of what you are doing instead of writing. The premium version, which costs $54 a year, allows you to block distracting websites.
SelfControl: a Mac option, but open source, with community-built Linux and PC versions, and, most importantly, it’s free! This app allows you to block websites, based on their server, for a set period of time, during which there is basically NOTHING you can do on your computer to access those sites. So choose which sites to block, and the time limit, wisely.
Hemingway: named after Ernest Hemingway, this text editor is supposed to help you adopt his style of writing, “bold and clear.” When you paste your text into the free web version, the app gives you the text’s reading level and points out instances of awkward grammar, unnecessary or complicated words and adverbs, and sentences that are too long or too complicated. There’s a desktop version available for $20, though I honestly don’t think it’s worth the money, even if it does give you another simple space on your computer to write and get feedback.
Grammarly: an alternative to MS Word spell check, with a free version you can add to your browser. As a browser add-in, it automatically checks for critical spelling and grammar mistakes (catching advanced ones costs a monthly fee) everywhere you write — except in some of the places you’d most want the extra spell check: it doesn’t work in Google Docs and can be wonky with WordPress. You can always copy and paste into the Grammarly window, but at that point you’re probably better off running spell check in MS Word. There are also only two versions of English available, American and British (take that, Australia!). If you are trying to learn English and want instantaneous feedback while writing on the internet, are studying for high school standardized tests, or are perhaps a frequent YouTube commenter in need of a quick check before posting, then Grammarly is for you. For most people at Scholarly Commons, this is a plugin they can skip, though I can’t speak for the paid version, which is supposed to be a little bit better. And heads up: if you uninstall the app, they try to guilt-trip you.
SpellCheckPlus: it’s BonPatron in English! Brought to you by Nadaclair Language Technologies, this web-based text editor goes beyond MS Word’s spell check to identify grammar errors and ways to make your writing sound more natural to a native (Canadian) English speaker. There is a paid version, but as long as you don’t paste in more than the allotted 250 words of text at one time, you will be fine using the free version.
Let us know what you think and any tools we may have missed! Happy writing!
Slideshows are serious business, and bad slides can kill. Many books, including the one I will review today, discuss the role that Morton Thiokol’s poorly designed and overly complicated slides about the Challenger O-rings played in why the shuttle was allowed to launch despite its flaws. PowerPoint has become the default presentation style in a wide range of fields, whether or not that is a good idea (see Rebecca Schuman’s 2014 Slate article “PowerPointLess”). With all that in mind, in order to learn a bit more about how to present, I read The Craft of Scientific Presentations by Michael Alley, an engineering communications professor at Penn State.
To start, what did Lise Meitner, Barbara McClintock, and Rosalind Franklin have in common? According to Michael Alley, their weak science communication skills meant they were not taken as seriously, even though they had great ideas and did great research… Yes, the author discusses how Niels Bohr was a very weak speaker (which only somewhat had to do with English being his third language), but mostly in the context of his Nobel Prize speech or his attempts to talk to Winston Churchill — in other words, the kinds of opportunities that many great women in science never got… Let’s just say the decontextualized history-of-science factoids weaken some of the author’s arguments…
This is not to say that science communication is not important, but it is worth keeping in mind the distinction the book draws between the things presentation skills can help you with and the things they cannot.
For any presentation: know your topic well, be very prepared, and actually practice giving your talk more than you do anything else (such as making slides). But like any skill, the key is practice, practice, practice!
For the most part, this book is a great review of the common-sense advice that’s easy to forget when you are standing in front of a large audience with everyone looking at you expectantly. The author also offers a lot of great critiques of the default presentations you can churn out with PowerPoint, and of PowerPoint itself. PowerPoint has the advantage of being the most common type of slideshow presentation software, though alternatives exist and have been discussed in depth elsewhere on the blog and in university resources. Alley introduces the Assertion-Evidence approach, in which you reach people by presenting your research almost like memes: images with a one-sentence statement overlaid. Specifically, you use one-sentence summaries and replace bullet points with visualizations. You also have to keep Murphy’s Law in mind: a slide color or a standard font not being supported can throw off a presentation. Since Murphy’s Law does not disappear when you build a presentation around visuals, especially custom-made images and video, you may need more preparation time for this style of presentation.
Creating visualizations and one-sentence summaries, and practicing your speech so you’re prepared when these things don’t work, is a great strategy for preparing a research talk. One interesting thing to think about: if Alley admits that less-tested formats like TED (Technology-Entertainment-Design) talks and pecha kucha also make for effective presentations, how much of this method’s success has to do with people caring about and putting time into their presentations, rather than with a change in presentation style?
Overall, this book was a good review of public speaking advice specifically targeted toward a science and engineering audience, and it will hopefully get people to take more time with, and think more about, their presentations.
Hope this helps, and good luck with your research presentations!